25 Comments
Mar 7, 2023 · Liked by Jon Stokes

A couple of suggestions for your first ("How it works") slide. The first step is to collect and curate training data; you don't just snap your fingers and have it magically appear. In the second (now third) stage, fine-tuning, you left out a crucial input: human judgement, which you need to source and vet. For inference, the inputs are the tuned model weights and some sort of prompt (i.e., compared to the earlier stages, inference doesn't work in a vacuum).
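To make the stages and their inputs concrete, here's a minimal sketch; every name and value is hypothetical, a stand-in rather than any real pipeline:

```python
# Toy sketch of the four stages and the inputs each one needs.
# All names are hypothetical; the bodies are stand-ins, not a real training loop.

def collect_and_curate():
    # Stage 1: training data has to be gathered and cleaned first.
    return ["curated training documents"]

def pretrain(corpus):
    # Stage 2: the base model is a function of the curated corpus.
    return {"base_weights": len(corpus)}

def fine_tune(base_weights, human_judgements):
    # Stage 3: fine-tuning consumes sourced-and-vetted human judgement.
    return {**base_weights, "tuned_on": len(human_judgements)}

def infer(tuned_weights, prompt):
    # Stage 4: inference takes the tuned weights plus a prompt.
    return f"completion of {prompt!r} using {tuned_weights}"

weights = pretrain(collect_and_curate())
tuned = fine_tune(weights, ["vetted rater preference"])
print(infer(tuned, "Hello"))
```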

Really good article. I agree with almost all of the reasons why it would be hard, though I don't quite reach the same conclusion about where we end up: there are countermeasures, there are specific policies that reduce risk from AI during the phase when it's accessible only to large companies rather than to everyone, and it's possible to scale up over time to centralized regulation.

I like your last analogy.

If you were deeply convinced there was a >5% chance that, within 10 years, any computer could with a download become the source of a new unique pandemic virus, I assume you'd be on board with centralizing and limiting access? (Or is that sacred, and there is no amount of believed risk that would warrant control of compute?)

Your world model seems good and your arguments are good, so I'd be interested to discuss where you get off the boat on the object-level claim that "in the current trajectory, AI systems will proliferate and become more and more capable, while our capacity for control will lag behind, until a takeover happens and humanity is forever disempowered." You can find a list of specific threat-scenario summaries made by the DeepMind team here, if you want some specifics of what could go wrong: https://www.lesswrong.com/posts/wnnkD6P2k2TfHnNmt/threat-model-literature-review

If, when reading them, you find yourself with compelling counterarguments for why that can't happen, and why it's still better to accelerate than to try to minimize those risks, I'd love to hear them!

Stopping the training phase would be both necessary and sufficient. It is necessary because the other approaches are impossible, for the reasons you lay out and probably many others. It is sufficient because, despite all the current hype about the capabilities of current models, most of that hype comes with a footnote saying "Well, to be fair, the current models have LOTS of problems and are not going to take over. BUT LOOK HOW FAST THINGS ARE PROGRESSING! JUST YOU WAIT!" Much of the fear-mongering is forward-looking.

Is halting training possible? I think throttling access to giant clusters of GPUs/TPUs, if it were possible, would be more effective than you suggest. Gigantic deep learning models are not very amenable to distributed training, so unless some breakthrough occurs, these bigger and bigger models will require ever-larger compute clusters to train, as the rough back-of-envelope below illustrates.
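A sketch of that arithmetic, using the common ~6 × params × tokens estimate of training FLOPs; the parameter count, token count, and utilization figures are illustrative assumptions, not numbers for any specific model:

```python
# Back-of-envelope for why frontier-scale training wants one giant cluster.
# Uses the common ~6 * params * tokens estimate of training FLOPs. All
# numbers are illustrative assumptions, not figures for any specific model.

params = 175e9        # assumed parameter count (GPT-3 scale)
tokens = 300e9        # assumed training tokens
train_flops = 6 * params * tokens        # ~3.2e23 FLOPs

gpu_peak = 312e12     # A100 peak BF16 FLOP/s
utilization = 0.4     # generous sustained utilization

def training_days(num_gpus: int) -> float:
    sustained = num_gpus * gpu_peak * utilization
    return train_flops / sustained / 86_400

print(f"{training_days(8):,.0f} days on 8 GPUs")          # ~3,600 days (~10 years)
print(f"{training_days(1_024):,.0f} days on 1,024 GPUs")  # ~29 days
```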

It seems like making sure inference is slow and expensive could be a useful stopgap for promoting human oversight of AI, especially since it actually is slow and expensive. (You can run Stable Diffusion on your MacBook, but there's an opportunity cost if you want to use your laptop for other things.)

If you're worried about GPT-4, it might help to know that it runs at human speeds; sometimes much slower, since we can copy and paste. Currently it's about the speed of a 300 baud modem, which reminds me of connecting to a BBS back in the '80s.
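A quick sanity check on that comparison; the token rate below is an assumption (streaming speeds varied), and a token averages roughly 4 characters of English:

```python
# Quick sanity check on the 300-baud comparison. The token rate is an
# assumption (GPT-4's streaming speed varied); ~4 chars/token is a rough
# average for English text.

baud = 300                    # bits per second
bits_per_char = 10            # 8 data bits + start/stop framing
modem_chars_per_sec = baud / bits_per_char        # 30 chars/s

tokens_per_sec = 8            # assumed streaming rate
chars_per_token = 4           # rough English average
model_chars_per_sec = tokens_per_sec * chars_per_token  # 32 chars/s

print(modem_chars_per_sec, model_chars_per_sec)   # same ballpark
```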

A meaningful way to “pause” might be to make sure it doesn’t get to superhuman speeds. This should be a lot easier to measure and regulate than intelligence.

Discussed briefly here: https://skybrian.substack.com/p/ai-chats-are-turn-based-games

> You could surely keep most people from accessing model files by getting a few large tech platforms to agree to treat those files as digital contraband — basically, the same treatment they give porn, spam, and malware.

Big tech companies already do this with digital piracy but haven't come close to eliminating it. Porn in general is a bad example, because while it's excluded from social media &c, you're still allowed to put it on your own website or send it through private communications. To take a more extreme case: child porn is universally treated as "digital contraband" by corporations, and that hasn't eliminated it from the internet either (judging by the frequency of prosecutions for possessing it). What makes you think tech companies would have more success trying to ban AI models the same way?
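For what it's worth, the way platforms typically enforce this is by matching file hashes against a shared blocklist, which is exactly why it's evadable. A toy sketch, with a made-up blocklist (real systems use perceptual hashes like PhotoDNA precisely because exact hashes break on any re-encoding, and a fine-tuned or re-quantized model file would hash differently too):

```python
# Toy sketch of hash-based contraband blocking. The blocklist entry is the
# well-known SHA-256 of b"test"; any real blocklist is far larger.

import hashlib

BLOCKLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def is_contraband(file_bytes: bytes) -> bool:
    # Exact-match hashing: changing a single byte of the file evades it.
    return hashlib.sha256(file_bytes).hexdigest() in BLOCKLIST

print(is_contraband(b"test"))   # True
print(is_contraband(b"test!"))  # False: one byte changed, hash no longer matches
```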

The shortest answer to why AI development won't be shut down in the West is that it would automatically give China a leg up in the game.

Whatever penalty the US is trying to impose by restricting high-end chips from reaching China would be rendered moot. China could just use whatever custom silicon it can fab, and it would beat a lobotomized West in AI.

Socrates from https://www.armstrongeconomics.com/about

> Armstrong Economics offers unique perspective intended to educate the general public and organizations on the underlying trends within the global economic and political environment. Our mission is to research historical cyclical patterns and market behavior in timing, price and crisis to better understand and identify potential future trends, using an extensive monetary database and advanced proprietary models.

I don't pay for their analysis or invest based on it. The freebies are enough for me!

Mar 8, 2023 · edited Mar 8, 2023

Actually, Stable Diffusion 1.0 cost only about $600k to train, and Emad paid the training costs himself:

https://twitter.com/jackclarkSF/status/1563957173062758401
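For scale: assuming the roughly 150,000 A100-hours cited around that time, the figure checks out at plausible cloud rates. Both numbers in this sketch are assumptions for illustration:

```python
# Rough consistency check on the ~$600k figure. Both the GPU-hour count and
# the hourly rate are assumptions for illustration, not confirmed invoices.

a100_hours = 150_000        # assumed total A100-hours for SD 1.0
usd_per_gpu_hour = 4.0      # assumed cloud A100 rate at the time

print(f"${a100_hours * usd_per_gpu_hour:,.0f}")  # $600,000
```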

Talk of a pause or regulation strikes me as an almost self-evidently destructive waste of time, for substantially the reasons here: https://open.substack.com/pub/joecanimal/p/tldr-existential-ai-risk-research?utm_source=share&utm_medium=android

The one incorrect idea here is that US government agencies would be advancing state-of-the-art models. This seems plausible in the medium and long run, but completely crazy in the short run. Given all the difficulties US federal agencies have faced with procurement so far, the idea that they'd be doing anything as cutting-edge as GPT-4 is just nonsense for the next few years.

I don't buy that it's so difficult to stop AI; Erik Hoel, Katja Grace, and David Chapman all disagree. In particular, the training phase is very vulnerable to regulation (no, randos on the internet will not replicate OpenAI), and it takes top talent to move the field forward; it's not something the rest of the world can do.

Nobody is attempting human genetic engineering even though it is within reach, and when someone in China tried it, he was jailed. The same can happen with AI.

Complete lunatic. I thought you actually knew what ML is capable of and how much distribution shift takes place; you are insane to think these models are a problem, when people are extremely easy to fool with crude methods already. But sure, good point, make more talking points about regulating large functions that can generate text and images. It's likely that you are actively fear-mongering, exploiting and manufacturing fear so people pay attention to you and your Substack. Just code, actually work with this stuff, and download a model.

I can't believe I thought you were fairly level-headed. Didn't you see over the past three years how a crappy virus, leaked or from a bat, caused insane amounts of havoc? You think junk text and content will be a problem? No, it won't. All information channels and social media sites log, track, and trace everything; anyone trying to run a campaign can be caught easily. The main dangers that actually matter are viruses that can spring out of a few animals or a potential lab and cause widespread chaos like we've just had. We got lucky it wasn't wiping out more people and didn't evolve into something too dangerous.

And here you are fear-mongering about AI. We've seen this a million times. Yes, more junk content will appear, but it's detectable and we'll be fine. Please start putting the flashlight on real dangers instead of farming for attention with AI dangers that haven't panned out and are unlikely to, as we (MURICA' 🦅🇺🇸) have agencies and corporations with all the data needed to mitigate any boogeyman that wants to incite some magatards over whatever conspiracy. The real dangers are cyber; the information and narrative dangers are way overstated, since you can still handle a bunch of misinformed sheep, but invisible viruses are a much tougher enemy.

Have faith in our defense and intelligence agencies to handle and use whatever strategies are needed to dismantle harmful narratives. You are simply wasting people's attention on a non-issue. You can't stop linear algebra. Relax; people will learn quickly that most content is fake and we'll be fine. Have some faith. Take a break from the speed, go to a bar, and meet some normies.
