25 Comments
Mar 7, 2023 · Liked by Jon Stokes

A couple of suggestions for your first ("How it works") slide. The first step is to collect and curate training data. You don't just snap your fingers and have it magically appear. In the second (now third) stage (fine-tuning), you left out a crucial input: human judgement, which you need to source and vet. For inference, the inputs are the tuned model weights and some sort of prompt (i.e., compared to the earlier stages, inference doesn't work in a vacuum).
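For what it's worth, here's the shape I have in mind as a toy sketch, with those missing inputs drawn in explicitly (every name here is an illustrative placeholder, not anyone's real training API):

```python
# Toy sketch of the pipeline stages and the inputs each one needs.
# Everything here is a placeholder, not any real training API.

def collect_and_curate(raw_sources):
    """Stage 0: gather and curate training data (it doesn't appear by magic)."""
    return [doc.strip().lower() for doc in raw_sources if doc.strip()]

def pretrain(corpus):
    """Stage 1: pretraining consumes the curated corpus and yields base 'weights'."""
    return {"base_weights": len(corpus)}   # stand-in for an expensive training run

def fine_tune(base_model, human_feedback):
    """Stage 2: fine-tuning requires sourced-and-vetted human judgement as an input."""
    return {**base_model, "preferences": human_feedback}

def infer(tuned_model, prompt):
    """Stage 3: inference takes the tuned weights *and* a prompt; it doesn't run in a vacuum."""
    return f"response to {prompt!r} from model {tuned_model}"

corpus = collect_and_curate(["Some raw text.", "   ", "More raw text."])
tuned = fine_tune(pretrain(corpus), human_feedback=["rater A prefers answer 1"])
print(infer(tuned, "What are the stages?"))
```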


Really good article. I agree with almost all of the reasons why it would be hard (though I don't reach quite the same conclusion about where we end up, because there are countermeasures, and there are specific policies that reduce risk from AI during the phase where it's accessible only to large companies rather than to everyone, and it's possible to scale up over time to centralized regulation).

I like your last analogy.

If you were deeply convinced there was a >5% chance that in 10 years any computer could, with a download, become a new unique pandemic virus, I assume you'd be on board with centralizing and limiting access? (Or is that sacred, and there is no amount of believed risk that would warrant control of compute?)

Your world model seems good and your arguments are good, so I'd be interested to discuss where you get off the boat in the object-level conversation of "In the current trajectory, AI systems will proliferate and become more and more capable, while our capacity for control lags behind, until a takeover happens and humanity is forever disempowered." You can find a list of specific threat-scenario summaries made by the DeepMind team here if you want some specifics of what could go wrong: https://www.lesswrong.com/posts/wnnkD6P2k2TfHnNmt/threat-model-literature-review

If, when reading them, you find yourself with compelling counterarguments for why that can't happen, and why it's still better to accelerate than to try to minimize those risks, I'd love to hear them!


Stopping the training phase would be both necessary and sufficient. It is necessary because the other approaches are impossible, for the reasons you lay out and probably many others. It is sufficient because, despite all of the current hype about the capabilities of current models, most of that hype carries a footnote saying "Well, to be fair, the current models have LOTS of problems and are not going to take over. BUT LOOK HOW FAST THINGS ARE PROGRESSING! JUST YOU WAIT!" So much of the fear-mongering is forward-looking.

Is halting training possible? Well, I think throttling access to giant clusters of GPUs/TPUs, if it were possible, would be more effective than you think. Gigantic deep learning models are not very amenable to widely distributed training, so unless some breakthrough occurs, these bigger and bigger models are increasingly likely to require ever-larger compute clusters to train.
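For a sense of scale, here's a back-of-the-envelope estimate using the common training-FLOPs ≈ 6 × parameters × tokens approximation and GPT-3-scale figures from the original paper; the utilization assumption is mine and purely illustrative:

```python
# Rough estimate of the compute a GPT-3-scale training run needs, using the
# common approximation: training FLOPs ~ 6 * parameters * training tokens.
# The utilization figure below is an illustrative assumption.

params = 175e9          # GPT-3-scale parameter count
tokens = 300e9          # GPT-3-scale training tokens
train_flops = 6 * params * tokens            # ~3.15e23 FLOPs

a100_peak_flops = 312e12                     # A100 BF16 peak, FLOPs per second
utilization = 0.4                            # assumed real-world efficiency
effective_flops = a100_peak_flops * utilization

gpu_seconds = train_flops / effective_flops
gpu_years = gpu_seconds / (3600 * 24 * 365)
print(f"~{gpu_years:,.0f} A100-years of compute")     # on the order of 80 GPU-years

# Spread over a roughly one-month run, that implies a cluster of about:
cluster_size = gpu_years * 12
print(f"~{cluster_size:,.0f} A100s running flat-out for a month")
```

Runs of that size are hard to do quietly, which is what makes the training phase the natural choke point.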


The GPU time used for AI training is dwarfed by crypto mining.


Not sure I see the relevance of your comment to anything in mine...


The feasibility of global regulation of massive GPU farms.


It seems like making sure inference is slow and expensive could be a useful stopgap for promoting human oversight of AI? Especially since it actually is slow and expensive. (You can run Stable Diffusion on your MacBook, but there’s an opportunity cost if you want to use your laptop for other things.)

If you're worried about GPT-4, it might help to know that it runs at human speeds. Sometimes much slower, since we can copy and paste. Currently it's about the speed of a 300 baud modem, reminding me of connecting to a BBS back in the '80s.
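As a rough sanity check on the modem comparison (the tokens-per-second number below is an assumed ballpark for a GPT-4-class chat model at the time, not a measurement):

```python
# Rough comparison of chat-model output speed to a 300 baud modem.
# The tokens/sec figure is an assumed ballpark, not a benchmark.

baud = 300
bits_per_char = 10                      # 8 data bits plus start/stop bits, classic serial framing
modem_chars_per_sec = baud / bits_per_char                # = 30 characters/second

tokens_per_sec = 10                     # assumed streaming rate (illustrative)
chars_per_token = 4                     # common rule of thumb for English text
model_chars_per_sec = tokens_per_sec * chars_per_token    # = 40 characters/second

print(f"300 baud modem: ~{modem_chars_per_sec:.0f} chars/sec")
print(f"Chat model:     ~{model_chars_per_sec:.0f} chars/sec")
# Same order of magnitude, i.e., roughly human reading speed, not superhuman.
```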

A meaningful way to “pause” might be to make sure it doesn’t get to superhuman speeds. This should be a lot easier to measure and regulate than intelligence.

Discussed briefly here: https://skybrian.substack.com/p/ai-chats-are-turn-based-games


> You could surely keep most people from accessing model files by getting a few large tech platforms to agree to treat those files as digital contraband — basically, the same treatment they give porn, spam, and malware.

Big tech companies already do this with digital piracy and haven't come close to eliminating it. Porn in general is a bad example, because while it's excluded by social media &c, you're still allowed to put it on your own website or send it by private communications. To take a more extreme case, child porn being universally treated as "digital contraband" by corporations hasn't eliminated it from the internet either (judging by the frequency of prosecutions for possessing it). What makes you think tech companies would have more success trying to ban AI models the same way?
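For context, "treating files as digital contraband" in practice usually starts with matching uploads against a blocklist of known content hashes (perceptual hashing is the fancier variant for images). Here's a minimal sketch of the exact-hash version, and of why it is weak against weight files, since any re-serialization or re-quantization of the same model yields a different hash; the bytes and setup below are invented purely for illustration:

```python
# Minimal sketch of hash-blocklist contraband filtering, the simplest form of
# what platforms actually do. The "checkpoint" bytes here are made up.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pretend these bytes are a model checkpoint a platform wants to ban.
original = b"pretend these bytes are a model checkpoint"
BLOCKED_SHA256 = {sha256_of(original)}          # the platform's blocklist entry

def is_blocked(file_bytes: bytes) -> bool:
    return sha256_of(file_bytes) in BLOCKED_SHA256

tweaked = original + b"\x00"    # re-serialize, re-quantize, or append one byte...

print(is_blocked(original))     # True: an exact copy is caught
print(is_blocked(tweaked))      # False: any trivial change defeats exact matching
```

Perceptual-hash schemes are more robust to small edits of images, but it's not obvious what the analogue would even be for multi-gigabyte checkpoint files.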


The shortest answer to why AI development won't be shut down in the West is that it would automatically give China a leg up in the game.

Whatever advantage the US is trying to gain by restricting high-end chips from getting to China would be rendered moot. China could just use whatever custom silicon it can fab, and it would beat a lobotomized West in AI.


Socrates from https://www.armstrongeconomics.com/about

Armstrong Economics offers a unique perspective intended to educate the general public and organizations on the underlying trends within the global economic and political environment. Our mission is to research historical cyclical patterns and market behavior in timing, price, and crisis to better understand and identify potential future trends, using an extensive monetary database and advanced proprietary models.

I don't pay for their analysis or invest based on it. The freebies are enough for me!

Mar 8, 2023 · edited Mar 8, 2023

Actually, Stable Diffusion 1.0 cost only about $600k to train, and Emad paid the training costs himself:

https://twitter.com/jackclarkSF/status/1563957173062758401
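For a sense of where a number like that comes from, the usual back-of-the-envelope is just GPU-hours times the price per GPU-hour; the two inputs below are assumptions chosen to land in that ballpark, not actual accounting:

```python
# Back-of-the-envelope training cost: GPU-hours * price per GPU-hour.
# Both inputs are rough assumptions, not Stability AI's actual invoice.

a100_hours = 150_000          # assumed total A100-hours for the training run
cost_per_gpu_hour = 4.00      # assumed cloud price per A100-hour, in dollars

total_cost = a100_hours * cost_per_gpu_hour
print(f"~${total_cost:,.0f}")   # ~$600,000
```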


Talk of a pause or regulation strikes me as an almost self-evidently destructive waste of time, for substantially the reasons here: https://open.substack.com/pub/joecanimal/p/tldr-existential-ai-risk-research?utm_source=share&utm_medium=android


I looked up your article, and you don't seem to be responding to any of the arguments for why AI risk is important. The conversation about AI risk has been extensive, and ignoring it means you're not actually contributing to it.

There's an introduction to why AI risk is real, existential, and hard to avoid here: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities

If you do a follow-up post that takes some of those points into account, I'd be more interested, as it might contribute something new to the conversation.


I'm not sure what point you think I should take into account. Is it the fantasy AI that doesn't exist, or the fantasy means to deal with it, or what?


The risks described are indeed about AI systems that don't yet exist, since we are still in the development phase. The idea of AI safety is to look at what behaviors the AI systems we're developing will have and to prepare countermeasures.

To not do so is like designing a plane, having someone point out an accident risk ("a large bird caught in the engines could cause them to fail"), and responding "planes are fantasy, and the safety precautions you're recommending are also fantasy."

If you don't believe we're currently building powerful AI (or a plane), there's a specific dive into that here: https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to


No, in fact the right analogy is to something like nuclear safety: overregulation due to real, but exaggerated, dangers kept us from the flying car and cheap energy. It is very suspicious that big players want to strangle this tech in the cradle and arrange things so that only approved big incumbents and bad actors can use it.

Anyway, suffice it to say that I'm familiar enough with Yud not to need you to link this sequence or that (what's next, his Harry Potter fanfic? Stuff on the dangers of AI math pets?)

My piece presumed familiarity with Yud's debate with Hanson about foom and the diffusion of information. Hanson won then, and the empirical data since has only strengthened the prior that there's no foom in the offing. And even if there were, it cannot, as a practical matter (i.e., in terms of political economy), be stopped.

You've addressed none of that. I don't think you can.

Ty for reading


To switch a computer off, you need to a) know it is doing something you really don't want it to do, and b) know where the computer and its switch physically are. The former requires alignment research; the latter requires strict regulation of AI use. If we give unrestricted access to unpredictable AGI tech to everyone, someone WILL run it in a way that does unspeakable harm, up to and including existential-level harm.


Wut? Knowing how to turn a computer off requires alignment research? I just don't see it.


The one incorrect idea here is that US government agencies would be advancing state-of-the-art models. This seems plausible in the medium and long run, but completely crazy in the short run. Given all the difficulties US federal agencies have faced with procurement so far, the idea that they'd be doing anything as cutting-edge as GPT-4 is just nonsense for the next few years.


I don't buy that it's so difficult to stop AI. Erik Hoel, Katja Grace, and David Chapman all disagree. In particular, the training phase is very vulnerable to regulation (no, randos on the internet will not replicate OpenAI), and it takes top talent to move the field forward; it's not something the rest of the world can do.

Nobody is attempting human genetic engineering even though that is within reach, and when someone in China tried it, they got jailed. The same can happen to AI.


"randos on the internet will not replicate OpenAI"

They're doing it, and they've made huge advances this past week.


To the extent that people have done this, it's mostly by doing the same sort of machine-learning that OpenAI &al. did first. If creating anything like AGI requires both substantially new techniques (as even most of the AI doomers seem to think) & substantial computing resources to train the models, then "randos on the internet" would need to put the same amount of effort & resources, not into a method already known to work, but into a new & unproven idea (& probably a lot of them until they find something that works). This seems rather less likely.


Who do you mean? They have something that can compete with ChatGPT? How did they afford the training?


I dumped a massive effortpost on the topic this afternoon.

Facebook paid for most of it, but they also had to spend $500 on producing training material and $100 on fine-tuning. So, the cost of two dozen eggs.


Complete lunatic. I thought you actually knew what ML is capable of and how much distribution shift takes place; you are insane to think these models are a problem, when people are already extremely easy to fool with crude methods. But sure, keep making talking points about regulating large functions that can generate text and images. It's likely that you are manufacturing and exploiting fear so people pay attention to you and your Substack. Just code, actually work with this stuff, and download a model. I can't believe I thought you were fairly level-headed.

Didn't you see, over the past three years, that a crappy virus leaked from a lab or a bat caused insane amounts of havoc? You think junk text and content will be a problem? No, it won't. All information channels and social media sites log, track, and trace everything; anyone trying to run a campaign can be caught easily. The main dangers that actually matter are viruses that can spring out of a few animals or a potential lab and cause widespread chaos like we just had; we got lucky it didn't wipe out more people and didn't evolve into something more dangerous.

And here you are fear-mongering about AI. We've seen this a million times. Yes, there will be more junk content, but it's detectable and we'll be fine. Please start putting the flashlight on real dangers instead of farming for attention about AI dangers that haven't panned out and are unlikely to, since we (MURICA' 🦅🇺🇸) have agencies and corporations with all the data needed to mitigate any boogeyman that wants to incite some magatards about whatever conspiracy.

The real dangers are cyber. The information and narrative dangers are way overstated; you can still handle a bunch of misinformed sheep, but invisible viruses are a much tougher enemy. Have faith in our defense and intelligence agencies to handle and use whatever strategies are needed to dismantle harmful narratives. You are simply wasting people's attention on a non-issue. You can't stop linear algebra. Relax; people will learn quickly that content is mostly fake, and we'll be fine. Have some faith. Take a break from the speed, go to a bar, and meet some normies.
