30 Comments

Yudkowsky's panic is very similar to the panic that made us abandon nuclear power and overreact to Covid.


Overreact to Covid? While I agree on nuclear power, well over a million deaths in the US alone would point to a clear failure to react. Dontcha think?


I'm not old or fat, and I have no cardiovascular issues. So once it became clear, circa late March 2020, that this was like a common cold for me, everything seemed like an insane overreaction.

I live in Hungary though.

If Yudkowsky were at peace with his own mortality (I assume he's an insufferable online atheist), maybe he wouldn't be so hysterical.

(I'm a radical believer in clarity)


Being "at peace with [your] own mortality" is a bit of a tell on where you might draw the line for what is an appropriate intervention in the name of safety. Most people consider staying alive to be of utmost importance.

If death is an acceptable outcome to you, please understand that most of society does not agree.

Real trouble can arise when one projects one's own comfort with death onto an outside population.


That is true, but certain professions benefit from this mindset, since panicking over one's mortality is immediately counterproductive.

Take a pilot or a firefighter: neither benefits from panicking, let alone the people whose lives depend on them.

Yudkowsky wants to do away with electricity because your house might burn down in an electrical fire, or you might die of an electric shock: risks worth taking compared to existing in a world without electricity.

Few things scare me more than being in a panicking crowd. Do I feel fear, disgust, or disappointment? All of them. I'm not concerned about the fears or opinions of a hysterical mob; I just want to be as far away from them as possible.


To take this analogy one step further: you're the pilot in a hairy situation. It's becoming apparent to the passengers that this flight is unusual. Some begin to panic.

One of the loudest voices (screaming "WE'RE ALL GONNA DIE") is Mensa member and licensed pilot Yudkowsky.

You're the actual captain on this flight, though; he's just a passenger. So, will you concentrate on your job and trust your judgement, or will you invite Yudkowsky into the cockpit to give his input on how to fly the plane?

Of course not. He might be smart, he might be a pilot, but he lost his cool. When - IF! - we land, I want his license to be revoked. He's unfit to fly.

(Note: in this analogy I rely on survivor's bias: that is, if you crash the plane, Yudkowsky was - somewhat - right, and we all died. We can only mock him in a future where AGI hasn't killed us all, as he predicts it will. But our present is already shaped by survivor's bias: we're on the unlikely branch of possible outcomes where 1983 didn't escalate into a global thermonuclear war, multiplied (each multiplier less than one) by every other possible extinction event that never came to pass. So why bother worrying about anything, really? Just enjoy your flight! The future observer is always right; they must be, because the naysayers who were right in their own, extinct branches aren't around any more to object.)


Pilots have thousands of hours of flight time, and commercial flights are a routine activity where the many possible failure modes are well known and, in fact, have already happened and been studied extensively.

It's not clear to me that anyone working on AGI is in a situation analogous to the pilot in this story. Expertise is developed via many repetitions with unambiguous feedback, and literally nobody has ever created AGI before, so nobody could have obtained such feedback.


This post really helped me make sense of a recent interaction I had on another Substack. Some people who had seemingly reasonable object-level concerns about near-term AI risks were bizarrely insistent on treating other people with seemingly closely related longer-term concerns not only as competitors or even enemies, but as beneath contempt and dismissible by name-calling and handwaving. And when I pointed out how baffling and politicized this was, they responded to say that no, it's not baffling and politicized, and also the other team started it.

My own view is that Chernobylism is obviously correct and we are basically certain to experience at least *some* level of "AI Chernobyl" type catastrophes, both literal discrete AI calamities and also deleterious longer-term social trends. But I don't see how this is incompatible or even competitive with classical "doomerist" worries about Skynet killing us all. That stuff is just not obviously dismissible in the way that my friends from the other Substack wanted to pretend, and since the losses in a completed "Skynet" scenario would be *literally a million Chernobyls at a conservative estimate, and arguably a lot more*, the possibility only needs to be "not obviously dismissible" for it to automatically become an extremely serious concern.


Indeed, by many ways of measuring human suffering, Yudkowsky's diamondoid nanomachines killing everyone painlessly and at once is the best possible AI outcome.


To insert tongue slightly in cheek:

The "AI ethics" tribe fears a superGPT will be used by racist humans.

The Xrisk tribe fears a superGPT will be racist against humans.

Notwithstanding that, come The Revolution, the "AI ethics" folks would send me to a re-education camp too, I do think they have some valid points. And just imagine the inevitable reaction if any of them called for airstrikes! Of course, "eugenics" and the like get used as buzzwords. But let's be fair, there's a long, long history of using supposed Science as a cover for raw politics - there's a reason behind those buzzwords. Any similarity between the two camps above in terms of AI-is-bad shouldn't obscure the huge real-world difference.


“if you say the models “hallucinate” then that is ableist language“

GPT-4: it’s even less retarded!


Your framing of e/acc as a maverick minority does not square with national governments investing in AI capabilities research, the massive effort by most academic non-AI researchers to play up the AI angle of their work, and the current investment decisions of large investors. The Butlerians similarly frame themselves as a courageous minority, this time opposing The Machine. Or are you suggesting a 50/50 split into two "plucky rebel" camps fighting each other?

Comment deleted

The kind of people who are likely to believe in e/acc as a principle are, I contend, the kind of people who prefer doing over talking. Most engineers I know are not especially interested in what the twitterati are saying. Certainly among the folks I know in ML most are e/acc and don't spend time on social media or shouting into the microphones. They are too busy coding.


People will do battle for their emerging digital gods. But this is why I'm here; good stuff:

"The tribal signifiers on AI safety Twitter are all over the place. Just today, in fact, the rationalist, evopsych, gender critical scholar Geoffrey Miller was backing woke AI ethicist Gary Marcus in a thread on the AI letter, and in opposition to both was an Antifa, tankie, he/him account who’s worried about the anti-democratic implications of a technocrat-run AI control regime… and off to the side is me, an anti-woke, anti-tankie, pronoun disrespecter cheering he/him on because this aggression cannot stand, man."


I just want the digital assistant-equivalent from Frankowski's "A Boy and His Tank." Is that too much to ask for?


This is a superb framing of the current tribes. Unfortunately, I have to agree that we accelerationists are increasingly outnumbered.


This EY quote is about as out of context as anything can be. The way it's presented implies that he's advocating violence as a means of activism, which is contrary to everything he's ever written or said on the matter. If you just read the piece, you'll find he's merely advocating a transnational agreement that is enforceable by force *once agreed*. There is a world of difference between the two.

Comment deleted

Living in civilised society means accepting that laws will be enforced upon you with force if need be. To call for individuals to unilaterally use force for their own ends is plain terrorism, but to call on society to collectively decide that some things are serious enough to warrant military force as a deterrent is qualitatively different.

To take a spin on your example, if you're killed by a vegan activist for eating meat everyone is entitled to view your death as unjust and illegitimate. But if you're killed by the state after it's been decided that the penalty for eating meat is capital punishment, then individuals may still view it as unjust, but not as illegitimate.

Generally, if a convention is already in place with the force of law, you're responsible for the consequences if you break the law.


So, purely hypothetically, of course, if a majority-elected government passes a law requiring the extermination of an entire group of people and proceeds to do it, that's not violence?


The extermination is violence. But violence by individuals is not the same, and isn't held to the same standards as violence by the State. And what's being criticised in the post is at least two degrees removed from that: it's someone arguing that a collective of States should use violence under very special conditions for a very special purpose.

Also, I find the analogy further muddled by the specific example, because outlawing _doing_ something is very different from outlawing _being_ something. The distinction between the two is not always obvious, but in the case of AI development it is as clear cut as can be.


The important question here is not so much who the tribes are, but:

1) Who the tribes are among the people actually doing the research.

2) Whether any of the tribes are smart and motivated enough to actually learn how to train neural nets.


David Chapman is worried about AI social manipulation via social networks, or, worse, societal collapse.

> It would not be clearly wrong to say that in 2016, AI chose the president. Whether you love or loathe Trump, is that the way you want future government leaders selected?

This would seemingly put him in both the “language police” and “Chernobylist” camps?


David Chapman also wrote https://meaningness.com/virtue-court

Maybe he simultaneously thinks

- AI helped choose the president, AND

- who the president is doesn't matter very much?


Why not imagine a superintelligence that loves us? Because it's not guaranteed. Sure, a rogue superintelligence isn't guaranteed either, but that's the uncertainty of the whole situation. We are allowed to anticipate the possible outcomes of our decisions; we don't only learn by having things blow up in our faces, and in this case the blow-up could be very bad indeed...

Is Chapman a Butlerian? He did call his book Better Without AI, and in it he argued that we don't really need AI.

For what it's worth, I think more of the risk comes from some lunatic using AI to destroy the world than from AI destroying the world of its own volition. It only takes one weirdo who read too much bad philosophy and concluded that life is suffering and shouldn't exist (there are such people). An AI could help him figure out how to end life.


The obsession with model size has two interesting aspects.

The first is a parallel obsession (by many of the same people) with a few specific numbers in modern CPUs (cache size, branch predictor size, ROB size, and so on). After interacting with people for many years, it's become clear to me that for most people these numbers have mere totemic significance. You can try all you like to explain that the advances of company A lie, most importantly, in the particular *algorithms* by which they use this raw material of caches or branch predictors; it will make no difference. They don't especially know what the numbers mean in a technical sense and, even more importantly, they don't *care*; the numbers exist purely to fulfill a shibboleth role, to indicate that their tribe is doing better or worse than the opposing tribe.

So Altman may be strategic on this front. His primary concern may be less about informing competitors of anything than about stoking the fires of tribal fury. As soon as a signal number ("model parameter count") becomes a tribal shibboleth, for most participants in this culture war it becomes unmoored from reality, and all that matters is whether it's growing ("bad!" for many of them) or not.

The second is: how can models become better without growing?

There are at least two obvious directions. The immediate one (which we're already going down) is offloading system 2 thinking to machines (e.g. Google or Wolfram) that can do it a lot better than an LLM. If someone asks you a fact, or for arithmetic, don't try to synthesize it from your neural net; know enough to recognize the type of question and look up the answer in the right place. That's already 50% of the difference between a person educated enough to know when to use "the library" and a person convinced their vague recollection of something is probably close enough to the correct answer.
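To make that concrete, here is a minimal sketch of such routing, assuming trivial stand-in functions for the calculator, the lookup service, and the model; none of these correspond to any real product's API.

```python
# A toy sketch of "offloading system 2 thinking": recognize the kind of
# question and route it to the right tool instead of asking the model to
# improvise. The tool implementations are trivial stand-ins, not real APIs.

import operator
import re

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def calculator(text: str) -> str:
    # Stand-in for a real math engine: handles a single "A op B" expression.
    m = re.search(r"(-?\d+)\s*([-+*/])\s*(-?\d+)", text)
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return str(OPS[op](a, b))

def lookup(question: str) -> str:
    # Stand-in for a search engine or database query.
    return f"[looked up: {question!r}]"

def llm(question: str) -> str:
    # Stand-in for the language model's own synthesized answer.
    return f"[model's best guess at: {question!r}]"

def route(question: str) -> str:
    # The routing decision is the whole point: know *when* to use "the library".
    if re.search(r"\d+\s*[-+*/]\s*\d+", question):
        return calculator(question)
    if question.lower().startswith(("who ", "when ", "where ")):
        return lookup(question)
    return llm(question)

print(route("What is 17 * 23?"))          # 391, via the calculator
print(route("Who wrote Frankenstein?"))   # routed to lookup
print(route("Summarize this argument."))  # left to the model
```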

The second obvious direction is that flat attention (i.e. looking back at more and more words) doesn't scale. Take a hint from how TAGE does it (for branch prediction, or pattern prediction more generally) or how humans do it, with a geometrically increasing "window" on past text. What humans seem to do is that not just individual words (and word fragments) are "embedded" (i.e. placed in a space relative to other words) but so are sentences, paragraphs, sections and so on. Apart from recursion, our other language superpower is chunking, but LLMs (at least as far as I can tell) do not yet do any "designed" chunking, only whatever small-scale chunking they might get by accident as a side effect of training. So even if GPT-4's only real innovation is a "second round" of embedding at the sentence rather than the word-fragment level, that should already be enough to improve it substantially; and of course the obvious next thing once that works is to recurse it to paragraphs and larger semantic structures.
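For illustration only, here is a rough sketch of that kind of two-level chunking, with random vectors standing in for learned token embeddings and mean pooling standing in for whatever a real model would learn; the function names and splitting scheme are invented for this example and are not a description of how GPT-4 works.

```python
# Illustrative only: embed tokens, pool them into sentence vectors, then pool
# those into paragraph vectors, so coarser summaries sit above the raw tokens.
# Random embeddings and mean pooling are placeholders, not a real model.

import numpy as np

rng = np.random.default_rng(0)
DIM = 64

def embed_tokens(tokens):
    # Stand-in for a learned token-embedding table: one random vector per token.
    return np.stack([rng.standard_normal(DIM) for _ in tokens])

def pool(vectors):
    # One crude way to "chunk" a span: average its finer-grained vectors.
    return np.mean(vectors, axis=0)

def hierarchical_context(paragraphs):
    """Return sentence-level and paragraph-level vectors for a list of paragraphs."""
    sentence_vecs, paragraph_vecs = [], []
    for para in paragraphs:
        sentences = [s for s in para.split(".") if s.strip()]
        svecs = [pool(embed_tokens(s.split())) for s in sentences]
        sentence_vecs.extend(svecs)
        paragraph_vecs.append(pool(svecs))  # recurse the same trick one level up
    return sentence_vecs, paragraph_vecs

sent_vecs, para_vecs = hierarchical_context([
    "The cat sat. The dog barked.",
    "Later that day it rained. Everyone went home.",
])
print(len(sent_vecs), len(para_vecs))  # 4 sentence vectors, 2 paragraph vectors
```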

(This is apart from the sort of trivial low level parameter tweaking and network layer restructuring that will be on-going for many years.)


Where is the camp that agrees we need to keep progressing AI while having rational discussions about the rapid change it will bring to society, and while taking action (would you categorize them/me as part of your camp)? It seems to me that arguing about a possible end to life, and about whether intelligence exists, doesn't prepare us for the clear implications for work, the environment, and life as we know it of what is already deployed today. Shall we also go back to debating whether climate change exists and what to call it, rather than actually doing something?


Jon, new to the newsletter and the community; thanks for posting this. As a habitual fence-sitter/consensus builder, I'm not sure I have a camp; I'm very much "well, they all have some good points and some ridiculous points."

Curious how you would integrate Scott Aaronson and his post on "Orthodox vs Reform" AI Risk (https://scottaaronson.blog/?p=6821) into your ethnography. I'd guess pretty close to your views.
