Yudkowsky's panic is very similar to the panic that made us abandon nuclear power and overreact to Covid.

This post really helped me make sense of a recent interaction I had on another Substack. Some people who had seemingly reasonable object-level concerns about near-term AI risks were bizarrely insistent on treating other people with seemingly closely related longer-term concerns as not only competitors or even enemies, but as kind of beneath contempt and dismissible by name-calling and handwaving. And then when I pointed out how baffling and politicized this was, they responded to say no, it's not baffling and politicized, and also the other team started it.

My own view is that Chernobylism is obviously correct and we are basically certain to experience at least *some* level of "AI Chernobyl" type catastrophes, both literal discrete AI calamities and also deleterious longer-term social trends. But I don't see how this is incompatible or even competitive with classical "doomerist" worries about Skynet killing us all. That stuff is just not obviously dismissible in the way that my friends from the other Substack wanted to pretend, and since the losses in a completed "Skynet" scenario would be *literally a million Chernobyls at a conservative estimate, and arguably a lot more*, the possibility only needs to be "not obviously dismissible" for it to automatically become an extremely serious concern.

To insert tongue slightly in cheek:

The "AI ethics" tribe fears a superGPT will be used by racist humans.

The Xrisk tribe fears a superGPT will be racist against humans.

Notwithstanding that, come The Revolution, the "AI ethics" folks would send me to a re-education camp too, I do think they have some valid points. And just imagine the inevitable reaction if any of them called for airstrikes! Of course, "eugenics" etc. get used as buzzwords. But let's be fair: there's a long, long history of using supposed Science as a cover for raw politics; there's a reason behind those buzzwords. Any similarity between the two above in terms of AI-is-bad shouldn't obscure the huge real-world difference.

“if you say the models “hallucinate” then that is ableist language“

GPT-4: it’s even less retarded!

Your framing of e/acc as a maverick minority does not square with national governments investing in AI capabilities research, the massive effort by most academic non-AI researchers to boost the AI angle of their work, and the current investment decisions of large investors. The Butlerians similarly frame their stance as a courageous minority, this time opposing The Machine. Or are you suggesting a 50/50 split into two "plucky rebel" camps fighting each other?

People will do battle for their emerging digital gods. But this is why I'm here; good stuff.

"The tribal signifiers on AI safety Twitter are all over the place. Just today, in fact, the rationalist, evopsych, gender critical scholar Geoffrey Miller was backing woke AI ethicist Gary Marcus in a thread on the AI letter, and in opposition to both was an Antifa, tankie, he/him account who’s worried about the anti-democratic implications of a technocrat-run AI control regime… and off to the side is me, an anti-woke, anti-tankie, pronoun disrespecter cheering he/him on because this aggression cannot stand, man."

I just want the digital assistant-equivalent from Frankowski's "A Boy and His Tank." Is that too much to ask for?

This is a superb framing of the current tribes. Unfortunately, I have to agree that we accelerationists are increasingly outnumbered.

This EY quote is about as out of context as anything can be. The way it's presented implies that he's advocating violence as a means of activism, which is contrary to everything he's ever written or said on the matter. If you just read the piece, you'll find he's merely advocating a trans-national agreement that is enforceable by force *once agreed*. There is a world of difference between the two.

The important question here is not so much who the tribes are, but:

1) Who the tribes are among the people actually doing the research.

2) Whether any of the tribes are smart and motivated enough to actually learn how to train neural nets.

David Chapman is worried about AI social manipulation via social networks, or worse, societal collapse.

> It would not be clearly wrong to say that in 2016, AI chose the president. Whether you love or loathe Trump, is that the way you want future government leaders selected?

This would seemingly put him in both the “language police” and “Chernobylist” camps?

Why not imagine a superintelligence that loves us? Why, because it's not guaranteed. Sure, rogue superintelligence isn't either, but that's the uncertainty of the whole situation. We are allowed to anticipate possible outcomes of our decisions, we don't only learn by having things blow up in our faces, and since in this case, the blow up could be very bad indeed...

Is Chapman a Butlerian? He did call his book Better Without AI, and he argued in it we don't really need AI.

For what it's worth, I think most of the risk comes from some lunatic using AI to destroy the world rather than from AI destroying the world of its own volition. It only takes one weirdo who read too much bad philosophy and concluded that life is suffering and shouldn't exist (there are such people). An AI could help him figure out how to end life.

The obsession with model size has two interesting aspects.

The first is a parallel obsession (by many of the same people) with a few specific numbers in modern CPUs (cache size, branch predictor size, ROB size, and so on). After having interacted with people for many years, it's become clear to me that for most people these numbers have mere totemic significance. You can try all you like to explain that the advances of company A lie, most importantly, in the particular *algorithms* for how they use this raw material of caches or branch predictors; it will make no difference. They don't especially know what the numbers mean in a technical sense and, even more importantly, they don't *care*; the numbers exist purely to fulfill a shibboleth role, to indicate whether their tribe is doing better or worse than the opposing tribe.

So Altman may be strategic on this front. His primary concern may be less about informing competitors of anything than about stoking the fires of tribal fury. As soon as a signal number ("model parameter count") becomes a tribal shibboleth, for most participants in this culture war it becomes unmoored from reality, and all that matters is whether it's growing ("bad!" for many of them) or not.

The second is the question of how models can become better without growing.

There are at least two obvious directions. The immediate one (which we're already going down) is offloading system 2 thinking to machines (eg Google or Wolfram) that can do it a lot better than an LLM. If someone asks you a fact, or arithmetic, don't try to synthesize it from your neural net; know enough to recognize the type of question and look up the answer in the right place. That's already 50% of the difference between a person educated enough to know when to use "the library" and a person convinced their vague recollection of something is probably close enough to the correct answer.
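
To make the "look it up instead of synthesizing it" point concrete, here is a minimal sketch in Python. The routing rule is just a regex heuristic, and `search_engine` / `llm_generate` are made-up stand-ins for a real retrieval backend and a real model call, not any actual API:

```python
# Toy "system 2 offloading": route questions the net is likely to get wrong
# (arithmetic, factual lookups) to an external tool instead of sampling an
# answer from the model. The classifier and tool names are illustrative only.
import re

def classify(question: str) -> str:
    """Crudely decide whether a question needs a tool or free-form generation."""
    if re.search(r"\d+\s*[-+*/]\s*\d+", question):
        return "calculator"
    if re.match(r"(?i)\s*(who|when|where|what year)\b", question):
        return "lookup"
    return "llm"

def answer(question: str) -> str:
    kind = classify(question)
    if kind == "calculator":
        # Delegate exact arithmetic to a deterministic evaluator.
        expr = re.sub(r"[^\d+\-*/(). ]", "", question)
        return str(eval(expr))          # fine for a sketch; never eval untrusted input
    if kind == "lookup":
        return search_engine(question)  # hypothetical retrieval backend
    return llm_generate(question)       # hypothetical LLM call

# Stubs so the sketch runs standalone.
def search_engine(q): return f"[would query an external index for: {q}]"
def llm_generate(q): return f"[would synthesize an answer for: {q}]"

if __name__ == "__main__":
    print(answer("What is 37 * 41"))   # 1517, via the calculator path
    print(answer("When was the transistor invented?"))
```

The point is only the routing step: the fact or the arithmetic never has to be "remembered" by the net at all.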

The second obvious direction is that flat attention (ie looking at more and more words backwards) doesn't scale. Take a hint from how TAGE does it (for branch prediction, or pattern prediction more generally) or how humans do it, with a geometrically increasing "window" on past text. What humans seem to do is that not just individual words (and word fragments) are "embedded" (ie placed in a space relative to other words) but so are sentences, paragraphs, sections and so on. Apart from recursion, our other language superpower is chunking, but LLMs (at least as far as I can tell) do not yet do any "designed" chunking, only whatever small-scale chunking they might get by accident as a side effect of LLM training. So even if GPT-4's only real innovation is a "second round" of embedding at the sentence rather than the word-fragment level, that should already be enough to substantially improve it; and of course the obvious next thing once that works is to recurse it to paragraphs and larger semantic structures.
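
A toy illustration of that "second round" of embedding, with the assumption that simple mean-pooling stands in for whatever learned aggregation a real model would use, and with `embed_token` as a random-vector placeholder rather than a trained embedding:

```python
# Hierarchical chunking sketch: embed tokens, pool them into sentence vectors,
# then pool sentences into a paragraph vector, giving a geometrically coarser
# summary of older text instead of flat token-level attention.
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
_vocab = {}

def embed_token(tok):
    """Toy token embedding: a fixed random vector per token (placeholder only)."""
    if tok not in _vocab:
        _vocab[tok] = rng.normal(size=DIM)
    return _vocab[tok]

def embed_sentence(sentence):
    # First level of chunking: one vector per sentence.
    return np.mean([embed_token(t) for t in sentence.split()], axis=0)

def embed_paragraph(paragraph):
    # Second level: one vector per paragraph, built from sentence vectors.
    sentences = [s for s in paragraph.split(".") if s.strip()]
    return np.mean([embed_sentence(s) for s in sentences], axis=0)

if __name__ == "__main__":
    para = "The cat sat on the mat. The dog barked at the cat."
    print(embed_paragraph(para).shape)   # (8,): one coarse vector for the whole chunk
```

Recursing the same pooling step once more would give section-level vectors, which is the geometrically increasing window idea in miniature.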

(This is apart from the sort of trivial low level parameter tweaking and network layer restructuring that will be on-going for many years.)

Where is the camp that agrees we need to continue progressing AI while having rational discussions about the rapid change that will occur in society and taking action (would you categorize them/me as part of your camp)? It seems to me that arguing about a possible end to life and whether intelligence exists doesn't prepare us for the clear implications of what is already deployed today for work, the environment, and life as we know it. Shall we also go back to debating whether climate change exists and what to call it rather than actually doing something?

Jon, new to the newsletter and the community, thanks for posting this. As a habitual fence-sitter/consensus builder, I'm not sure I have a camp; I'm very much "well, they all have some good points and some ridiculous points."

Curious how you would integrate Scott Aaronson and his post on "Orthodox vs Reform" AI Risk (https://scottaaronson.blog/?p=6821) into your ethnography. I'd guess pretty close to your views.
