5 Comments
Max More

Interesting perspective. Also one I am highly sympathetic to. I'm currently writing a series of critical essays on the "singularity" concept that you might enjoy. Although I'm known as an OG transhumanist, I have strong doubts about the ability of LLMs to achieve superintelligence. I'm also very doubtful about the idea of "intelligence" as a single, measurable thing.

I part ways with you on Christian theology. Not because I don't understand -- I have a long-standing interest in philosophy of religion and used to teach it. Have you written about your reasons for taking it seriously? I would also be curious if you have thoughts on Frank Tipler's Physics of Immortality and Physics of Christianity.

Chase Stubblefield

You got me on a brief rabbit trail here on Tipler. A short dialogue with GPT-5 has me thinking that neither respectable materialists nor Christians see much good in his work, though. He's basically trying to explain metaphysics with physics without being physics-only, and so gets rejected by the physics-only folks. 😎

Dave Reed

Quit making sense, Jon! 🤪 Calmness don't feed the bulldog. 😏

Greg G

When you say "my side," what do you mean?

I agree that everything about this space is radically uncertain. We don't know how or when things will develop. We don't even have very useful definitions for AGI or ASI, in my opinion. If I'm reading you right, you take that to mean that we don't need to think about catastrophic scenarios. That part I don't get at all. If we're not sure whether a supervolcano under Yellowstone will blow up the country, or whether there will be a nuclear war, we should definitely spend some effort figuring those risks out and trying to prevent them! I think the same is true of AI risk.

Adam

Please read If Anyone Builds It, Everyone Dies then write a V2 of this article 🙏

They make some incredibly calm and rational arguments.

For instance, we don't need intelligence to be exponential. We just need machines to be significantly better. This is an easy call. Look at chess: no human can beat the best chess engine in the world.
