I don't know if it's possible or what the timelines are, and neither does anyone else. So I don't tend to get worked up about it, unless someone is trying to control me with p(doom) scare stories.
Interesting perspective. Also one I am highly sympathetic to. I'm currently writing a series of critical essays on the "singularity" concept that you might enjoy. Although I'm known as an OG transhumanist, I have strong doubts about the ability of LLMs to achieve superintelligence. I'm also very doubtful about the idea of "intelligence" as a single, measurable thing.
I part ways with you on Christian theology. Not because I don't understand -- I have a long-standing interest in philosophy of religion and used to teach it. Have you written about your reasons for taking it seriously? I would also be curious if you have thoughts on Frank Tipler's Physics of Immortality and Physics of Christianity.
You got me on a brief rabbit trail here on Tipler - a dialogue with GPT5 has me thinking neither respectable materialists nor Christians see much good in his work, though. He's basically trying to explain metaphysics with physics without being physics-only, and so he gets rejected by the physics-only folks.
I agree that everything about this space is radically uncertain. We don't know how or when things will develop. We don't even have very useful definitions for AGI or ASI, in my opinion. If I'm reading you right, you take that to mean that we don't need to think about catastrophic scenarios. That part I don't get at all. If we're not sure whether a supervolcano under Yellowstone will blow up the country, or whether there will be a nuclear war, we should definitely spend some effort figuring those risks out and trying to prevent them! I think the same is true of AI risk.
Please read If Anyone Builds It, Everyone Dies, then write a V2 of this article.
They make some incredibly calm and rational arguments.
For instance, we don't need intelligence to be exponential. We just need machines to be significantly better. This is an easy call. Look at chess. No human can beat the best chess engine in the world.
Quit making sense, Jon! Calmness don't feed the bulldog.
When you say my side, what do you mean?