🦑 Regular readers have probably spotted Lovecraftian horror as a long-running theme in my posts. Among the very first images I ever generated with generative AI was a set of Midjourney beta generations of Cthulhu rising up from the ocean:
I’ve since sprinkled little bits of Lovecraftia (is that a word?) in my posts here and there, but I always felt like a bit of a poseur because I’d never actually gotten around to reading the man himself. All my Lovecraft lore came from RPGs and board games. At least, that was true until this past Halloween, when I finally sat down with a big omnibus edition of his complete works.
Actually reading Lovecraft himself had a weird effect on me, which I told some friends about but hadn’t otherwise written about publicly… until yesterday in:
Hard to know what to excerpt from this, because I pull out a number of contact points between Lovecraft and the present AI/algos/AGI discourse. I end up thinking about the issue of safety, not so much physical but spiritual and psychological:
But the alignment-related catastrophe that worries me is not the X-risk scenario where an AGI murders us all. Rather, I’m worried about an AGI that makes us not want to live anymore — an AGI that brings on a spiritual catastrophe by convincing us to simply give up because what is the point?
An astute RETURN reader and fellow substacker actually posted a great comment on the piece, drawing an analogy to retirement. Retirement has weird (and sometimes deadly) effects on people. What happens when humanity essentially retires?
🧷 I also published a piece in City Journal on the topic of “AI safety,” making the point that this entire discussion is greatly hindered by the fact that, as a society, we’re currently contesting the definitions of both “intelligence” and “safety.”
The AI safety debate couldn’t have arrived at a worse time in our history. Both machine-learning researchers and our larger society are bitterly divided over what two of the discussion’s key terms—“intelligence” and “safety”—actually mean.
America’s post–George Floyd era “racial reckoning” has seen a rapid public rethink of what intelligence is and how it should or should not be measured. Colleges and professional schools are ditching standardized tests under pressure from equity advocates, who insist that these tests are slanted toward a narrow, racialized conception of intellectual competence that unfairly discounts what nonwhites have to offer universities and professional guilds.
But it’s not just our broader society that’s divided over the nature and meaning of intelligence. Researchers can’t agree on even a rough working definition of this elusive concept, which they would need in order to properly measure whether or how their increasingly sophisticated machine-learning models are exhibiting more of it. Machine-learning experts offer competing definitions of “intelligence,” along with a variety of benchmarks for assessing it. Market leader OpenAI has its own, more practical definition of “artificial general intelligence”—“highly autonomous systems that outperform humans at most economically valuable work”—but even this is slippery enough to be contested.
I end up coming out against efforts to slow or pause progress in AI, which is no surprise to readers of this newsletter. I just can’t see unilaterally disarming in what is essentially a new arms race while we sort out the definitions of the words we’re using to fight each other over the role this new tech should play in our society. As I sometimes say on the bird site: don’t hate, accelerate.
Substack has launched its Notes product, and I’ve started posting there on occasion. The following note, for instance, is a tangent I cut out of the Lovecraft piece linked above:
I also have a subscriber chat enabled on Substack, but it’s not that active. Frankly, I’m not a fan of Substack’s on-site chat. There’s nothing wrong with it, per se; it’s just not Discord.
I’m currently using Discord for community and will continue to do so. In fact, I’m ramping that up right now, and will soon introduce the long-promised paid-subscriber-only channels.
You can log into my Discord here and go through the onboarding flow I’ve set up. It collects your email address so that in the future I can check it against your subscriber status (once I write the code for this) and give you the correct role.
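The subscriber-status check described above hasn't been written yet, but the basic idea is simple: look up the email collected during onboarding against a subscriber list, then map the subscriber tier to a Discord role. Here is a minimal sketch of that lookup; all names here (`SUBSCRIBERS`, `ROLE_FOR_TIER`, the role labels) are hypothetical illustrations, not anything from the post.

```python
# Hypothetical sketch of an email-to-role lookup for Discord onboarding.
# In practice the subscriber list would come from the newsletter platform's
# export or API rather than a hard-coded dict.

SUBSCRIBERS = {
    "reader@example.com": "paid",
    "lurker@example.com": "free",
}

ROLE_FOR_TIER = {
    "paid": "Paid Subscriber",
    "free": "Free Subscriber",
}

def role_for_email(email: str) -> str:
    """Return the Discord role for an onboarding email.

    Normalizes the address (whitespace, case) before the lookup and
    falls back to a default role for unknown addresses.
    """
    tier = SUBSCRIBERS.get(email.strip().lower())
    return ROLE_FOR_TIER.get(tier, "Guest")
```

The actual role assignment would then be a single bot call per member (e.g. `discord.py`'s `Member.add_roles`), with this function deciding which role to pass.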
My hope for the community is that it’ll be a resource for myself and others to stay current on up-to-the-minute AI news (acting as a filter for the constant firehose of product announcements, papers, and threads) and that it’ll also be a place where builders can gather and learn from one another.
jonstokes.com is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Thanks for the shoutout!
All the people in my life, save for maybe one, found retirement a blessing: be it beekeeping or working "gig" jobs while practically backed by a safety net equal to UBI, the churn stayed optional and their day-to-day perspective on life improved.
Pensioners working their hobbies or gigs are not too dissimilar to uni students enjoying making some extra cash at the local car wash.
Regarding the prospect of AI (not AGI) upsetting the labour market, a far more realistic near-future (even present) outcome and a definite stepping-stone to AGI: as I look at these carefree pensioners, their lived experience removes any dread I would have felt if I still thought being a more productive software engineer would somehow be the key to success and happiness in life.
No. Being a smarter software engineer, maybe. I was completely burnt out as a programmer, a CTO, you name it, both personally and when it came to herding subordinates. I gladly left the field in 2019.
In 2023, ChatGPT made me enjoy software development once again. At first sight, it might seem to make me antisocial, preferring machines to people when it comes to interaction. But those who see it that way have never felt the dread of going to a software dev community (forum) and needing to ask the real people there a question. That experience would turn Mother Teresa into a misanthrope.
Do I welcome our new AI overlords? Sure, as long as they are open source.
Keep it open source:
I'm not affiliated with this project, I'm a nobody contributor who signed up.
Those images are indescribable.