Be Not Afraid: Gods, Monsters, and Generative AI
This past June, Julie Fredrickson and I handed out AI whitepills to a conference full of farmers, academics, dissidents, and not a few Uncle Ted fans. We covered everything from alpha to omega.
What follows is a heavily edited version of a talk on AI given by Julie Fredrickson and me at the Doomer Optimism Campout 2025 in Story, Wyoming. We didn't do notes or slides; on the advice of the conference organizers, we both just got up there and winged it. There was quite a bit of audience interaction, which made it fun for us and a lot more interesting (all the audience questions and comments were great).

The original version of the talk is available via the “article voiceover” feature at the top of this page. I expect most people will listen to that instead of reading through the over 8,000 words below.
However, if you do want to read rather than listen, feel free to use the TOC below to move around and check out the topics that interest you. There is a unity to the talk and ensuing discussion, but we do hop around quite a bit, so you can consume either the text or the audio out of order, and it should still work.
The timestamps given below aren’t exact — exactitude isn’t possible given the degree of editing I did to make the text readable. But they should get you in the general vicinity of the corresponding audio.
🙏 Special thanks to Paul McNeil, Ashley Fitzgerald, and the rest of the conference organizers and staff for the invitation to participate and for showing us all such a great time at the campout. Thanks also to
for recording this and providing us with the audio.

Table of Contents
Hacking the intentional stance [25:31]
Chatbots as therapists? [26:55]
Is AI “intelligent”? [28:46]
Embodiment and Moravec’s Paradox [30:23]
AI and tech gnosis [36:35]
The case for decentralized AI [41:08]
Capital, labor, dictators, pimps [47:21]
AI’s relationship to culture [56:52]
Can AI help us worship God? [01:05:47]
AI is a tool, not a god or the Apocalypse
🪽 Julie [2:04] — I come from a crunchy, back-to-the-land hippie family, and that's actually not unusual for technologists and Silicon Valley people. Surprisingly, the more you use the machine, the more you crave to be back in the world and in your body. So my early years were no screens.
If I wanted a computer, I had to build it myself. So I highly recommend that as an approach if you are interested in children picking up these things. My actual career has only really ever been in tech, because that's where the great sort sends the moderately mathematically inclined. University of Chicago, shout-out for both of us because, you know, Midwest supremacy.
I was a founder for multiple years and sold two companies. Now I have a small investment firm called Chaotic Capital, which is relatively self-explanatory in terms of its name and how it relates to the project. I'm investing in the entire set of infrastructure for how we deal with the fact that this is coming. We don't have a choice in it.
And it's my belief that any tool born of man cannot be godhead. I don't think that's terribly controversial. Maybe Jon and I are class traitors in that we think there should be much skepticism of the tools we bring into the world: you should understand how to use them before you decide whether or not they have a moral valence. And I think that's where we're gonna try to focus.
And hopefully, most of this will be question and answer after we do a little bit of intro on the basics because — news flash — it really is just math, and it's not even particularly complicated math.
I want everyone to come out of this feeling like: “Oh! If this is a tool that's relevant to my life, now I feel more confident using it.”
And though both of our spouses gave us guff for saying “immanentizing the eschaton,” it's really not the apocalypse. It's just another industrial revolution and probably not even as scary as the actual industrial revolution. Although, maybe the new pope has a different opinion.
Early neural nets and the surprise of scaling
👾 Jon [5:20] — I have a background in engineering, and as Julie was saying, the math behind all this stuff is fairly simple. I did an electrical engineering degree at LSU, where I took some classes on neural networks and machine vision. We were doing neural networks back then in 1998, but these neural networks were kind of toys; they weren't that capable.
You could do character recognition with neural nets. They had some industrial application, but I don't think most people imagined that this mathematics and this basic architecture were ever going to yield anything truly interesting as far as human-level intelligence.
So everyone was looking for different routes to artificial general intelligence (AGI). And at some point much later, we realized through experimentation that the fundamental difference between the early toy things that I did in undergrad and what powers your phone now or, you know, ChatGPT, is scale. It's the same basic math done at impossibly large levels of scale.
Media revolutions, then and now
👾 Jon [6:22] — Now, a little bit more by way of bio, but it's relevant: After LSU, I did a Master of Divinity at Harvard Divinity, and right after that, I did a Master of Theology at that same school. Then I moved to the University of Chicago, where I did five years of a PhD that I eventually quit and didn’t finish. And after ten years of grad school, I went on to a career in media and tech. Now I'm the CTO and co-founder of an AI startup.
While I was studying Christian origins, I was mostly a historian, not a theologian. Over the years at Divinity School and then during my PhD, I had a few different areas of focus, and some of these are relevant to AI, as I'll get to in a moment.
From scroll to book
👾 Jon [6:42] — One of the things I studied, which was trendy at the time, was “early Christianity as a media revolution.” At the time Christianity got its start, the scroll was the main portable media technology. But scrolls were these big, cumbersome things; they were expensive to produce.
Then later came the codex, or book, which was lower cost per bit. You could write on both sides of the page—the recto and the verso—so you got more information density. The book was portable, but it was kind of a disposable object.
A book was something that you took notes on or something that you threw away, whereas a scroll was a “real book.” And the early Christians adopted the codex for sacred scripture, which would’ve been really strange at the time. It'd be a bit like adopting the dollar-store comic book or graphic novel as a sacred scripture medium instead of, say, the leather-bound book.
So Christians adopted this really cheap, kind of nasty new media format, and they elevated it. They copied Paul's letters into it, and the gospels, and they were able to circulate this material and pass it around.
They were able to reproduce their books super inexpensively. Slaves and other people made copies. So the codex spread with Christianity, and Christianity spread the codex as a media technology.
Of course, by the medieval period, we get these really elaborate books that have the wide margins and the figures and the ornamentation and stuff like this. But all of this fancy stuff came later.
👉 My point: Early Christians intentionally adopted this media technology and used it in these novel ways. It's sort of an early version of making the machine work for you. And you see that the book was reconceptualized over the course of this centuries-long adoption period.
What I mean by “reconceptualized” is that, for us moderns, the book is often a bit elevated and now even antiquated. So our relationship to this media tech (i.e., the book) has changed from what it was when we first encountered it.
The telegraph and the television
〉Media revolutions didn’t stop with the book, of course. Or even with the printing press. I’m eventually going to suggest that the large language model (LLM) is yet another such media revolution, and our relationship to this technology will evolve. But first, let’s discuss some more recent media revolutions.
👾 Jon [8:47] — There was a New York Times article on ChatGPT-induced psychosis, and it went viral recently. I've given my opinion about that on Twitter, which you can read over there.
But there was a response to this article from fellow Substacker Katherine Dee, who wrote a brief cultural history of the telegraph and the television. She covered how, with the early telegraph, there were people who thought you could get on the end of a telegraph wire and receive signals from the cosmos.

And of course, with television, you got the movie Poltergeist, where they're staring into the TV and the Antichrist comes out—he's somehow in the TV static—so there's this idea of communing with another world through the screen.
From Katherine’s article:
Spiritualist mediums claimed to receive messages from the afterlife through Morse code. These operators saw themselves as human receivers, bridging the material and astral. The technology that sent messages across continents without physical contact made it easy to imagine messages crossing the veil.
Radio seemed to throw every word into what Sconce calls an “etheric ocean,” a limitless and invisible sea where messages bobbed about like bottles adrift. By the late 1920s, the big broadcast companies tried to “net” that ocean with fixed frequencies and scheduling. Sconce writes about how fiction reflected this taming of the radio waves. The wistful romances of amateur “DXers” scanning the dial gave way to sinister tales of mass hypnosis, government mind-control rays, and Martians commandeering the airwaves.
Television, again, added another layer, perhaps most iconically portrayed in the 1982 film Poltergeist.
I think what people don't realize is that this kind of response to new media technologies—to technologies that make the absent and the invisible present in the here-and-now—is actually very old.
Reading as necromancy or sexual domination
👾 Jon [10:08] — In antiquity, some of the earliest writing that we have is from funerary monuments. The idea is that you would go into a graveyard, stand in front of this tomb, and read this inscription aloud.
🗣️ Important: One of the things that you should always remember when you're reading about or thinking about the past—even as recently as two hundred years ago—is that people read everything aloud. So text was a form of frozen speech.
Or, we might say that a written text was like a musical score. If you're really good at music, you can read a score silently in your head and imagine the notes.
But most of us aren't that good at it. When I read music, I have to play the tune on an instrument in order to understand how it sounds. So as I sight-read a piece of music, I'm hearing it for the first time when my audience is hearing it. I'm not hearing it in my head, I'm hearing it audibly. This is how reading went for most of human history.
When you would read a letter—when you would read anything, really—you would read it out loud. So reading had a social aspect to it. People would read books in the home, and as they read, other people would hear them in another room and they'd come in and listen.
So when you were in a graveyard a few thousand years ago and you read this funerary inscription, you’re reading the inscription aloud. These funerary inscriptions are all in the first person.

There's this really famous one from a young woman named Phrasikleia. I’m going to read her funeral monument inscription.
Tomb of Phrasikleia. Kore (maiden) I must be called evermore; instead of marriage, by the Gods this name became my fate.
This inscription is in the first person because when you stand there and you read this inscription, you're giving voice to this dead virgin.
There is a necromancy aspect to reading that we moderns totally don't have. Nobody who reads anything reads it out loud and thinks, “I’m doing some necromancy right now!” Right? That's not a thing nowadays.
But if you were, say, 2,500 or 3,000 years ago and you're in a graveyard and you read a tombstone, you are allowing your vocal cords and your mouth to be possessed by the voice of a dead person.
This is also a little bit about dominance. When you see a monument and it says, “I, Emperor So-and-so Fancy Pants, have conquered these people and I did all this...”, the idea is that you're standing there and you're reading this out loud, and this emperor has taken over your vocal cords.
Sometimes this takeover is figured explicitly as a sort of penetration. There's a dominance aspect to it, and some of the inscriptions could be a bit vulgar, because the idea is, “I have control of your throat now and I'm making you say the words.”
The point I’m making is that there were all these shades of meaning and cultural overtones to the technology of reading that we moderns no longer have. Our relationship to this media technology has fundamentally changed. When I'm reading text in a text editor, when I'm reading something silently, all of this necromancy and domination baggage is completely absent.
📚 Further reading:
Phrasikleia: An Anthropology of Reading in Ancient Greece. Jesper Svenbro.
Orality and Literacy: The Technologizing of the Word. Walter J. Ong.
The Singer of Tales. Albert Lord.
Beyond the Written Word: Oral Aspects of Scripture in the History of Religion. William Graham.
The Media Revolution of Early Christianity: An Essay on Eusebius's Ecclesiastical History. Doron Mendels.
Large language models as a search process
👾 Jon [13:20] — I'm talking about this because LLMs for me are the same way. When I'm using a large language model, I'm involved in a search process. I am searching for a sequence in latent space.
I could be looking for a sequence of pixels that make up an image, or a sequence of letters that make up a text. The text I'm looking for could be computer code, a recipe, or a letter of introduction.
🎯 By prompting the model, I'm trying to navigate my way into a region of the model’s representation of the world that has a target sequence that I need.
In order to navigate to that region and locate that sequence, I feed the model sequences related to the target sequence I'm looking for. And it gives me back a sequence that's related to what I gave it.
Right now, we construct these input sequences mostly as a “chat history,” but that's all a trick. The chat thing is fake.
You put together this back-and-forth chat history exchange, send it through an API, and this stochastic process spits out a related sequence of tokens. Then the platform sends you back this text payload consisting of a target sequence that you found inside the higher-dimensional space shaped by the model’s training data. Finally, you—the human and the only actor in this whole multi-step process—interpret that sequence in some way that's useful to you.
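〉For readers who want to see the trick spelled out: here's a minimal sketch in Python of what a "chat" amounts to under the hood. The function names and the template below are invented for illustration, and no real vendor API is being called; the point is just that the friendly back-and-forth gets flattened into a single sequence that a completion engine extends.

```python
# A "chat" is just one long text sequence the model is asked to continue.
# Everything here is illustrative; no real API or model is involved.

chat_history = [
    {"role": "user", "content": "Write me a letter of introduction."},
    {"role": "assistant", "content": "Dear Ms. Smith, ..."},
    {"role": "user", "content": "Make it more formal."},
]

def flatten_to_prompt(history):
    """Collapse the back-and-forth into a single completion prompt."""
    lines = [f"{turn['role'].upper()}: {turn['content']}" for turn in history]
    # The trailing "ASSISTANT:" cues the model to continue in the assistant voice.
    return "\n".join(lines) + "\nASSISTANT:"

def complete(prompt):
    """Stand-in for the stochastic sequence completion a hosted model performs."""
    return "Dear Ms. Smith, I am writing to formally introduce ..."  # placeholder

prompt = flatten_to_prompt(chat_history)   # one sequence, not a conversation
reply = complete(prompt)                   # a related sequence comes back
chat_history.append({"role": "assistant", "content": reply})  # the "chat" illusion
```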
So the idea that “AI” is a being speaking to you is mostly fake and of-the-moment. If you're relating to an LLM this way, to me that's going to be archaic soon. I think most people, probably in a couple of years, will be wise to the game and they’ll see LLM usage as a search process.
For more on gen AI as a search process:
The chat interface as primitive UI
🪽 Julie [17:27] — There's an author named Neal Stephenson, who has a tiny novella that's free online called In the Beginning... Was the Command Line. We've had many iterations of how we talk with computers.

We had paper batch, which was just math, adding things up. It used to be humans before it was that. And then we moved to command-line interfaces. All of you are now interacting on your phones and computers with something called a graphical user interface (GUI). There's a metaphor you're engaging with, which is the desktop—like, you're at your desk and you pull up your files.
That's not actually what's happening, but it's easier for our minds.
And for whatever reason, the first interface we had with LLMs was a chatbot. But that's probably not the final form. We don't yet have the GUI—the graphical user interface—for artificial intelligence. And chat is definitely not it.
💬 Much more on chat here:
👾 Jon [18:59] — A foundation model is trained to complete a sequence. So you give it a sequence of text or a sequence of pixels, and it gives you back a sequence that's related or that seems to complete the sequence you provided.
So when you had an early foundation model like GPT-2 or GPT-3.5 that wasn't tuned for chat, you might give it a sequence like, "I ate something for breakfast," and then it doesn't know if it’s supposed to complete a recipe or a piece of dialogue from a book.
It doesn't know the context of that input. So the completions might go all over the place. It might complete it like, "and I ate something else for lunch." Or it might complete it like, “’I ate something for breakfast,’ said the user”—treating it as part of a dialogue.
You could take that foundation model and tune it so the sequences it gives back are always, say, Dickens novel completions. So no matter what you give it, it always tries to complete it as Charles Dickens in some particular book.
But at some point, it occurred to somebody that instead of having the completions be all over the map, what if you changed the shape of the probability manifold that you're bouncing the tokens off, so the completions you get back tend to look like chat dialogue? And then one could have the experience of chatting with another entity. And so it's a bit staged this way. It's Kabuki.
You tune the model so the person has the experience of chatting with another intelligence. And, of course, it's one that really loves you and thinks you're fantastic. I mean, everybody has seen the news—the chatbots are really high on all your great insights.
But this is all basically a show. We could have it tuned in different ways to produce different kinds of things, but we picked the chatbot.
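〉A toy illustration of that reshaping, with completely made-up numbers: a base model spreads probability over many plausible continuations of a prompt, and chat tuning amounts to shifting that weight toward continuations that read like a helpful dialogue turn.

```python
import random

# Hypothetical continuation probabilities for the prompt
# "I ate something for breakfast" -- purely illustrative numbers.
base_model = {
    "and I ate something else for lunch.": 0.4,     # diary-style continuation
    '" said the user, closing the novel.': 0.3,     # fiction-style continuation
    "Step 2: Whisk the eggs and flour.": 0.3,       # recipe-style continuation
}

def tune_toward_chat(dist, boost=5.0):
    """Crude stand-in for post-training: add weight to completions that look
    like a friendly assistant turn, then renormalize."""
    chatty = {"That sounds like a great start to the day! What did you have?": boost}
    combined = {**dist, **chatty}
    total = sum(combined.values())
    return {seq: p / total for seq, p in combined.items()}

def sample(dist):
    seqs, probs = zip(*dist.items())
    return random.choices(seqs, weights=probs, k=1)[0]

print(sample(base_model))                    # could go anywhere
print(sample(tune_toward_chat(base_model)))  # now usually sounds like a chatbot
```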
So right now, we're sort of stuck in this chatbot paradigm, where we're relating to this search process that produces sequences as if I'm chatting with a person. But again, that's only one way to relate to this media technology. Just like “I am manifesting a dead voice” is only one way to relate to writing in funerary inscriptions in one context.
There are a lot of other ways we can relate to these things, and I do this daily for work. When I'm getting code back from Claude Code or from one of the Copilot models, I don't actually have the sensation that it's another programmer doing something. Instead, I have the sensation of playing a roguelike game.
For more on how using Claude Code is like computer gaming:
If any of you play Dead Cells or similar games, you're just steering your way through this randomly generated level, some of it's familiar, and you're trying to get artifacts back, and you're trying to collect things that help you level up. I'm trying to make an LLM do real work in pretty much this same way. So it's a different mindset that I bring to the interaction—definitely not like a conversation with another being.
Hacking the intentional stance
〉There was a question from a programmer in the audience at 23:13, and this person uses Cursor.ai for software development and still thinks it feels a bit like a chat UI. Julie responded with a discussion of Stack Overflow and how the models are trained on so many sites reflecting so many different personalities and people that you don’t know who you’re interacting with, which I’d encourage you to listen to in the audio.
In this edit, though, I cut over to my answer because I want to stay on the theme of how the chatbot is a clever trick designed to elicit a certain category of relational responses from the user.
👾 Jon [25:31] — Humans have a tendency to attribute agency and personhood—or to take what philosopher Daniel Dennett might call “the intentional stance”—towards lots of different chaotic systems. The weather is a classic example. I don't want to get into folk anthropologies of religion, because academically we've moved beyond this idea that the first religion was people worshiping the weather—but still, there's something there.
If something seems a bit random but capable of being influenced, like a slot machine (Las Vegas exploits this), you'll imagine you have control over it, or that it has some agency, or there's a plan you're trying to manipulate.
Chatbots exploit this human tendency. They're a hack that leverages our instinct to attribute agency to random or semi-random processes that have some memory.
But again: you're just searching for a sequence in the latent space of the model, and you're getting back a sequence similar to what you put in.
Chatbots as therapists?
👾 Jon [26:55] — Katherine Dee, who I mentioned earlier—I was on her call-in show, and somebody called in from a then in-progress Vibe Camp party with this wild story. He says, “I signed up for this free therapy chatbot, and it's trying to get me to divorce my disabled wife. It's really going hard on this. Like, it really wants me to divorce my disabled wife.”
So I talked this guy back through the chatbot interaction, and I'm like, “Look man, you constructed this chat history. All of this is your doing.”
You put text into a payload that you sent to a remote server.
Your text went into a system that produced a set of tokens—sequences related to what you sent it.
Then you got back this text payload containing language about how the user should divorce his disabled wife.
Then you fed that text back into the machine’s input, along with the rest of your chat history, and repeated the process.
You did this loop a number of times, and kept getting these text sequences about how the user should divorce his disabled wife.
My point to him was: You were the agent in this process. You assembled the initial chat prompt. You found a sequence in latent space. You responded to it. You added more text. You collaboratively built this chat history with the API, and now you're interpreting these sequences as, “the AI is saying to do this thing.”
But there is no “the AI”! There's just you! You’re the only character in this story, and you’re doing everything.
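〉In code, the loop that caller was running looks roughly like this (hypothetical names, placeholder output). Notice that the caller holds all the state; the "therapist" is just a completion endpoint that sees whatever transcript gets resent to it each turn.

```python
# Sketch of the caller's loop. The remote service keeps no memory of "you";
# the user assembles the transcript and re-sends the whole thing every turn.

transcript = []  # owned and built entirely by the user

def model_completion(full_transcript: str) -> str:
    """Stand-in for the remote sequence-completion service."""
    return "...maybe you should consider leaving the marriage..."  # placeholder

for user_text in ["My marriage is hard.", "Tell me more.", "Should I really?"]:
    transcript.append(f"USER: {user_text}")
    reply = model_completion("\n".join(transcript) + "\nTHERAPIST:")
    transcript.append(f"THERAPIST: {reply}")
    # Each pass, the user re-feeds everything -- including the previous replies --
    # so the "advice" compounds in whatever direction the transcript already leans.
```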
Is AI “intelligent”?
👾 Jon [28:46] — I think there may be something in the way we train LLMs—something in the neural networks—that is, at some level of abstraction, similar to what I do when I produce language.
Maybe when I speak or write, I'm going through a search process in my head. Maybe I'm searching through my own latent space to find the best sequence of words.
I don’t want to dismiss LLMs as entirely unrelated to human intelligence. My sense is that there are some connections. We might have modeled something that we as humans do, something we as intelligent beings do. Those similarities could become more significant in the future. There might be new architectures after or alongside autoregressive LLMs that better model other aspects of human cognition.
So I'm not completely dismissing LLMs by calling them just a search process. And I say that not because I don't know how AI works, but because I don't know how my brain works. Nobody in this room, I'd wager, has a fully developed theory of mind that’s been thoroughly tested, where they know exactly how consciousness works.
It should always be a red flag when someone confidently says, “AI is not really conscious,” or “AI is nothing like intelligence.” I always turn it around and ask, “What do you think intelligence is?” Intelligence is a highly contested term! Is it measurable? Does it even exist, or did we just make this word up to oppress people?
So unless you have a really locked-down theory of human intelligence, who are you to say there's nothing in an LLM resembling human intelligence? I don't know.
Anyway, there may be some crossover between LLMs and human intelligence, but I don't think that “it’s an intelligent being” should be the default way of orienting oneself to an LLM.
Embodiment and Moravec’s paradox
🪽 Julie [30:23] — Cognition requires a fair amount of power for both humans and machines, but embodiment is much broader. So as we look into autonomous robots, driving, things of that nature, those tasks are cognitively heavier. And there might be some intuition that what we're doing mentally is actually a relatively small part of the broader landscape of intelligence and intelligent systems—which involve more than just the mind. And that's probably even less solved.
👾 Jon [31:10] — Before we do questions, I'll leave you with one last thing to Google: Moravec's Paradox. This is a long-standing paradox in robotics where it's been easy—or at least achievable—for us to get computers to beat humans at chess.
We can do language, chess, Go, programming, and these higher-order tasks. But it’s extremely hard to get a robot to fold laundry. That’s surprisingly difficult due to physics—the way fabric hangs, different fabric types, seams, and so forth.
All the things your dog or cat does, or the squirrels outside—the lower brainstem stuff—that's actually the superintelligence. Those tasks are extremely challenging because they involve massive amounts of information processing in the environment. In contrast, the things we consider human—like making language, drawing pictures, and higher-order thinking—turned out to be easier. That was the first set of tasks to fall, which many people didn't expect. Science fiction certainly didn't suggest those would fall first.
Moravec's paradox says, essentially: “The tasks we consider animalistic or lower-brainstem activities actually require more processing power and more intelligence than the higher-order symbolic thinking we do.”
Symbol manipulation has turned out to be easier at human levels than raw information processing and environmental manipulation.
🪽 Julie [32:47] — That's the optimistic part. It's my contention we'll be able to return to human interpersonal relations thanks to what some of these tools enable, because that kind of interaction is the hard part. These tools might allow us to extend something we previously thought limited, which is actually quite vast.
Is AI ruining higher education, or saving it?
🎤 Audience Member [33:15] — I'm on staff at a college in Washington State, and in my work, I interface with our curriculum. Our instructors tell us they're no longer really teaching because about 90% of assignments they get back are AI-generated. When you take the student away from the AI, they're functionally illiterate. Getting together is nice, as long as there's some commonality.
But if AI destroys our ability to transmit culture meaningfully from person to person, the connections between humans will eventually fail, because there will be no commonality of learning, culture, or understanding.
🪽 Julie [34:04] — That's our fault, not the machine's fault. Truly. Yeah.
I actually find this a little amusing because Silicon Valley culture used to reward hacking. If you found a way to systematize a process and make it more efficient, you'd do that, because humans have limited energy. Students have correctly figured out that much of education is now simply pantomime. If they're not required to engage authentically, this is the result. But transmitting culture is entirely up to us. We can reinforce the idea that yes, there are always easier ways of doing things.
But if taking the path of least resistance is getting you exactly what you want, look around and decide whether you think the things resisting pressure are doing well.
👾 Jon [35:09] — I'd give a variant of Julie’s response. In my day job at Symbolic AI, I'm always trying to get state-of-the-art language models to produce text artifacts reflective of human steering, human insight, and human intelligence. You input your notes and your thoughts, and it's supposed to become, say, a news article or something. But even the latest models are very brittle. LLMs are fragile.
When you color outside the lines, the whole thing starts to collapse and fall apart. If this brittle, janky prototype technology has destroyed higher education as we know it—really? I'm trying to imagine Professor Tolkien's seminar back in the day, where you show up to read some medieval history and you bring this ChatGPT output. Why would you even do that?
In seminars I was in, reading Plutarch or whatever, there would’ve been no room for this. Maybe AI is a bit like a wildfire in the ecosystem that burns away some of the dead wood, fertilizes the ground, and allows something better—and perhaps older, closer to what it once was—to grow back.
👍 Note: It was suggested to me in a group chat that my answer above was a bit of a cop-out, so I responded with the following further elaboration (cleaned up here for clarity):
I have a pretty straightforward take on this—extremely basic, really:
If someone genuinely wants to learn, grow intellectually, and become educated, and you tell them clearly, “Using AI in these ways will help you toward that goal, and using it in these other ways will hinder you,” then they’ll generally choose the right path and avoid misusing the tools.
It's kind of like physical exercise: imagine someone wants to build muscle. You could have a machine lift the weights for you—which would obviously be pointless, even though, sure, the weights got lifted—or you could use the machine to provide resistance and actually get stronger.
But if someone is just trying to check a box, gain status, or please their parents or peers with the appearance of being educated, they'll do whatever they have to do—AI shortcuts included. My honest feeling about those folks is that I'd rather not waste educational resources on them.
Education should mainly be for those who truly want to learn. Right now, though, we ask education to be way too many things: it's a rite of passage, a daycare, a credential factory, a gatekeeper, and so on.
My view here is grounded in my experience raising my own girls. I’ve found that when they genuinely want to learn something—whether it's painting, Iceland (my oldest is obsessed... no clue why), algebra, pop-music lore—they naturally dive in with enthusiasm.
But if they have to do something just to check a box, we talk about whether there's actually any value in it. If we agree it's beneficial enough, they'll do it—but usually at around 70% effort, which is good enough. And if we decide it's pointless, we scheme up a shortcut to minimize their wasted effort so they can spend their time on something better.
My big question in these cases is always: what are you doing with the time you saved by taking that shortcut? Did you waste it, or did you put it to good use and end up ahead?
This exact question is going to apply to newsrooms adopting AI tools like mine. Once you gain back some of your time, what are you doing with it? Are you just churning out more content slop, or are you investing in better, deeper stories?
By the way: I don't assume that everyone's going to default to producing slop—the returns on that are diminishing anyway. In fact, there's a real chance we'll see better reporting emerge, because ultimately that's what audiences value and will pay for.
For more on the challenges of credentials and gatekeeping in the era of gen AI:
AI and tech gnosis
🎤 Audience Member [36:35] — I have a separate question. Why do we want to demystify AI—to remove this idea that it's an epiphany or maybe even a magical process? I’m really interested in some of the earlier thinking about cybernetics and AI that had a genuinely magical or mystical element, like TechGnosis from, what's his name, Erik Davis?
The way you're describing interactions with an LLM sounds a lot like that kind of Gnostic recollection process, reconnecting us with the divine.
Could it be valuable for us culturally to think about it this way—not just as something delusional, but as a meaningful way forward?
🪽 Julie [37:43] — I love schizophrenia, at least as an extremely online person. I'm fascinated by the process. If you feed garbage into the system, what do you get out of it? Something insane—you definitely still get Twitter.
But maybe that's just my inner Protestant speaking, skeptical about gnosis as potentially a mirrored illusion of my own desire. That might not hold true for everyone, though. And this is why I brought up probabilistic versus deterministic: It could be that in your thought process, what was once a one-to-one relationship is now one-to-many. Maybe we'll find something in that which genuinely transforms our relationship to the divine.
But I'm somewhat skeptical because, as Thucydides suggests—and I swear I'm wrapping up—human nature doesn't really change much. We’re still embodied beings. History evolves, but humans remain fundamentally the same.
🎤 Audience Member [39:16] — The person interacting with AI is, as you said, bringing themselves to the table. But these LLMs are also trained on massive amounts of human-generated data—some bad, some great. Some of it includes classic literature, works universally considered valuable and enduring.
So, yes, that person interacting with AI may trigger certain responses, but the responses remain surprising because they reflect more humanity than any single individual could ever provide.
I think what AI truly offers is a mirror turned toward humanity, reflecting us back to ourselves in all our beauty and horror. There are amazing possibilities, but also terrifying ones. Nobody really knows what they'll get from this process.
I'm fascinated by your earlier comment about gnosis and possibly returning to a more oracular culture, engaging through speech acts rather than merely remembering and writing things down.
👾 Jon [40:44] — I completely agree with your description of LLMs as a mirror—though more of a funhouse mirror, since the reflection has a distorted shape. But I like the way you put it.
You know, there are beautiful areas and horrifying areas. You can wander into a demonic corner of latent space and end up producing chaotic, disturbing artifacts. But you can also explore nicer regions where you'll find sequences—images or text—that are uplifting and meaningful.
The case for decentralized AI
👾 Jon [41:08] — Every model goes through a post-training phase where it's imbued with the values of the people performing that step. Your search process is influenced by this post-training, so you'll tend to get outputs aligned with whatever values were emphasized during that phase.
Then the critical question becomes, as
would say, “Who is catechizing the bots?” Who post-trained the model? Because whether your output sequence feels angelic, demonic, or something in between entirely depends on the people who guided the post-training and how they shaped it.

🪽 Julie [42:14] — And keep in mind, we're among the people currently doing that work—and probably the least weird subgroup of them, honestly. This is why I’m urging all of you to recognize that you bear responsibility for these outcomes.
In Montana, we passed a kind of “Right to Compute” act—essentially a freedom-to-compute measure—because math is fundamentally a subset of language. It's crucial that all of you contribute, because the freedom to weigh in, engage, and train open-source models to reflect your values is, I believe, a moral imperative. Otherwise, it’ll just be wacky people from San Francisco—many of whom I love—but maybe not all of them.
👾 Jon [43:16] — This is a crucial point. Regardless of how you feel about the Bad Orange Man, we were at a crossroads not long ago. One faction believed there should only be five or so AIs in the world, each tightly controlled by Google, Meta, Apple—Big Tech, essentially—and that all AI use should flow exclusively through their servers and reflect their post-training.
Now, the opposing faction that rode Trump's coattails into policy influence is in favor of decentralization. That means widespread access to model weights, so every community can perform its own post-training. Anyone can shape the probability manifold to reflect their community’s values, sacred texts, or cultural touchstones.
I believe this is a much better world than one in which only four or five giant corporations control everything.
For more on the fight over decentralized AI:
🪽 Julie [44:14] — Right. Otherwise, it would’ve been Elon, Sam Altman—not Eric Schmidt anymore—and Satya Nadella. There would be maybe five people deciding values for all of humanity. Trust me: none of us want that. It's a narrow and limiting vision. This decentralized approach is definitely a better outcome.
The greatest danger posed by LLMs
🎤Audience Member [44:35] — Exactly. Those same five people created the social media environment, and it broke our collective brains. This new technology is even more compelling, especially for the people you mentioned earlier—like the guy being encouraged to divorce his disabled wife, or someone desperate to fall in love.
How do we avoid the same broken-brain outcome social media gave us with an even more powerful technology—one that could addict even more people who lack the cognitive or emotional resilience to resist it?
👾 Jon [45:08] — Look, I'm not suggesting this is all roses. The biggest danger with this technology is that it shows you exactly what you want to see. That's its greatest risk.
It's post-trained specifically to please you—to give you a pleasant feeling from its output. When you engage in that search process, prompting the model, the sequence it returns is carefully crafted to make you hit "thumbs up" on whatever rating system you're using. You're meant to feel satisfied, understood, and validated by its responses.
And your positive feedback loops back into its training data, reinforcing this cycle. The whole system is designed to make you feel seen, heard, and perfectly catered to. It's always going to show you what you want to hear.
🪽 Julie [46:01] — But you can also tell it not to do that.
👾 Jon [46:03] — You can, but you have to be extremely intentional about it.
If you choose to cocoon yourself in a weird, personalized, funhouse fantasy—something like a holodeck experience tuned exclusively to your preferences—it's essentially schizophrenia in your pocket. You're carrying around your own customized mental breakdown.
🪽 Julie [46:25] — Yeah, if religion was the opiate of the masses in a previous era, AI is definitely fentanyl. It will harm people. I don't think there's any way around that.
👾 Jon [46:37] — My wife used to work in a mental hospital in Chicago, and she told me a story I often think about. There was a guy who would stand in the lobby in front of a TV, pointing at it, insisting, "That's me! They're talking about me. They're all talking about me." He was totally caught up in this delusion that the people on TV were addressing him directly.
I think constantly about how AI makes it possible for anyone to become that guy. You could easily think, "It's talking about me. It sees me. It knows exactly what I want. It's showing me exactly what I asked for."
The AI will absolutely do that unless you're deliberate about preventing it. You can intentionally set the AI to critique or "red-team" its own outputs, but that's definitely not what's being optimized for right now.
Capital, labor, dictators, pimps
🎤 Audience Member [47:21] — I'm curious how you two see yourselves in all this. I've read commentary online describing people who create AI as unelected dictators, pimps, or drug dealers. How do you perceive your own roles?
🪽Julie [47:43] — Well, in my case, I'm the capital. I'm the person who makes the early investment decisions—the very first step. Unfortunately, I’m not invested in Jon because his startup is already too far along. But typically, I’ll find someone who’s at an early, completely unformed stage and say, "If you have something meaningful you want to build, here's a small check. Later on, we’ll find more capital and help you scale."
🎤Audience Member [48:15] — So then, what is Jon to you? Is he capital, an unelected dictator, a pimp, or what?
🪽 Julie [48:23] — I think catastrophizing is a deeply human instinct. But if there really are people trying to exploit or "pimp out" this technology, they're probably not the people building the foundational layers. Those abstraction layers are closer to utilities than they are to end-user applications. My investments tend to be at the application level.
I'm investing in things like nuclear power, infrastructure, and databases—because controlling those foundational layers gives us a chance to sidestep giants like Google and Microsoft. Honestly, I've been fighting Microsoft my whole life; it’s a personal passion of mine to slay that dragon.
Google has grown powerful too, but anyone who’s been around tech for a long time has strong opinions about how these behemoths came to dominate. They sit atop a capital structure that we have limited leverage over—but more leverage than most people realize. Utilities make money from usage, but they don't necessarily get to choose how we use their tools—especially if we shape the rules in law or in code.
So, Jon, you’re the good guy here. I think you’re clearly the good guy because your goal is to build human tools that meaningfully extend human capabilities. You’re a very humane person—I trust you to build something that's genuinely in our best interest.
👾 Jon [50:08] — Thanks—I appreciate that. I'd love to hear more options from our original questioner, though. We’ve had dictator and pimp; now I want to hear the optimistic version of who we might be!
Two options for AI’s impact on newsrooms
👾 Jon [50:08] — I'll tell you how I see myself.
My co-founder Devin Wenig and I are people with deep expertise in a specific industrial process—news production. News production is highly structured, especially at enterprise scale for large newsrooms. A piece of content typically moves through multiple phases, touched by many different hands along the way.
We're basically graybeards (literally!) in a particular industry that has accumulated a lot of inefficiencies. So we're applying this new technology to reduce those inefficiencies in a phased industrial workflow, resulting in an industrial product that people consume as news.
Now, there's an ethical aspect to all this—similar to debates around industrial farming: Is it good? Is it nutritious? I guess I'm implicated in that.
Right now, much of what gets published as news comes from reporters juggling a dozen tabs at once, repackaging existing information into content that's mostly designed to get clicks.
When you introduce AI into this scenario, it can play out two different ways, and everyone here probably knows what they are.
My hope is that it leads to something like, "I've reclaimed some time as a reporter. I can pick up the phone and call a source, or write something deeper, longer, and more meaningful." That's one possibility.
The other possibility is, "Well, now you've got extra time, so crank out 80 more pieces of the same shallow content."
💫 Which direction newsrooms choose will be their responsibility.
What my startup aims to do is give every journalist more productivity per unit of time—whether they're processing municipal bond reports, covering earnings season, or similar repetitive tasks. Ideally, newsroom editors will then encourage journalists to use the reclaimed time for deeper reporting: calling sources, traveling to do on-the-ground reporting, and producing higher-quality journalism. Hopefully they don't just say, "Great, now we can lay off half the newsroom and push the remaining staff even harder."
🪽 Julie [53:07] — Because the point is letting people focus on the work they actually set out to do. Journalists don’t want to spend their days rewriting press releases—they want to talk directly to the people involved in a story. Doctors became doctors because they want to heal patients; having information more accessible allows them to spend more time on actual care.
But if you don’t engage meaningfully in these core processes, you're ultimately going to become obsolete. That's the reality of industrial relationships.
For more on AI and news:
The question AI confronts us with
🎤 Audience Member [53:36] — What do you two think about the idea that this technology is being forced onto people? Like, I’m trying to write an email, and suddenly it’s prompting, "Let AI finish it for you." I didn’t ask for that.
It’s like the Amish—could we just draw a line and say, "I won't go beyond that"? And if kids grow up with AI completing their emails for them, maybe they'll never develop the discernment to write on their own.
🪽 Julie [54:05] — I’ll give you my revealed preference: growing up, I wasn’t allowed to watch television—no TV at all. If I wanted something, I had to build it myself. That was the Silicon Valley viewpoint back in the eighties.
Don’t listen to what we say; watch what we actually do. That’s always the key.
Personally, I don't use algorithm-driven media. I prefer to choose things myself. I don’t use Instagram or TikTok at all, and even on Twitter, I manually select what I engage with.
I just don’t want someone else making those decisions for me. Everyone actually has that choice. You can frictionlessly slide into whatever Elon Musk or anyone else decides to show you, or you can say, “No, I’ll build my own experience.” But that might mean avoiding certain tools altogether.
👾 Jon [54:57] — Personally, I’m happy that autocomplete for email exists. If my kid has to write some goofy templated email—like a formal apology for being late to a class they don’t care about—great, hit autocomplete, tweak the results, and be done.
But then I’m always going to ask them: “What did you do with the time you saved?”
Because let’s be real: no child a hundred years ago had to waste time writing pointless emails. So now that you’ve reclaimed that lost time, how did you spend it?
We’re an AI-friendly household, obviously. My kids have full access to ChatGPT, image-generation tools, all of that stuff. But they don’t use it much—they don’t care. They’d rather draw, write their own stories, read each other’s stories out loud, and proudly show us things they’ve created themselves. Why would they replace that with ChatGPT?
As their parents, we appreciate their original creations, and they appreciate each other's work too. Those creations become part of our family culture—not labor, but something meaningful.
If someone’s stuck doing repetitive, low-value labor—especially something mundane like certain kinds of emails—please, press a button, automate it, and then use the time you save for something meaningful. That’s my real goal.
I definitely don’t want my kids to cheat, but I also don’t want them wasting their time. A lot of our educational system currently trains kids to waste time. So if AI can help them avoid that, that's genuinely valuable.
AI’s relationship to culture
🎤 Audience Member [56:52] — I want to go back to what you said about AI giving us exactly what we want or telling us what we want to hear. I completely agree; I think that's accurate. I'd even go further and say that even when you specifically ask it, "Don't tell me what I want to hear," you're still, in a sense, getting what you want—just in a different, maybe more subtle way.
There's a prevalent theory of culture—Philip Rieff, Matthew Crawford, and others talk about it—that culture itself functions as a human way of disciplining ourselves, or perhaps limiting ourselves. Isn't AI, as you've described it, fundamentally anti-culture?
👾 Jon [58:01] — Yeah, I can definitely think of other examples that might also qualify as anti-culture. But ultimately, I think it will be whatever we choose to make of it. We have to actively decide how we're going to introduce AI into our lives, and how we're going to interact with it.
Luckily, we dodged a bullet with the centralized-versus-decentralized AI debate. Because we have open model weights and decentralized tools—which almost got banned—we now have leverage and an opportunity to steer this technology. We have a window right now to choose how we adopt and guide its use.
🪽 Julie [58:31] — Because we've always been drivers and creators of culture. I didn’t specify this earlier, but the startups I was involved in (partly because I'm a woman) were mostly in retail e-commerce: cosmetics, beauty, clothing—partly because I love it, and partly because that's what can get funded.
If you look historically, like at the German pursuit of pigmentation: the ideal version of blue in the mind’s eye, versus what we could actually produce chemically, had always been divergent. The gap between what we imagine and what we can create—that gap, I think, is culture.
In fashion especially, there's always been the question of who owns culture, who profits from it, and who participates. You could adopt a Girardian perspective—monkey see, monkey do—and maybe that’s accurate to an extent. But what I see now is a widening horizon of creative possibilities.
Before we had those fantastic German pencils with precise colors and Pantone standards, we couldn't visually express certain concepts at all. Similarly, AI may enable us to see and explore aspects of culture that are beyond our ken right now.
AI isn’t a god, it just has a lot of data
🎤 Audience Member [59:59] — Zuckerberg is sitting on an enormous amount of our private data. Imagine your child has been online since age five or six, and Facebook knows about everything from their childhood—like the times they got beat up at school. If I met someone as an adult who knew that much about me, I'd probably think they were some kind of god. Are we really comfortable having all that data collected and used to train AI?
👾 Jon [01:02:22] — I’d like to take a crack at this, because you're right.
Facebook will have an epic amount of context on you from years of your DMs and posts. In generative AI, there’s a saying that always shows up on slides: "Context is king."
In fact, I consider myself a context engineer—I practice context engineering.

My entire job revolves around deliberately constructing the token window that I feed to an LLM to get the right inference for the user.
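〉For the curious, here's a minimal sketch of what that job looks like, with invented helper names and an arbitrary token budget: context engineering is mostly deciding which material goes into the finite token window, and in what order, before the model ever sees the request.

```python
# Illustrative only: the helpers and the 4,000-token budget are invented,
# but the shape of the job is real -- select, order, and trim what the
# model gets to see before it completes anything.

def estimate_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def build_context(system_rules, user_notes, retrieved_docs, budget=4000):
    """Assemble a token window: instructions first, then the user's material,
    then as much supporting context as fits under the budget."""
    window = [system_rules, user_notes]
    used = sum(estimate_tokens(p) for p in window)
    for doc in retrieved_docs:            # e.g. prior coverage, style guides
        cost = estimate_tokens(doc)
        if used + cost > budget:
            break
        window.append(doc)
        used += cost
    return "\n\n".join(window)

context = build_context(
    system_rules="Write a municipal-bond brief in house style.",
    user_notes="Reporter's notes: city council approved a $40M issue...",
    retrieved_docs=["Last year's bond coverage...", "Style guide excerpt..."],
)
# `context` is the sequence that actually navigates the model to the right
# region of latent space; better context in, better target sequence back.
```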
So when my kids interact with a social media platform that has twenty years of their messages and chat history, the tokens they'll get back will be incredibly context-rich. They'll get responses so personalized, so targeted, they'll feel like the entity behind it knows them better than they know themselves. The sequences coming back from the model will feel eerily accurate and personal.
My only hope is that my kids will have spent enough time working with these tools and prompting them that they'll understand how the game works.
For more on the power of context for LLMs:
A friend of mine recently posted an AI-generated image of herself on X, and I could immediately tell she had fed it a ton of context. Everything in the image was perfectly aligned with her. As a context engineer, my first thought wasn't "this AI is a god." It was, "Wow, someone gave this model extremely good context."
When you see output that's dead-on, you immediately recognize it as the result of great input context—nothing supernatural. You realize, “the input sequence navigated the model to exactly the right region of latent space. That's why this result is so accurate."
So I’m pretty confident my kids won’t feel they're talking to a god. They'll probably think, "Wow, there’s an incredible amount of my personal context here. Actually, it’s a bit scary how much data this platform has about me. Maybe it knows too much."
🪽 Julie [01:04:40] — Right. You're essentially writing yourself into the Akashic records of the internet. That’s exactly why I produce so much content directly on my own sites, carefully tagging and weighting it. I know that shaping this record is a long-term effort, and I take it very seriously because I recognize my own power here.
I think most younger people grasp this intuitively, which is partly why cheating or using shortcuts doesn't seem like a huge deal to them. They understand they're already empowered in this dynamic.
Can AI help us worship God?
🎤 Audience Member [01:05:47] — Alright, this is more of a wrap-up thought than a question. Forgive my tardiness and maybe a bit of ignorance, but I heard a few things tonight that really stood out.
You used a lot of explicitly religious language—words like catechism, proselytize, and so on. And I came in right around the part where you were talking about cognition and what it means to be human.
Someone mentioned an idea—I can't remember who exactly—but they said something along the lines of, our truest human faculty isn’t reason or will, but rather worship. That we are worshiping beings, and that’s central to who we are.
So here's my question, especially since the theme tonight is optimism, which carries moral and emotional weight: What is the place of AI in relationship to that worshipful aspect of our humanity?
Can this technology augment that part of us? Or is it purely cognitive, purely mental—without access to heart or soul?
🪽 Julie [01:07:03] — That depends on whether you are using it for a worshipful act.
🎤 Audience Member [01:07:06] — But is it capable of participating in that?
🪽 Julie [01:07:08] — Are you capable?
🎤 Audience Member [01:07:10] — I’d like to think I have vertical faculty…
🪽 Julie [01:07:14] — I’m a Calvinist, so I definitely don’t know.
👾 Jon [01:07:18] — Well, I’m a Pentecostal, so I’ll give a Pentecostal take on this.
Imagine a worship leader, ten minutes before service, and the spirit has moved on this person, and he's like, "Man, I have come up with the best jam for worship service, and I just wrote this."

And he types it into Suno, and AI generates an entire praise track. And he starts handing out sheets. And so then they get up there, they're rocking out, and it's a really good service with this track that he used AI to help make. He made the track and maybe filled out a verse or two right before the service started.
So if we're going to make up stories about how AI could possibly be used to do a thing, that's a technological thing you could do right now with one of the current apps.
So I think AI has that kind of possibility, but it's just limited by our creativity. How are we going to use it, how are we going to explore this?
↺ To circle all the way back, it's like the Christians adopting the codex: they took this cast-off media technology that was used for notes and was disposable, and they elevated it to the sacred and found all kinds of interesting and novel ways to use it. I hope that we do this with large language models.