I Say This Unironically: Our Society Is Not Prepared For This Much Awesome
The AIs are just going to keep doing this, but it's not all bad.
The story so far: We’re not even a year into the generative AI revolution, which I’d date from the launch of Stable Diffusion in August 2022, and already so many parts of our culture are rekt.
👀 A very brief, random look at a tiny bit of the damage, as of this morning:
Outlets that publish short fiction are now flooded with AI-generated submissions, to the point that they’re questioning whether the whole model of such outlets is even viable anymore. The main complaint doesn’t seem to be that this stuff is terrible (though much of it is, no doubt), just that it’s not made by humans.
ArtStation and related venues are flooded with AI-generated submissions and are trying to enforce some sort of tagging policy so they can separate what’s human-made. Again, it’s not bad — it’s just not human.
College admissions offices are bracing for a wave of AI-generated personal essays. Same story… not bad, just not human.
College professors are getting AI-generated work in response to assignments. Yet again, more stuff getting rekt by sudden, non-human awesomeness… Or, well, maybe better-than-averageness, but soon-to-be awesomeness when the next wave of LLMs drops.
K-12 school teachers are getting (presumably awesome) AI-generated work from students, so some districts are trying the head-in-the-sand approach. They want to hide from the non-human awesome.
I just pulled the above out of my TL and my inbox from this morning. I’m sure readers could supply many more examples from their own inboxes and feeds.
jonstokes.com is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
There’s a consistent theme in all of the above examples, but I’ll return to that in the next section. Before I get to that, here are some Monday thoughts on all of these issues, in no particular order:
🙈 Automatic detection of AI output does not and will not work. It doesn’t even work in theory, much less in practice. There’s probably a whole article’s worth of explaining to do about why this is the case, but here’s the only thing you really need to know: the people who train new models would love to know what’s human-generated and what isn’t, so they can train their new models only on the human stuff, but they haven’t figured out a solution for this, and they’re the experts.
🔩🔜🚀 Education is screwed in the short term but will be supercharged in the medium to long term. Everyone knows that, as of just a few short months ago, the written work products that form the foundation of our education system no longer reflect the student capabilities, learning, and talents they were assumed to reflect before ChatGPT. But what most do not yet know is that these tools can dramatically improve the educational experience for both instructors and students.
Again, there’s a whole post in these points, so I’ll just drop these three predictions below by way of overview (and in the hopes of revisiting all this later):
1️⃣ Prediction: Students who know how to co-write with AI and do fact-checking can produce better work and gain a deeper understanding of the subject matter than students who don’t use any of the new tools.
1️⃣ Context: Professors and high school teachers are anecdotally finding this to be the case, and I think it’s going to bear out. There’s also a promising early paper on it.
2️⃣ Prediction: Students who do mastery learning with interactive AI chatbots will dramatically outstrip those who don’t by every educational metric.
2️⃣ Context: I heard a great Clubhouse discussion (back when that was a thing) between Marc Andreessen and a charter school startup founder where they described how mastery learning in a one-teacher-per-student setting has been the only educational intervention in the past 100 years shown to definitively and dramatically improve student outcomes. The obvious problem is that a 1:1 student-teacher ratio does not scale, but with AI it actually can.
3️⃣ Prediction: Students who want to learn will still learn. Students who are just there to collect a credential with minimal effort will find the credential is now worthless, and the only thing of value at school is learning for learning’s sake. So the short version is: learning is not only fine but will get way better, but credentialing is in big near-term trouble.
3️⃣ Context: See my points below about access control and gatekeeping. Credentialing apparatuses where the input is (ostensibly) student-written papers and the output is a certification are now done for. We’ll have to figure something else out for credentialing, but that’s a separate problem from learning.
🔐 Across every part of our society, access control just got a lot harder.
Any security that relies on voice authentication will fall to high-quality, open-source voice-cloning and text-to-speech models.
Gatekeeping strategies that rely on the evaluation of written output are already falling to ChatGPT. This is true for all applications and essays that are produced solely for the purpose of being evaluated by someone else for determining access to a scarce resource.
Computer-assisted coding tools are already good at finding security exploits and will get better, soon.
So in every place where we’re using voice, video, and the written word to gate-keep in some form or fashion, we’ll need to rethink that immediately.
Note that this doesn’t mean we can no longer do access control — it just means we have to rethink it for a world where any and all computer-mediated experiences can be convincingly generated by AI.
👍 👎 Following on my point about access control, cultural objects will either be good enough to pay attention to or will be lame enough to be ignored — whether they were made by a person or an AI (or a person with the help of an AI) won’t matter. This is because trying to gatekeep publishing and art venues so that only human-authored work can be published is a mug’s game, and when everyone realizes that they’ll quit trying.
Seriously, the link up top where the guy is flipping out about AI-generated content in his short fiction journal? …Who cares? If the story is good then I want to read it. If it’s bad then I don’t want to read it. Maybe just decide if it’s good or bad, and if the problem is that submission rates are too high, then charge people for submissions.
The big fear: everyone will suddenly level up
💪 Almost all of the fears around AI that are circulating right now — fears of cheating, of “disinformation,” of scamming and spamming, etc. — actually boil down to one thing: fear about what happens when a whole bunch of people, some of whom are stupid and/or irresponsible and/or malicious, instantly level up and get really good at making cultural objects.
Creators are worried about competition from untalented normies who didn’t earn the ability to produce quality work and therefore aren’t that invested in the work itself or in any sort of guild, discipline, community, or other social institution that’s connected to creative work.
Disinfo experts are worried that normies who didn’t go to Georgetown or wherever and who don’t share their goals and values will be as good as they are at writing persuasive political arguments that move citizens and decision-makers. And yeah, they’re also worried about Russian troll farms and such, but mainly I think they want to preserve an elite monopoly on persuasion.
Teachers are worried about kids being really good at churning out essays — again, without earning it by putting in the work. They’re also threatened by the idea that chatbots might soon be better at teaching kids than they are.
Again, all of this amounts to a fear that randos will suddenly get creative superpowers that put them on par with the vetted, the certified, the bona fide, the trained, and the institutionalized.
It’s kind of wild to me that so many people are deathly afraid that the masses will suddenly get really good at making high-quality arguments, short stories, paintings, photographs, movies, etc. You’d think we’d all be excited about the impending flood of high-quality everything. Yet here we are flipping out over the prospect of unprecedented cultural abundance.
I made similar points in a way that’s worth quoting in full in an earlier article:
Here’s my best summary of the dilemma that generative AI presents to our culture:
Growing numbers of people are now using AI tools to generate high-quality, original artwork that they have vanishingly little ego, time, or money invested in. In fact, in most cases, the total investment in any of those three things is essentially zero.
Because these creators have almost nothing invested in the creation of the art, they aren’t expecting much in the way of ROI that might normally flow from getting real creator credit on a work.
But all of our society's infrastructure around artwork — norms, laws, language, expectations — is exclusively geared towards a world where the creation of high-quality art is an investment that should yield some payoff for the creator(s).
The bottom line: For the first time in human history, we’re about to reach a point where the vast majority of people who are personally (and in many cases solely) responsible for producing incredibly high-quality cultural objects do not care about getting credit for the creation of those objects and are not expecting any of the normal benefits — accolades, money, access to certain social circles, patronage, etc. — that have historically accrued to accomplished creatives.
⛓️ I think the main part of this picture that it’s reasonable to be afraid of is the temporary breaking of the age-old link between effort and reward. But I don’t think such a break will really be all that long-lasting, because rewards for creation will simply get re-priced to match the newly reduced threshold of effort.
Yes, this means massive deflation, which is the true long-term worry here. And of course, the short-term worry is the turmoil that arises during the repricing and adjustment period.
The lightning storm (of awesome)
💣 Scroll back up to my bullet list of “stuff that is rekt right now by unexpected awesomeness” and think about how quickly and unexpectedly these core parts of the way we do things as a civilization have been rendered non-viable.
All this stuff didn’t happen over the course of a few years — it happened over the course of a few months.
Literally, we went into the year 2022 living in a world where written exams from students at every level of education were about the same (as a measure of learning) as they had been for millennia, and we went into 2023 in a world where that was suddenly no longer true.
And when the Times Square ball dropped for 2022, artistic images were still mostly produced by human beings, whereas I’m guessing that at this point in 2023, most of the artistic images that are getting made right now (by volume) are the product of AI.
🌩️ This isn’t going to let up, either. We’re in a lightning storm, where no matter what tree you try to shelter under, it may get hit suddenly from out of the blue; and the storm is only getting more intense.
So we have to rewire critical parts of our society right now so that they can start functioning again in this new context. Because if we don’t figure this stuff out ASAP, here’s what the future looks like:
In a few months, you’ll get robbed when your local banker, whom you know by name, gets a call from an offshore scam operation, and it sounds like you but it isn’t you.
In a few years, you’ll get mutilated or killed by some incompetent who faked his way into and through med school via heavy reliance on LLMs.
Next year, your kids will start to fall dramatically behind — like way behind — the kids in a different school where there are a few teachers who happen to be really good at using ChatGPT in the classroom.
Later this year, if you’ve trained as an artist, essayist, or fiction writer, but you don’t have a parasocial following — a fandom who’s there for your work and that wants to connect with you specifically — then you should consider learning a trade because the coming round of LLMs will be better than you at whatever you’re good at.
Unexpected, AI-powered awesomeness isn’t all upside, because it’s happening too fast and breaking too many parts of our civilization that were invisibly dependent on the uneven distribution of awesomeness in the human population. So we have to figure out how to have a society where the awesomeness gradient is now a uniform distribution, and we’re already out of time.
Excellent summation. I wonder if the buzz is similar to when printing presses were giving books to the masses. No more gatekeepers — this really is the equity that the moralists have been talking about, right? I’m an artist and realize I have to sell myself now, not my art. My art’s the vehicle to me now. Until I can be simulated, of course.
So exciting to be living through this moment. I imagine it’s going to be much worse, and much better than we can predict. Human thinking be damned.
Your last paragraph is legitimately a great description of not only this moment but any tech disruptions throughout human history, and I think it'll be the description that'll be lodged in the ol' noggin now.
Gary North (and partly James B Jordan) had a theory of economic development that progressed through three phases (which they connected to the Trinity and various Biblical triads) which he applied to stages of revolution (e.g. education revolution: https://www.garynorth.com/public/18126.cfm). The stages were: (1) the oligarchic, (2) the democratic, and (3) the individualistic.
*Note: As North points out, "we lose some conceptual accuracy by transferring concepts from one discipline to another, but when no readily recognized terms exist in one discipline, imports sometimes help", so we need to remember we're dealing with analogies.*
First, we have the oligarchic stage, where the market is narrow, there's a huge disparity in the quality of goods, and the producers coalesce around guild-like institutions. Second, we have the democratic phase, where new tech not only decreases the cost of making goods but also decreases the cost of distribution, so the market expands greatly, quality has a more spread-out distribution, and the guilds lose out to those who can distribute cheap goods to larger swaths. It might seem that quality drops at this point, but that's only taking into account how the rich/oligarchs see things. For the poorer folks who didn't have access to anything in the first stage, their quality goes from zero to one. And third, we have the individualistic stage, where the mass production techniques (surprisingly enough) ignite a massive knowledge curve which the competitors happily ride. Diversification begins to ramp up, and the end result is somehow a synthesis of our former guild-like situation, but now offered to more people.
In Balaji's terminology, it's a centralization to decentralization to re-centralization (but if you squint your eyes you can also interpret it as a decentralization to centralization to re-decentralization). The key point is that it's always a bumpy ride, and what pops out at the end is always surprising.