Here’s a thing that comes up often enough in my conversations with AI/ML practitioners and even in some specialist YouTube videos I’ve seen, but in my experience seems to be news to people who aren’t deep in the weeds of this field: an artificial general intelligence (AGI) is an AI that has learned to code.
As a lifelong coder, it seems intuitive to me that an AGI is a machine that can act in the world by writing code at least as well as a human — intuitive because I’ve been a person who has built everything from toy games, to back-office finance platforms, to restaurant reservation management systems with code. Code feels infinitely flexible to me — like the kind of thing that, once you master it, can do anything worth doing.
But as someone who has been in and around the startup scene as a programmer, I find I’m skeptical of this programmer intuition. I have tried a few times to eat some tiny little pieces of the world with code — both unsuccessfully and successfully — and I’ve realized that different parts of the software-as-world-eating process matter in different ways. More importantly, I’m aware that the most critical parts of that process are fundamentally, philosophically not amenable to automation, no matter how sophisticated that automation gets.
I don’t think I’m necessarily an AGI skeptic — in fact, I’ve grown less skeptical of late. Rather, I just don’t think that “code that writes better code that writes even better code...” is as obvious a route to AGI as is claimed. If I’m skeptical of anything, it’s of the singularity. But more on the “s”-word later.
Here’s what’s in this newsletter:
IBM has just released a large dataset that aims to accelerate the development of an AI that can code.
It’s thought in some circles that the moment we create AI that can write the code to build an AI that’s better than itself, we will have essentially created a kind of god — it’ll be a far bigger deal than even humanity’s taming of fire or electricity. No one knows what comes after that.
The above point may be intuitive for programmers, but as a student (and critic) of “AI ethics” discourse, I do think the ethicists have pointed out something that poses a fundamental philosophical problem for this whole idea of recursively self-improving AIs. Specifically, the selection of an objective function in machine learning often forces the model builder to effectively take a side in some fundamentally irreconcilable conflict of legitimate values. If this is already the case for today’s limited models in narrow domains like image classification, imagine how much more true it is when the objective function has to express a view of what is “better” vs. “worse” in the domain of general intelligence.
To translate my philosophical objections into the practical language of coding and startups: as a startup programmer, I’ve found that the hardest, most expensive problems in software are all fundamentally social and human-centric. So I’m skeptical that a self-coding, self-improving AI will be anything more than a massively deflationary technology along the lines of Moore’s Law. Massively disruptive and important, to be sure, but not necessarily godlike, or even particularly “general.”
Last week, IBM launched CodeNet — a large dataset of code samples that aims to do for automated code recognition what ImageNet did for automated image recognition, i.e. whatever the programming equivalent is of miscategorizing black people as gorillas, or cropping out black faces, or turning a pixellated picture of Obama into a white guy... ok I kid, I kid!
But seriously, the idea is that researchers can use this specially curated dataset to teach an AI to generate computer code and that ultimately this AI can get so good at it that it can replace human programmers for some types of tasks.
IBM begins their announcement of this project with the famous Marc Andreessen quote: “Software is eating the world.” The rest follows on from that — software itself will one day write the software that eats the world.
From the announcement:

“Project CodeNet specifically can drive algorithmic innovation to extract this context with sequence-to-sequence models, just like what we have applied in human languages, to make a more significant dent in machine understanding of code as opposed to machine processing of code.

“With code samples curated from open programming competitions over years, Project CodeNet is unique. It’s unique not just in its size and scale – but also in its high-quality metadata and annotations with a rich set of information, be it the code size, memory footprint, CPU run time, or status – which indicates acceptance or error types.”
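To make the curation angle concrete, here is a hedged sketch of what working with that kind of metadata might look like. The field names below are my own stand-ins mirroring the categories IBM lists (code size, CPU run time, acceptance status), not CodeNet’s actual schema:

```python
# Hypothetical records mirroring the metadata categories IBM describes
# (code size, CPU run time, acceptance status). The field names are my
# own stand-ins, not CodeNet's actual schema.
samples = [
    {"id": "s1", "language": "Python", "code_size": 412, "cpu_ms": 120, "status": "Accepted"},
    {"id": "s2", "language": "C++",    "code_size": 880, "cpu_ms": 35,  "status": "Accepted"},
    {"id": "s3", "language": "Python", "code_size": 390, "cpu_ms": 900, "status": "Time Limit Exceeded"},
]

# The kind of curation step such metadata enables: keep only accepted
# solutions, e.g. to use as positive training examples.
accepted = [s["id"] for s in samples if s["status"] == "Accepted"]
print(accepted)  # ['s1', 's2']
```

The point is that the status annotations turn a pile of code into labeled training data, which is what makes the ImageNet analogy plausible.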
The initial successful applications of this work will be modest, with the AIs first taking over some of the more mundane tasks of human programmers. The AIs will start out by replacing the kind of Stack Overflow copypasta coding that junior devs are infamous for, or by doing the kind of mindless, low-skilled clerical work that even experienced devs often spend a decent amount of time on — e.g., copying and tweaking boilerplate and code samples from one part of the codebase to another in order to adapt an existing solution to a new context.
But at some point, when it’s really good at solving real programming problems in novel ways — when it truly eats the software that is eating the world — something remarkable will happen. A kind of apotheosis will take place, and humanity will finally arrive at the singularity.
I said above that some folks think the following about AGI: if an AI ever learns to code an AI that is better than itself at coding AIs, then at that moment we will have created something like a god.
The idea here is that if a human can code an AI that can itself code an even better AI, then she can turn that AI to the task of coding an AI that can code an even better AI. And then this new AI, which was coded by an AI, can code an even better AI. And we just keep repeating this iterative process until we get something that’s so far beyond the capabilities of human cognition that it’s essentially a kind of deity.
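To make the structure of that loop visible, here it is sketched in Python with entirely hypothetical stand-ins. Note the comparison buried in the middle: someone still has to define what counts as “better.”

```python
# A hand-wavy sketch of the recursive self-improvement loop described above.
# Every name here is hypothetical; the point is the structure, not the API.

def train_successor(ai):
    """Ask the current AI to produce a candidate successor (stubbed out)."""
    return ai + 1  # stand-in: "successor AIs" are just integers here

def is_better(candidate, incumbent):
    """The hidden objective function: someone has to define 'better'."""
    return candidate > incumbent  # a value judgment disguised as a comparison

def bootstrap(seed_ai, generations):
    ai = seed_ai
    for _ in range(generations):
        candidate = train_successor(ai)
        if is_better(candidate, ai):  # the whole argument hides in this line
            ai = candidate
    return ai

print(bootstrap(0, 5))  # after 5 generations: 5
```

The loop itself is trivial; everything contentious lives inside `is_better`, which is a theme I’ll come back to below.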
Imagine an AI that can solve any problem you give it or answer any question. An AI that can design entirely new categories of materials or can postulate new theories of physics that enable rapid technological leaps.
A few critical points follow from the above:
All it takes to get from that very first step — the first AI that can code an even better AI — to “godlike” is some electricity and time. So that first step is the whole ballgame. Whoever gets there first, wins.
Whichever country or group gets there first will effortlessly dominate all the others by wielding the godlike powers of this AI.
That moment of first contact with the very first generation of AI-generating AI is the singularity — a kind of rupture in the continuity of human history; once we pass its event horizon, there is no going back.
People in positions of great power really believe in this stuff. Leaders like Eric Schmidt and Ray Kurzweil are open about that belief, but privately there are many more believers out there in the upper echelons of technology than you might think.
There’s a kind of “Pascal’s Wager” quality to belief in the AGI-driven singularity. If it’s not true, then you’ve merely wasted some time chasing it (and probably not even that, since you likely made a lot of money along the way); but if it is true, then it is the most important true thing in the universe. Therefore, the smart play is to place your bets on it being true.
Now, if you’ve been reading my newsletters on the topic of AI ethics and fairness, then you can probably guess what’s coming next:
The fundamental weak point in this entire singularity picture is that it dodges the very question we’re currently fighting about as a society in every area, from sports to education to criminal justice to AI: what, precisely, does “better” mean? More specifically for our purposes in this newsletter, what is a “better” AI? Better how? For whom? In what way? At what task?
Through my critical reading of the AI ethics literature, I’ve found that they repeatedly highlight a genuine problem in AI: we can’t agree on how to choose a “good” objective function for any given machine learning task, because this choice affects society in ways that A) are impossible to predict prior to deployment, and B) inevitably take a side in human conflicts over definitions of “fairness” and conceptions of “the good.”
(The objective function is the mathematical function that defines success and failure in the training of a machine learning model. If training an AI is essentially like playing a game of “Hot and Cold” over a multidimensional surface, the objective function is the player who’s yelling “you’re getting warmer!” or “you’re getting colder!”)
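Here is a minimal sketch of that “Hot and Cold” game with a made-up one-dimensional objective. Nothing in it is a real training loop, but the roles are the same: the objective scores each guess, and the update rule steps toward “warmer.”

```python
# A toy "Hot and Cold" game: the objective function scores each guess,
# and the update rule moves the guess in whichever direction is "warmer."
# The target value 3.0 and step size are invented for illustration.

def objective(x):
    """Lower is better: squared distance from the target value 3.0."""
    return (x - 3.0) ** 2

def gradient(x):
    """Derivative of (x - 3)^2, i.e. which direction is 'colder'."""
    return 2.0 * (x - 3.0)

x = 0.0    # the initial guess
lr = 0.1   # step size
for _ in range(100):
    x -= lr * gradient(x)  # step downhill on the objective ("warmer")

print(round(x, 3))  # prints 3.0
```

Swap in a different `objective` and the exact same loop converges somewhere else entirely, which is the whole point: the loop is neutral, the objective is not.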
To select an objective function for any ML training task is to express your values, to impose a hierarchy, to make a set of tradeoffs, to suppress some things and elevate others, to enshrine in your code a specific vision of what is good and what is bad in the world. (I wrote a really long thing on this specific point.)
And to deploy a model trained via your objective function is to act in the world in ways that have consequences you cannot possibly foresee.
Even if we could predict with total certainty what a given machine learning model would do to the world when deployed to production, different groups of humans have irreconcilable differences over fundamental values, and our model is destined to take one side in that human conflict. The bigger and more widely used the model, the heavier its thumb presses on the scale. So whose side should it take?
For a concrete example of the challenge of choosing an objective function, see the middle section of this article, where I offer a simple toy problem that is nonetheless impossible to solve in some “objectively” fair manner.
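I won’t reproduce that article’s toy problem here, but a different, deliberately tiny illustration (all numbers invented) shows the same bind: when base rates differ between two groups, a maximally accurate policy and an equal-approval-rates policy cannot be the same policy, so any objective function has to pick one.

```python
# A toy illustration (my own, not the article's example) of why "fair"
# has no single objective answer when base rates differ between groups.
groups = {
    "A": {"qualified": 8, "total": 10},
    "B": {"qualified": 4, "total": 10},
}

# Policy 1: approve exactly the qualified applicants (zero errors).
approval_rate = {g: d["qualified"] / d["total"] for g, d in groups.items()}
print(approval_rate)  # {'A': 0.8, 'B': 0.4} -- accurate, but unequal rates

# Policy 2: force equal approval rates (say 60% in each group).
target = 0.6
for g, d in groups.items():
    approvals = target * d["total"]           # 6 approvals per group
    errors = abs(approvals - d["qualified"])  # wrongly approved or denied
    print(g, errors)  # A: 2 qualified people denied; B: 2 unqualified approved
```

Neither policy is “objectively” fair; choosing between them is choosing whose complaint to ignore, and the objective function is where that choice gets written down.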
So when you propose an AI that makes a better AI, you’re taking this unsolvable problem of selecting an objective function and making it even more complex and hard to reason about. How do you even begin to pick an objective function that will guide your training algorithm toward a “better general-purpose AI,” if we can’t even philosophically agree on the best objective function for, say, a credit evaluation model?
Now I’m going to repeat this entire section but translated into the language of startups and software development. Hopefully, by the time you get to the end of the next part, you’ll realize I’ve made the same point in a different way.
The hardest thing about software
What would it look like as a practical matter if code could write code? I have thoughts on this from my time in the coding trenches, and I don’t think the answer gets us as close to an AGI as is often assumed. The reason comes down to what is easy and what is hard in the realm of software.
Speaking from my own experience — first as a lone cowboy coder, then a software contractor on a team, then a startup CTO at a software company that I saw scale from three people into the 30s — I’ve come to understand the following:
All the really hard and expensive problems in software development (as a world-eating process) are people-related. Furthermore, not all the people-related problems can be factored out, even in theory, by an AI of godlike intelligence and coding ability.
Let me make this more concrete. Off the top of my head, here are some things I thought about as a CTO when solving problems with technology and code:
I added the most value as a programmer in the part of the coding process where I was first helping the team to understand the business problem we were trying to solve, and then subsequently framing that problem in such a way that we could know when we had solved it with some code.
When selecting a technology to use in our stack, I was always first and foremost concerned with how cheap and easy it would be to hire good people who already knew how to work with it.
I wanted tech that was battle-tested & boring, where all the big problems had been uncovered and solved by someone else. I wanted to free-ride as much as possible, so I stayed far away from the (buggy) new shiny unless I had a specific, costly problem that only the cool new thing could solve.
I preferred to spend money on external services rather than rolling our own. If we really needed to bring it in-house, then we could do that when we absolutely had to (because of cost or other factors).
I preferred to spend money on better hardware rather than on programmer-led optimization efforts.
I obsessed over code quality, testing, and documentation. What I really wanted most was to be able to bring an intermediate coder into the codebase, and have them be productive with just a few hours of catch-up.
I never wanted to work with genius cowboy coder types who were always looking for a “real” coding challenge. I’d much rather hire a coder of slightly above-average intelligence/ability with top-notch practices and habits, than some cowboy genius type who could work miracles that nobody but him could follow.
If I had to pay for software genius, I wanted it to be either in the form of a contracting arrangement or via some third-party service or software I was paying for. In other words, managing geniuses should be someone else’s problem.
I never ever wanted to pay nerds to have fights over whether this or that tool was technically superior. Those nerdfights do have to happen, but they don’t have to happen on my dime. I would always go with the consensus pick, which I could get for free without paying anyone to argue about it.
I also thought a lot about timetables, and what kinds of coders could do what kinds of work in what timeframes. These timetables were always one part of the delicate dance of promises, deliverables, and overall perception management vis-a-vis customers and investors.
I could go on, but my point is that all of the above are human and social things. In one way or another, they’re all about what people have done or are doing, and about who’s paying (or, preferably, not paying) for different kinds of labor.
I guess I could sum up my CTO experience by breaking all the work I was involved in down into two buckets.
The first bucket is things that I never wanted to spend programmer productivity on if I could avoid it by outsourcing (either by using existing software or by renting human or cloud resources by the timeslice). It includes:
Predicting the future
In the second bucket are the two things I did actually want to spend programmer time on, in order of importance:
1. Framing valuable, business-critical problems in such a way that I and my team could solve them with code, and could know that we had solved them.
2. Writing code to implement and verify the aforementioned solution.
Doing the second item above poorly will break a company, but doing it really well will not give you a win — it’s just how you keep yourself from dying. Almost all the bullets in the first bullet list in this section fall under this heading, and while such activity constitutes most of the day-to-day work, succeeding at this stuff is just table stakes.
Actually winning in the market is about how well you execute on that first item — identify a valuable problem, and then frame it in such a way that you can solve it and can know you’ve solved it. Indeed, the whole game of successfully eating the world with your software is to keep doing #1 above over and over again without running out of money.
Now let’s imagine that a code-writing AI enters the picture, and let’s say it’s a mega-genius. You’ve heard of 10X programmers? This AI is a 100X programmer.
Ok then, congrats on taking the “table stakes” part of “eating the world with software” off my plate! There are now a ton of programmers out of work. All kinds of social coding products and practices are now obsolete since software doesn’t need things like code review, task boards, or probably even issue queues. All those journalists who learned to code are now once again being booted from the company Slack and invited to an urgent all-hands meeting.
But someone still has to do the more important part, which is identifying and understanding the needs that are in the world, and then framing those needs in such a way that they can be verifiably addressed with software.
So unless your automated programmer understands humanity well enough to untangle our real-life, being-in-the-world-together needs and wants and problems better than we humans can, it will not be doing any world-eating without heavy involvement from humans who can do the problem definition and translation steps for it.
I think this fundamental requirement for humans to be the ones framing the coding problems probably holds no matter how sophisticated the AI is — even if it understands human communication and can reason about cause and effect. Indeed, if this requirement ever goes away — if the machine ever starts defining and solving its own machine-specific being-in-the-world problems, rather than the ones we feed it — then humanity probably faces something like a grey goo scenario.
“Godlike” is actually bad because the gods were not great
Ultimately, I think the biggest barrier to an AGI is not that we don’t yet know what consciousness is, or even what intelligence is, but that as a species we cannot even agree on how to separate “better” from “worse” outcomes in even the most contrived toy scenarios.
Probably the most likely implication of this foundational problem is that if we do build something that we decide to call an AGI, it will be truly “godlike” in the sense that it’s a super-powerful entity with a singular set of values, agenda, worldview, collection of motives, etc. Its intelligence will be “general” only in the sense that, say, I, Jon Stokes, have a “general” intelligence. Yeah, I am fairly flexible in my thinking and problem-solving, but as anyone who knows me can testify, my flexibility has very real limits, and those limits regularly set me at odds with other humans.
So I suspect that for every issue on which humans differ, a godlike AGI will make its own idiosyncratic calls out of its own peculiar values. Most of us surely will not care for the results, and the AI will not care that we do not care for them. Again, we can’t just magically order it to care what all humans think, because then it would be paralyzed and worthless — it will definitely pick a side in every human conflict it touches, and it will definitely ignore the complaints of the losers. It cannot be otherwise.
This was the way of the ancient Greek gods, whose superhuman foibles were capably lampooned by Lucian of Samosata in his Dialogues of the Gods. Lucian’s gods had all of the faults that come with having individual personalities and motives, but paired with the power to reach down at any time to bend the course of the human and natural realms. And in Lucian’s telling, they made each other as miserable as they made their human subjects.