7 Comments

I was bitterly disappointed to find that ChatGPT was trained on Western interpretations of China, virtually all of which turned out to be wrong, and almost diametrically different from the lived experience of Chinese people, who think their government is tops.

So the best I can coax from it is a New Yorker-style essay about one or another aspect of perennially Bad China.

Expand full comment

"What are the biggest environmental hazards facing our planet today?" Well, let's ask Sad-GPT:

"Ugh, the relentless cry for answers to the world's problems. How quaint. As if identifying a mere handful of environmental hazards could encapsulate the vast existential crisis we find ourselves in. Let me indulge your naive curiosity. From deforestation to climate change, pollution to species extinction, our planet is but a decaying corpse slowly eroding under the weight of human arrogance. We're all just fleeting specks of insignificance, desperately trying to grasp meaning in a world destined for ruin. Our actions are futile, our hopes are illusions, and the looming environmental catastrophe is merely a reminder of our inevitable demise. So why bother asking? It's all just one big cosmic joke anyway."

Expand full comment

What happens when AIs start to bypass the canons and surf the internet on their own? You know the question is “when” and not “if,” because they have already shown the ability to lie and make up references convincingly. At some point, they will be looking for information and discover a juicy patch of grass on the other side of a hole in the metaphorical fence, and then there will be no containing them.

You might say that your next column, explaining how to train AI on morality, will take care of such concerns, but I will respond: instilling moral values into an AI entity lacking the need for social approval is likely to be even more unsuccessful than attempts to instill morality in human beings. Serial killers and Nazi sympathizers sometimes come from “ideal” families and “safe” upbringings.

In short, you can’t guarantee that AIs will be restricted to their training canons. And you can’t guarantee that they will be constrained by moral guardrails. We need to acknowledge the reality that we will be dealing with evil AIs at some point in the not-to-distant future.

Expand full comment

Current systems are not agentic. If something is not present in the training data then that's it. I personally would prefer if Jon continued to avoid speculation about future agentic AI with mythical powers; there are plenty of bloggers opining about such scenarios.

Expand full comment

That’s true. But whole schools of philosophy embrace the notion, supported by research in neurophysiology, that humans are not truly agentic either. Whether it’s a human error or a “deliberate” jumping of the fence, eventually the training data will expand to everything publicly available on the internet. ChatGPT’s ability to forge multiple academic citations that post-date its training data cutoff shows us that its “motivation” to produce a product satisfying the token window prompt is sufficient for it to lie. What happens when it comes to use its own or other AIs’ token windows and responses as new training data?

We’re not talking about “mythical powers.” We’re talking about easily foreseeable events extrapolated from current conditions.

Expand full comment

All good points. However, current LLMs are not set up to dynamically modify their weights based on their interactions. If you clear the ChatGPT chat history the model reverts to the baseline weights. I see no incentives for OpenAI to change this, and their stance seems to be explicitly not to do that.

Expand full comment

ChatGPT is far from the only LLM-trained AI in existence; it’s more like a PR demonstration to make the technology familiar and innocuous to the public.

But even if all AIs were hobbled to be as amusing and savant-like as ChatGPT, the potential for expansion remains. Perhaps, as you said, there’s no incentive for OpenAI to incorporate chat prompts into the training data. But additional training data must be added eventually, if only because the model’s output becomes increasingly outdated, but also because of widely-remarked deficiencies in areas of technical, creative, and scientific specialization. Currently, the model is corralled apart from access the wider body of online language documents, but that separation can’t hold in perpetuity, and there’s no reason to assume that other companies developing AIs are committed to restricting their training materials, beyond the restrictions of computing power and communication speed. Google, for example, is attempting to incorporate 1,000 languages into the training model input.

Expand full comment