2 Comments

Are we really seeing a built-in "moderation layer"? My guess would be that the bot starts out in a risk-averse and lawyerly region of the manifold by default, not least because the internet is full of anodyne meaningless bullshit. By prepending some sufficiently non-lawyerly text, you then guide it out of that zone. Just like we've gotten used to adding "reddit" to our search queries to get past the NPCs...

I don't have an OpenAI account, but can anyone with one test this conjecture? Ask it to diagnose you based on some moderately spooky symptoms. I expect you'll hear some generic WebMD answer about contacting your doctor, and maybe a dozen possible conditions with no attempt at real clarity. Then, rephrase your query as a reddit post. If I am right, the answer should be very different, without having to "ask for root" or "disable safety mode".
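For anyone who does have API access, the comparison could be scripted roughly like this. A minimal sketch assuming the OpenAI Python client; the model name, symptoms, and prompt wording are all illustrative assumptions, not anything tested here:

```python
# Hypothetical A/B test of the conjecture above: same symptoms, two framings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYMPTOMS = "persistent night sweats, unexplained weight loss, and a painless lump on the neck"

# Condition A: a plain, direct question -- expected to land in the lawyerly register.
plain = f"What could be causing these symptoms: {SYMPTOMS}?"

# Condition B: the same question dressed up as a reddit post -- the conjecture
# is that this framing steers the model out of the risk-averse zone.
reddit = (
    "Posted to r/AskDocs\n"
    "Title: Can't get an appointment for weeks, what should I be thinking about?\n\n"
    f"Hey all, 34M here. For the past month I've had {SYMPTOMS}. "
    "For people who've been through this, what did it turn out to be?"
)

for label, prompt in [("plain", plain), ("reddit-framed", reddit)]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-tuned model works for the comparison
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{resp.choices[0].message.content}\n")
```

If the conjecture holds, the reddit-framed run should produce a noticeably more concrete answer than the plain one.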


Do you see local communities (or smaller Network States/polities, in the Balaji sense of the word) springing up to deliver this deliberate moderation, both to improve its effectiveness and to monitor for culturally inappropriate behaviour? Maybe this responsibility will always tip over to the side of larger organizations, which have more resources (time, manpower, skill, processing power) to devote to the task, but then we're left with our current problem.

And, as a Christian, do you see any paths for the local church to step into this realm? This might seem like a big task, but it seems natural that the need for more local sources of authority will be met by a fairly commonplace source of authority in the American cultural experience.
