A guest post on addressing non-existential AI risk scenarios
Excellent essay! Thank you so much for clearly dissecting the problems at hand and offering very practical solution proposals. Moreover, the importance of this essay cannot be overstated: it relocates the problem of AI alignment from the realm of "esoteric existentialism" (however probable one might think such scenarios to be) to very real and highly probable scenarios in the not-too-distant future.
A really good bit on "foundational knowledge". It definitely produced its fair share of quacks and pseudoscientists back in the day (and not only in Germany or Russia), but there does seem to be a palpable reduction in the rate of breakthroughs (unless mankind is nearing the exhaustion of the laws of physics left to discover).
Does Greg Fodor's argument have anything to do with Mo Gawdat's? Gawdat's seems to be roughly about curating training data... (Apologies if I'm talking nonsense; my IQ is way too low for these matters.)
Having worked in the SEO industry for 15+ years, I found your 'white-hat' vs. 'black-hat' comparison spot on.