Dear AI ethics people, I have questions. Lots and lots of questions, the main ones being these:
What qualifies you to decide for the rest of us what kinds of ethics AI should have? Do you come from a particular faith or philosophical tradition that you draw your ethical framework from? If so, are you recognized by others in that tradition as a voice to be listened to on these matters? Why should I care what you think about anything ethical? Convince me.
Do you know the difference between ethics and morals? Because a lot of what you’re talking about — individual actions and attitudes, internal mental states and unconscious habits of mind, matters of basic character — sounds like morals to me. And as a Christian, I have my own opinions about morals (and about ethics). Indeed, I often think your morals are corrupt, at least by the standards of my own tradition. The ways you address others, the content of what you say, and some of the specific policies you advocate for all strike me as morally bankrupt from time to time.
Why is it ever the job of any engineer or researcher to think in a detailed, sophisticated way about ethics? I did an engineering degree, and neither I nor my peers had serious ethical training. In my career, I’ve known many programmers, engineers, mathematicians, and scientists of different types, and passing few of them are sophisticated philosophers or spiritual authorities. Why do you think ethics belongs on the plate of these professions?
Why is it ever the job of a private company to police the morals of a population? Who gives a company the power to separate truth from lies, or healing words from harmful ones? Why is the proper moral and cultural formation of a democratic citizenry any private enterprise’s responsibility?
Why do you complain that engineers think they can fix the world with code, and also enjoin those same engineers to try to fix the world with their code? Are they supposed to step back and not presume to “fix” certain larger social problems with code, or are they supposed to tweak their code so that it fixes certain larger social problems? Make up your mind.
I have more questions, but I’ll stop here for now.
The latter questions in that list were raised for me by this tweet:
Yann LeCun (@ylecun): “A blog post on algorithmic fairness work at Facebook, and a research paper on the topic. Paper: ‘Fairness On The Ground’ https://t.co/InVLlshw7Q Blog post: ‘What AI fairness in practice looks like at Facebook’ https://t.co/sUmXKyAYu9”
I would love for Cathy O’Neil, whose work I admire and whose book I recommend, to explain to me in an interview why she thinks Yann LeCun was wrong to object that big-picture issues of ethics and fairness are above his pay grade. Because I get no indication from his CV or his Twitter feed that he’s an ethicist of any note or a moral thinker of any particular depth.
LeCun seems to me to be a brilliant, pioneering AI researcher, and someone who I’d look to for answers about AI. But I would not look to him for answers about what kind of world we should all live in, or what “fairness” looks like in any given context, or what ethics should govern the development and use of which technologies. Why should I?
I understand that many in AI ethics believe they’ve made the case that the responsibility for fixing the crappy state of our present world falls at least partly on the shoulders of AI researchers, but I think they have not made it. Indeed, I think they haven’t really tried.
What they’ve done quite effectively is point out that AI has the potential to make things considerably worse in certain areas. But identifying a problem is not the same as making the case that STEM professionals are morally obligated to fix it.
Why is this not much more the responsibility of the MBAs? The product managers? The VPs? The CEO? What is the case for convincing the engineers to tackle these big-picture questions?
Is Facebook really a force for polarization?
I said in yesterday’s post that I’d be willing to concede the existence of a link between political polarization and user engagement on social media. This fits with my own personal experience of these platforms, and with what I know of them. It certainly fits with the media reporting on these issues, which is quite uniform in associating the rise in social media use with all kinds of ills.
LeCun, though, responded by trying to make a case that there is no such link and that claims of such a link are not supported by data.
In a tweet that’s buried in the Twitter threading, LeCun cites this paper to make the case that political polarization and social media use are not, in fact, linked.
He also tries to make the case for AI as a moderating force within Facebook — a way to fight polarization:
In a nutshell, LeCun is arguing that AI can greatly increase the effectiveness of human moderators, by acting as a kind of force multiplier via machine translation and automated early detection of malicious content.
Like the link between Facebook and polarization that LeCun denies, this positive internal use of AI at Facebook seems plausible to me. I can believe Facebook does use it this way, and that it’s a net social good in that role.
I wish there were more in-depth reporting on the social upsides of AI, specifically the kind of thing LeCun is outlining here. But there isn’t. The AI press is dominated by crusaders who share one agenda, and that agenda is best summed up in the chant heard on the streets this past summer: “If we don’t get it, burn it down.”
I actually think this agenda is why there’s relatively little talk of government regulation in the AI ethics conversation and so much emphasis on the identity characteristics and tribal allegiances of the employees in specific roles at a handful of companies. You can’t capture Big AI via the government, because the field is too obscure and moves too fast for lawmakers to follow; so you try to “get it” by capturing the companies and the guild. And if that effort fails, there’s always the Molotov.
Tribe trumps all
Facebook released a big paper on fairness and bias in its AI systems yesterday, which sparked the Twitter discussions I’ve highlighted above.
The content of the paper is mildly interesting, and I may have more to say about it later. But what I’m really watching for is the reception of it in the AI ethics community.
This paper is very much an “AI ethics” paper of the kind you’d find coming out of the ACM FAccT group, but it has one flaw that may prove fatal: it came from Facebook, and not from the FAccT clique.
And there is a clique. I’ve heard that there were a number of irregularities around the submissions process this year, and that it’s essentially a closed publishing venue for the circle around the Google ethical AI group.
I’m looking to report more on that, so if you have details to share, please get in touch.