I’ve been offline until today, so this week’s output is much lighter than I’d like. I have some really interesting new research to highlight for you once I get through it myself. But in the meantime, here are two quick takes that I’ll develop further in future posts.
Facebook’s AI Whack-A-Mole games
Karen Hao at MIT Technology Review has a new, in-depth piece out on Facebook's internal AI ethics efforts. The piece is deeply reported and very much worth reading.
I should clarify before going any further, though: to say I am not a fan of Hao’s coverage is an understatement. Her work is one-sided, and she’s openly doing anti-tech and anti-AI activism under the guise of “journalism.”
With that out of the way, you should read the piece. There’s a lot there worth thinking about.
To zero in on just one thing, I find it impossible to take issue with the following:
Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
The above is all of Big Tech and AI in a single paragraph. To boil it down even further:
Large public companies are driven by a growth imperative, and all else is subordinate to that.
The internal AI watchdog and ethics groups are primarily there to head off external regulation. (All corporate self-regulatory efforts have had this purpose since the beginning of capitalism. If you thought it could possibly be otherwise in AI, I dunno what to tell you.)
Obvious biases against specific groups — black people, conservatives — have a set of qualities that makes them attractive targets for companies’ internal self-regulatory efforts, and indeed the way the AI ethics folks talk about the problem feeds into that dynamic.
On this last point, read my first post for much more on bias in AI. I won’t recap that material here, but to extend it, I think the dynamic at work is probably something like this:
Take the clear-cut examples of blatant racial bias in AI we’ve seen, especially in image processing and synthesis. I suspect these examples are to “AI harms” what mass shootings are to “gun violence” — the stuff of viral threads, alarming headlines, and committees on how we must Do Something. The truly insidious bias problems lurk in areas like medicine, finance, and law enforcement, but those don’t go as viral as something like the “white Obama” image.
Activists push these obvious algorithmic bias cases as a way to draw attention to the issue, even though they aren’t anything like the bulk of the problem. And for their part, companies like tackling these high-profile examples because it’s pretty easy to offer a one-off fix for a specific model, one that lets the company look like it really did something while leaving the larger structural problems in place.
If you go a bit further down in the piece, you’ll see this bit that dovetails with the above point:
One such project heavily pushed by company leaders involved predicting whether a user might be at risk for something several people had already done: livestreaming their own suicide on Facebook Live. The task involved building a model to analyze the comments that other users were posting on a video after it had gone live, and bringing at-risk users to the attention of trained Facebook community reviewers who could call local emergency responders to perform a wellness check. It didn’t require any changes to content-ranking models, had negligible impact on engagement, and effectively fended off negative press. It was also nearly impossible, says the researcher: “It’s more of a PR stunt. The efficacy of trying to determine if somebody is going to kill themselves in the next 30 seconds, based on the first 10 seconds of video analysis—you’re not going to be very effective.”
It’s the same combination: a dramatic, high-profile problem that’s (thankfully) quite rare, and an effort to publicly be seen as working on “the solve” for it.
Of course, the big bias problem Zuck was worried about at Facebook wasn’t racial bias: it was political bias. Specifically, the charges of anti-conservative bias coming from Trump and the GOP.
Nonetheless, the problem of anti-conservative bias has all the features identified above: it’s relatively easy to brute-force a fix, usually via a quota of some type, that addresses whatever immediate problem has been identified while leaving in place the larger structural issues that created it.
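To make “brute-force fix via a quota” concrete, here is a minimal, entirely hypothetical sketch of the kind of post-hoc patch I have in mind (the function, field names, and numbers are mine, not any platform’s actual code): re-rank whatever the model already produced so a given category is guaranteed a minimum share of slots, while the model itself, and the incentives behind it, go untouched.

```python
from typing import Dict, List

def quota_rerank(ranked: List[Dict], tag: str, min_share: float) -> List[Dict]:
    """Post-hoc 'quota' patch: guarantee that at least min_share of the feed
    carries `tag`, without retraining or even inspecting the ranking model
    that produced `ranked`. Purely illustrative."""
    quota = int(len(ranked) * min_share)
    tagged = [item for item in ranked if tag in item["tags"]]
    # Pull the highest-ranked tagged items up to fill the quota...
    promoted = tagged[:quota]
    # ...and keep everything else in its original, model-assigned order.
    remainder = [item for item in ranked if item not in promoted]
    return promoted + remainder

# Toy feed, already ordered by some engagement model we never touch.
feed = [
    {"id": "a", "tags": ["sports"]},
    {"id": "b", "tags": ["news"]},
    {"id": "c", "tags": ["politics", "conservative"]},
    {"id": "d", "tags": ["politics", "conservative"]},
]
print([item["id"] for item in quota_rerank(feed, "conservative", 0.5)])
# -> ['c', 'd', 'a', 'b']: the immediate complaint is "fixed"; the system isn't.
```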
The main conclusion I draw from all of this is that big companies actually like playing Whack-A-Mole with some types of problems. If you can stand up an internal watchdog group to play an endless game of Whack-A-Mole on a string of tractable-seeming problems, then you have something to testify about when you’re periodically called before the authorities.
I have mixed feelings about all this because I do not believe it is Facebook’s job to “fix” the larger social problems that activists like Hao and the AI ethicists want solved. I don’t want private companies like Facebook trying to remake society along activist lines, primarily because I disagree with the activists and do not share their goals. I’d rather Facebook continue to play Whack-A-Mole than see it taken over by the kinds of people and ideas Hao aggressively promotes in her articles and on her feed.
And yet, I’m totally willing to concede the reality of the link between engagement and political polarization that an internally suppressed Facebook report supposedly found, and I think that, left to their own devices, social media platforms may “engagement” our civilization to death.
Like the AI researcher at the center of Hao’s article, I have to admit that I don’t know what the answer is.
For AI, bigger is better
Rice’s Moshe Vardi has published an editorial in the latest Communications of the ACM arguing for Big Tech to be cut down to size by antitrust actions. He writes:
The issue has always been “large,” not “tech,” but the connection between large size and tech stands out. In 1901, President Roosevelt asked the U.S. Congress to curb the power of trusts because of their size: “Great corporations exist only because they are created and safeguarded by our institutions,” he said, adding that it is “our right and our duty to see that they work in harmony with these institutions.” Anti-trust law enforcement has served us well over the past 130 years. With market capitalization of the top five Big Tech corporations now at over USD 7T, the people, working through governments, are carrying on this anti-trust law legacy. It should be welcomed!
I am, and have been, an ardent proponent of using antitrust tools to break up Big Tech, but lately one wrinkle gives me a bit of pause as I try to think it through: when it comes to AI, size matters in ways that it just doesn’t for many other categories of technology.
The current era of consumer voice recognition, image classification, AlphaGo, and synthesis technologies like DeepFakes and GPT-3 kicked off fairly recently. The AI community struggled through an AI winter that finally gave way, around 2010, to the present AI spring.
What was the breakthrough that led to the rush of innovation? I’ll have a lot more to say about that in a future post, but I can sum it up for now in one word: size.
Most of the math and basic architecture behind machine learning hasn’t changed fundamentally since I took classes in neural networks and computer vision in undergrad in 1998. We’re actually not much closer to anything like a real copy of any part of the human mind than we were when Netscape was the hot new browser.
Rather, what we’ve figured out since then is that if you take a set of neural net architectures that are essentially toys and throw enough advanced computer hardware and terabytes of training data at them, they magically turn into something that feels fundamentally different.
Put another way, machine learning is one of the only places where sheer quantity translates directly into a dramatic increase in quality past some threshold. The bigger the model — measured in parameters, in the size of the training dataset, and in the hardware and energy it takes to train and run — the better the results.
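As a rough illustration of what “bigger” means here, consider a quick back-of-envelope comparison. The layer counts and widths below come from the published GPT-2 and GPT-3 model descriptions; the 12 × layers × width² rule of thumb for transformer parameter counts is a standard approximation that ignores the embedding tables.

```python
# Back-of-envelope: the same basic transformer architecture, just scaled up.
# Approximation: params ≈ 12 * n_layers * d_model^2 (ignores embedding tables).

def approx_params(n_layers: int, d_model: int) -> float:
    return 12 * n_layers * d_model ** 2

configs = {
    "GPT-2 (2019)": (48, 1600),    # reported ~1.5B parameters
    "GPT-3 (2020)": (96, 12288),   # reported ~175B parameters
}

for name, (layers, width) in configs.items():
    print(f"{name}: ~{approx_params(layers, width) / 1e9:.1f}B parameters")

# GPT-2 (2019): ~1.5B parameters
# GPT-3 (2020): ~173.9B parameters
```

That is the whole trick: no new math, just roughly two orders of magnitude more of everything.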
This bigger-is-better reality has some important implications for society, especially in the area of antitrust. The size of the models needed to produce the kind of spectacular output we’re seeing from GPT-3 means that bigger companies with more hardware and deeper pockets will own this rapidly developing space. It cost an estimated $4.6 million just to train GPT-3 — not to hire the researchers to build it, or to buy the hardware to run it, but just to train it.
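For a sense of where that $4.6 million figure comes from, here is a hedged back-of-envelope in the same spirit. The 6 × parameters × tokens rule of thumb for training FLOPs is standard; the sustained throughput and price-per-GPU-hour are assumptions I am plugging in for illustration, not OpenAI’s actual numbers.

```python
# Back-of-envelope reconstruction of the ~$4.6M training-cost estimate.
# Rule of thumb: training FLOPs ≈ 6 * parameters * training tokens.

params = 175e9            # GPT-3 parameters (published)
tokens = 300e9            # ~300B training tokens (published)
total_flops = 6 * params * tokens          # ≈ 3.15e23 FLOPs

sustained_flops = 28e12   # ASSUMED effective throughput per V100, FLOP/s
usd_per_gpu_hour = 1.50   # ASSUMED cloud rental price

gpu_hours = total_flops / sustained_flops / 3600
print(f"~{gpu_hours / 1e6:.1f}M GPU-hours, ~${gpu_hours * usd_per_gpu_hour / 1e6:.1f}M")
# -> ~3.1M GPU-hours, ~$4.7M: the same ballpark as the widely cited figure.
```

Note that the cost scales linearly with both assumptions, so even generous discounts only move the decimal point so far.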
With these kinds of up-front costs, AI currently has a kind of mainframe-era feel to it. You really need big machines with big budgets to do big projects. It is not two-guys-in-a-garage-in-Palo-Alto friendly. The cost of running and training these large models needs to lose a decimal place or two before it gets there.
The unreasonable effectiveness of sheer scale in ML may mean that any business built on it lends itself to a kind of natural monopoly situation.
I’m an anti-monopolist, so I don’t take this to be good news. I'm guessing the ideal way forward here is some kind of government-run hardware that qualifying startups can rent time on at a massively subsidized rate. But the usual suspects would want veto power over which projects qualify and which don’t, and unlike in the private sector they’d have a reasonable case for getting it.
Again, I don’t have answers. These are hard structural problems.
One thing that frustrates me about the whole dialogue is that ever since The Social Dilemma, people have presumed that engagement algorithms are the problem, and therefore that getting rid of them will fix the problem. But that's just not true. They may exacerbate the problem, but we see the same sorts of fake news, radicalization in both directions, and echo-chamber isolation on Reddit, where the algorithm is nothing more than "upvotes make the article go up," or on college sports forums whose only algorithm is the "bump."
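To make that contrast concrete, here is a hypothetical toy sketch of the two mechanisms, with made-up posts and weights that are mine alone: a Reddit-style vote sort versus an engagement-style sort that rewards whatever keeps people reacting. The point above stands either way; the pathologies show up under both.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int
    comments: int
    reshares: int

def vote_score(p: Post) -> int:
    # "Upvotes make the article go up."
    return p.upvotes - p.downvotes

def engagement_score(p: Post) -> float:
    # Rewards anything that provokes a reaction, outrage included.
    # Weights are invented for illustration.
    return 1.0 * p.comments + 3.0 * p.reshares + 0.1 * p.upvotes

posts = [
    Post("Calm explainer", upvotes=900, downvotes=50, comments=80, reshares=20),
    Post("Outrage bait",   upvotes=400, downvotes=380, comments=900, reshares=350),
]

print(max(posts, key=vote_score).title)        # Calm explainer
print(max(posts, key=engagement_score).title)  # Outrage bait
```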
The truly brutal algorithm is buried in our minds. Minds which were never ever intended to be this connected.