Is machine learning in the enterprise mostly "snake oil"?
Media is a case study for ML adoption — and a warning
One popular take I see is that the vast majority of AI/ML out there in the wild is essentially snake oil — it’s overhyped and can’t actually do very much; its transformative impact is overrated.
This point of view has some support even among informed insiders, anchored as it is in the following realities:
There’s a lot of fake and/or shoddy “black box” ML being sold and deployed in many corners of our society, both public and private sector. This stuff either doesn’t work, or to the extent that it does work it’s actively and legitimately harmful.
Surveys, studies, and industry scuttlebutt all indicate that there are many corporate contexts where ML is just not providing much (if any) business value, despite the massive amounts being spent on it.
Both of the above are very much real parts of the present AI/ML picture, but they aren’t by any means the whole picture — or even its most important parts.
I have a unique perspective on this issue of “is AI real, or is it snake oil?”, because I have lived and worked through an entire industry’s transition to what we might meaningfully call “AI first.” My colleagues and I now depend directly on ML for core business functions, and indeed the way our industry is structured at every level — from business models to the actual day-to-day of how we experience our work — is now thoroughly inseparable from the work of a handful of large machine learning models deployed at scale.
I’m talking about the media, of course. Online news media is what it looks like when your industry is so thoroughly captured by AI/ML that the people working in it don’t even typically think of themselves as just one step in a set of ML-driven feedback loops.
The skeptic’s case
Before I get into what it means to be a truly AI-first company in an AI-first business, I should give some airtime to the skeptic’s case. Because it’s not wrong.
Here’s a good example of the kind of thing people inside AI will sometimes say, courtesy of a recent TechCrunch piece on Google’s new managed ML platform:
“Machine learning in the enterprise is in crisis, in my view,” Craig Wiley, the director of product management for Google Cloud’s AI Platform, told me. “As someone who has worked in that space for a number of years, if you look at the Harvard Business Review or analyst reviews, or what have you — every single one of them comes out saying that the vast majority of companies are either investing or are interested in investing in machine learning and are not getting value from it. That has to change. It has to change.”
I don’t have access to the HBR report mentioned in that quote, but I do have a recent report along the same lines from MIT Sloan Management Review, Expanding AI’s Impact with Organizational Learning.
This report surveys a number of enterprises and finds that despite all the spending on ML and the professed commitment to centering strategy around it, it’s just not paying off for most respondents.
Our research — based on a global survey of more than 3,000 managers, as well as interviews with executives and scholars — confirms that a majority of companies are developing AI capabilities but have yet to gain significant financial benefits from their efforts...
Despite these trends, only about 1 in 10 companies — a group we call Leaders — report obtaining “significant” financial benefits with AI. (We provide a detailed explanation of how we define and calculate these benefits in the “About the Research” section.) What counts as significant depends on company revenues. In our 2019 report, we discussed the fact that only 3 in 10 companies reported any impact with AI…
In my reading of this report, the take-home was that the more of your organization you can turn into an instance of the Everything Game, the more you win. As the report puts it:
Precision, speed, and learning are not a function of financial investment alone; they require large-scale organizational shifts in mindsets, processes, and behaviors. “As more and more of the core of a company is built around software and data, the nature of the organization changes,” says Marco Iansiti, the David Sarnoff Professor of Business Administration at Harvard Business School. Rather than just applying AI to specific cases, a corporate architecture based on algorithms and data enables an organization to not just use AI, or even extensively automate with AI, but to learn with AI. This volume of organizational change takes time and effort. “It’s an architectural transition that takes a lot of time for a traditional organization,” cautions Iansiti. “It’s a massive change.”
In other words, if you have a regular business that consists of humans arranged in some traditional organizational pattern, and it has a successful history of adapting to the market in the way that different collections of humans adapt to new situations (i.e., conflict, consensus, nudges, power games, etc.), then you cannot just inject a little ML into some part of this existing entity and enhance it cybernetically — sort of like putting some nanobots in your arm and making it a super-strong cyber arm that can lift cars.
No, you have to be willing to gamify the entire organization. Everything is quantified and evaluated with ML, and everyone is constantly checking scores and tweaking signals.
Here’s how the report describes the results of a truly profitable, transformative AI integration into an organization:
They’ve learned how to use human-machine interactions to refine processes quickly as circumstances change. They haven’t uncovered a single prescriptive structure for human and machine roles in processes; they’ve learned how to adapt human and machine roles to each situation. Growing collective organizational knowledge, in the form of both digital data and human experience, supports ever-improving decision-making. Organizational learning with AI becomes a systematic, continuous process of improvement.
Think about the above prescription in light of two of my bullet points from an earlier post where I describe what it feels like to play the Everything Game:
I’m always either aware that I’m manipulating signals in order to make small, continuous optimizations for some target score, or I’m aware that others are doing it (and that it’s usually not in my interest for them to do so).
I’m constantly trying to learn a bit more about the structure of the game, either because I need to play it better or because I need to be more aware of how others are playing it so that I can evaluate what I’m seeing.
AI isn’t a new productivity tool you use to more efficiently do the same thing you were already doing. Sure, there are some niche applications where you can introduce it that way, but you get very minimal value for it. Rather, to get maximum value from AI/ML, you have to think of it as a new way of having an organization.
Of course, “new” isn’t always better, as we in the news business have learned to our sorrow.
Journalism’s AI revolution
I said in the intro that journalism is now an AI-first field. I want to be clear that I’m not mainly talking about internal reporting and ranking tools, like Chartbeat, or analysis tools like Google Analytics. Though those things are part of it.
No, what I really mean by journalism’s AI revolution is “Twitter and Facebook,” and the changes those two social media platforms have brought to every part of the story pipeline — from basic news gathering, to editing, to publishing, to promotion.
A successful newsroom in 2021 is not doing the same thing as a successful newsroom in 2011, except they’ve swapped out some older part of their news operation for Twitter. You can’t point to the specific thing some news org is now doing with Twitter that they used to do with some other thing, but slower or less effectively.
Instead, a news org’s ability to break news and steer into ephemeral flare-ups of traffic is meaningfully inseparable from its employees’ social media activity (especially but not limited to Twitter).
Reporters use Twitter to get leaks and insider info, track the development of stories, monitor expert conversations, and figure out what has zeitgeisty traffic juice right now and what doesn’t; then they take all that and feed it back into Twitter to push the feed in a different direction.
We use Facebook to a lesser extent for the same kinds of news gathering, but as a news input, it’s more about reporting on different groups that use the platform — political parties, grassroots orgs, extremists, hobbyists, activists, and so on. Facebook’s more important role for news has historically been in the distribution and promotional parts of the pipeline, although newsfeed updates have changed that quite a bit over the past few years.
Then there’s a smaller constellation of social media sites like Gab, Parler, and to some extent Reddit, that act as inputs for reporting on niche communities — either via reporters’ passive monitoring or active networking.
The thing that makes all of these platforms “ML tools” is that the reporter’s experiences on all of them are mediated by machine learning, both in terms of inputs and outputs.
The kinds of content and people that get surfaced to reporters by the platforms, and also the kinds that get hidden, are all products of “The Algorithm.” That algorithm is constantly being gamed and meta-gamed by all the parties on the platform, and it’s constantly being monitored and tweaked by the platform’s owners. All reporters at all successful, high-traffic sites, then, see what a handful of algorithms show them; and they all miss what a handful of algos do not show them.
It’s not just the inputs to reporting that are shaped by the algo; the editing and packaging (i.e., headline writing, lede crafting, hero image selection) are all done with the algo in mind, too. We’re always trying to tune our content for more distribution and engagement. Anyone who tells you that this is a “myth” or not true is either misinformed or lying. Social media algorithms are as vital a part of any successful news team’s target audience as actual human readers, and editors know this and act accordingly.
The end result is that the whole news process is now a massive nest of ML-driven feedback loops, filtering, signal processing, and continuous optimization.
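To make the “continuous optimization” idea concrete, here’s a deliberately toy sketch — not how any real platform or newsroom tool works — of the simplest possible engagement-driven feedback loop: an epsilon-greedy choice between two hypothetical headline variants, where each round’s observed clicks feed back into which variant gets shown next. The variant names and click-through rates are invented for illustration.

```python
import random

def pick_headline(stats, epsilon=0.1):
    """Epsilon-greedy choice: usually exploit the best-scoring
    headline so far, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # exploit: highest observed click-through rate to date
    return max(stats, key=lambda h: stats[h]["clicks"] / max(stats[h]["shows"], 1))

def record(stats, headline, clicked):
    """Feed the outcome back into the loop."""
    stats[headline]["shows"] += 1
    stats[headline]["clicks"] += int(clicked)

# two hypothetical variants with unknown "true" appeal
true_ctr = {"Straight news": 0.03, "Zeitgeisty angle": 0.09}
stats = {h: {"shows": 0, "clicks": 0} for h in true_ctr}

random.seed(42)
for _ in range(5000):
    h = pick_headline(stats)
    record(stats, h, random.random() < true_ctr[h])

# the loop drifts toward whichever variant the signal rewards
print({h: stats[h]["shows"] for h in stats})
```

The point of the sketch is the shape of the loop, not the algorithm: what gets shown is a function of what was clicked, which in turn shapes what gets clicked next — the same dynamic, scaled up enormously, that newsrooms are tuning their content against.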
Machine learning is so thoroughly baked into the very DNA of the modern news business via social media that if a news organization tried to ignore the algorithms and do things the way they’d have done them before maybe 2010 or so, it would have no traffic and no money.
So if you want a picture of what “success” looks like for ML adoption, I submit that the news industry is a really good model to examine. Sure, news orgs are dependent on publicly accessible ML models, and not boutique in-house models, but I’m not convinced this necessarily makes such a big difference in overall impact.
(It’s sort of like the on-premises cloud vs. public cloud debate — there are real implications for budgeting and security, but for end users, it’s maybe not such a big difference.)
I should qualify the above by saying that I don’t expect every area of business to be amenable to the kind of ML disruption I’ve just described. I expect there are some businesses currently investing in ML where it will just never pan out; they’ll drop their ML spending but keep being successful, because ML just isn’t a good fit for that market — at least in its present incarnation.
Human learning isn't working out too good either