Workslop

40 billion dollars in AI investments, yet 95 per cent of companies see no return. An MIT study poses an uncomfortable question about what AI is actually delivering.
Photo by Jp Valery / Unsplash

This week was supposed to be a deep dive into the question more and more people are asking: Is the AI bubble about to burst? But then a new word appeared on my radar. Because it turns out artificial intelligence isn't creating efficient employees. Quite the opposite.

A few months ago I wrote about Shrimp Jesus, and how AI-generated content is flooding social media:

Rest in peace, Internet
It started as a conspiracy theory. Now, Shrimp Jesus is evidence of why "The Dead Internet Theory" may be real; almost half of all web traffic now comes from bots.

Meta has decided there's potential here. Last week the company launched Vibes, a feed consisting entirely of AI images and videos. So we can expect even more AI slop in the years to come.

No return

Generative AI was supposed to give us superpowers. This was the thing that would turbocharge efficiency and ensure you and I could enjoy self-actualisation, good times and the four-day work week! But what's playing out now is something entirely different from what we'd hoped for.

A study from MIT has mapped the effect businesses see from adopting artificial intelligence. The study shows that, despite 40 billion dollars in AI investment, 95 per cent of companies see no return or gain from their efforts. To put it bluntly: the only thing generative AI has delivered is annoying nagging from Microsoft Copilot.

The MIT report emphasises that tools like ChatGPT and Copilot do improve employee productivity — drafting emails, summarising documents, automating routine tasks. The challenge is that all these gains are limited to the individual employee's work.

The language models aren't integrated into the business itself, and have a tendency to forget necessary context. So we — the employees — are left with a feeling that this is helping us. But the companies themselves see no increased profit or savings.

That doesn't quite square with the trend of cutting middle managers (Norwegian), but when it comes to artificial intelligence there seem to be multiple truths. Often contradictory ones, which makes the whole thing even more confusing.

Generated productivity

Last week I read about workslop for the first time, a spin-off of AI slop. Stanford's Social Media Lab and BetterUp Labs have joined forces to find out why the gains aren't materialising. The whole world seems enthusiastic, and AI usage has doubled since 2023, yet the needle isn't really moving. Why is that?

The study has examined how artificial intelligence is a time thief masquerading as productivity. 40 per cent of office workers in the US say they've received workslop in the past month. This is hardly just a US phenomenon — I've received workslop in spades too. And I'm sure you have as well.

Low-effort, AI-generated content that appears polished but lacks the substance to move projects forward. – Harvard Business Review's definition of workslop.

A billion-dollar problem

Since ChatGPT went mainstream, the bar for generating content that looks legitimate at first glance has been remarkably low. The study, covered in Harvard Business Review, shows that a company of 10,000 employees incurs an additional cost of around 9 million dollars a year because of workslop.

But workslop doesn't just have financial consequences. Half of the office workers surveyed perceive their colleagues as less creative and competent when they send off workslop. 53 per cent got annoyed at their colleagues (but let's be honest, we did that before AI came along too). Most importantly, workslop creates extra work that rarely serves any purpose.

Receiving this poor work created an enormous time drain and inconvenience for me. And because it was delivered by a superior, I felt uncomfortable confronting her about the lacklustre quality. Instead, I took on the work of doing what should have been her responsibility — work that held up other projects I was working on. – A victim of workslop, quoted in the study.

Instead of generative AI laying a solid foundation, more time ends up being spent fixing the work after the fact. Harvard Business Review highlights polished presentations, summaries, reports, or code as typical workslop. In many cases, it simply shifts work from whoever created the report or code to whoever is tasked with understanding or using it.

Don't get me wrong: using generative AI has plenty of benefits. I use it loads for brainstorming or structuring my own thoughts and notes. But there's a difference between using AI to polish a piece of work versus generating a presentation that's rooted in neither reality nor a company's actual circumstances. When I brainstormed how to develop Kludder, I got a detailed two-year plan that included moving to San Francisco to be closer to tech titans like Mark Zuckerberg and Sam Altman. And I'll admit I was inspired by the plan. But I don't have 25 million kroner for a flat in Silicon Valley, nor a revenue stream from Kludder, nor access to the world's most powerful people.

Yet.

A new challenge

While jobs are being cut and corporate hierarchies flattened, generative AI is in the process of creating a billion-dollar productivity challenge.

I had to waste time following up on the information and checking it against my own research. Then more time went to setting up meetings with other superiors to deal with the task. I kept wasting time, and eventually had to redo the work myself. – Yet another scorned victim of workslop.

I mentioned the AI bubble at the top. More and more people are talking about a potential bubble and how this will all play out. The Wall Street Journal compares the investments to opium: investors are buying hope, or "hopium", as the paper calls it. When Oracle and OpenAI announced they were building data centres together, Oracle's stock shot up and Oracle founder Larry Ellison became, for a few hours at least, the richest man in the world. But exactly what the investment would accomplish, and the details of the whole thing, were hard to pin down.

It's this euphoria, stocks shooting up whenever a company is mentioned alongside OpenAI, that's led commentators and journalists to reach for the bubble label. OpenAI is currently valued at around 500 billion dollars, roughly a quarter of Norway's oil fund. This year the company expects sales revenue of 13 billion dollars, so there's a long way to go before OpenAI makes money.

If you look at OpenAI, the valuation is around 500 billion (dollars). There is no liquidity, but the company estimates revenue this year of 13 billion dollars, so that's 38 times sales. Forget profit — OpenAI isn't making any. It's trading at 38 times sales. It takes an extraordinary amount of growth to justify that. – Wall Street Journal columnist James Mackintosh.

But before OpenAI can become a profitable company, vast data centres need to be built and billions of dollars spent on Nvidia chips. And for every other company: set aside inflation, geopolitical turmoil and tariff walls for a moment. There's a new threat now:

Workslop.


TikTok-ified AI

This week OpenAI launched an AI video app called Sora. The platform is based on the company's latest video generation model, Sora 2. It comes with a TikTok-like For You page with user-generated clips (not unlike Meta's Vibes, which I mentioned above). This is the first time OpenAI lets you add AI-generated sounds to videos. Sora isn't available in Norway yet. But we'll likely be seeing more of Rasta Monkey cooking up a storm (you'll find him at the bottom of this post), fruit eating itself and Italian brain rot in the time ahead:

Scams and fake advertising

This week's Kludder turned into an unintentional special on how AI-generated content affects us, and here's the last story of the week.

This week a report from the Tech Transparency Project was published. It shows that fake advertising and scam attempts are rampant on Facebook.

TTP - Meta Awash in Deepfake Scam Ads
Scammers are spending heavily on Facebook ads that use deepfake videos of President Trump, Elon Musk, and other political figures to hawk fake government benefits.

The report identified 63 advertisers using misleading or fraudulent methods. These advertisers make up roughly 20 per cent of Facebook's top 300 advertisers in the political or social advertising category.

In total, the scam operators bought nearly 150,000 ads and spent approximately 49 million dollars over a seven-year period, without Facebook taking any action to stop them.

Meta is very aware of these types of scams. They just didn’t care. – Katie A. Paul, director of the Tech Transparency Project, to the New York Times

[Embedded TikTok from @rastamonkey: "rasta monkey cooking up a bird stew" #rastamonkey #ai]