Trendslop: Why AI Strategy Advice Is Just LinkedIn Jargon in Disguise
Maybe there was something to those strategies after all, the ones your boss comes up with that never quite hold up.
In the Netflix series Harry Hole, Oslo is overflowing with rubbish. Aside from a cable car up to the Ekeberg viewpoint, there's not much to write home about in Netflix's reimagining of the Norwegian capital.
Yet it's not just Harry Hole's Oslo that's drowning in garbage. New research from Harvard shows that artificial intelligence is generating a whole new type of slop. And for those of you who haven't heard of AI slop, I've written about it before:
Trendslop
Mark Zuckerberg is not exactly the humble type. That comes through clearly in the book Careless People, which I've reviewed here (in Norwegian).
Zuck does at least have some sympathy for his 79,000 employees. He is now developing an AI clone of himself. That way, every employee can talk to the boss himself whenever it suits them. The clone is being trained on his thoughts, his communication style, and – crucially – his strategic thinking about Meta.
And it's strategy, specifically, that has given Harvard's researchers their new term: trendslop.
LinkedIn jargon
Researchers simulated thousands of strategy-related questions across a range of different language models. ChatGPT, Claude, Gemini, DeepSeek, Grok, and the French language model Mistral were all put through their paces on strategic planning and thinking.
What the researchers found was that the language models almost always recommended solutions aligned with "modern management jargon" and trends. They were incapable of thinking strategically in ways specific to the context of the business in question.
The researchers tried. And tried again. Different industries, different types of company. They tried giving the models even more context and background. Nothing helped.
One example they highlight: the researchers gave ChatGPT a range of different businesses with detailed insight into each company and its market situation. But it made no difference whether it was a start-up, a construction company, or a Chinese firm. ChatGPT showed clear biases and preferences in what it recommended.
Finally, a name
Reading the study, I had that feeling of "finally, someone has put words to this!" I've tried using Claude and ChatGPT to think through strategy for Kludder myself. One central question has been how to reach even more readers – in both Norwegian and English.
I snapped my MacBook shut when it handed me a two-year plan that included moving to Silicon Valley to "more easily connect with thought and industry leaders like Sam Altman" – the CEO of OpenAI.
Maybe that's not the best example to illustrate the study. But it gets at something both the Harvard researchers and I experienced: language models don't analyse your business or your situation. They just deliver advice that sounds good, based on popular answers. Language models are trained on Reddit forums, LinkedIn posts, blogs, and thought leaders.
That data has turned AI tools into parrots with a talent for rephrasing – not business gurus. Which is exactly why I'm watching the Mark Zuckerberg AI clone with great curiosity. The Harvard researchers have shown that context doesn't help much. So what happens when an employee asks the Zuckernator for strategic advice? Will they get specific, useful insights that only Mark could have come up with – or will they get Reddit regurgitation dressed up as the Facebook boss's thoughts?
Here's the solution
As with all AI use, it comes down to knowing where it's weak – and how to work around that. According to the researchers, you can use a language model for useful strategic work – but the study's findings show that language models are not good tools for making decisions. They struggle with scenarios involving unknown factors. What you can use them for is exploring new angles and solutions. Because just as AI has its biases, so do you.
They're also good at spotting patterns and connections you haven't noticed yet. But sometimes they spot correlations that are completely daft – and that's when a touch of human judgement comes in handy.
For the past two years, my LinkedIn feed has had a steady stream of executives cheerfully sharing how much AI has helped them with strategic planning.
I hope they didn't let the AI make the actual decisions.
It would make creating a competitor analysis pretty straightforward.

Self-driving cars are getting worse
The police in San Francisco are at their wits' end. Waymo is behind the self-driving cars that have taken over the streets, particularly around Silicon Valley.
Towards the end of last year, reports emerged that the Waymo cars had turned aggressive. When they first appeared, the criticism was the opposite: they were slow and overly cautious. In some cases they got stuck in loops, circling roundabouts for minutes on end.
But then something changed. A Wall Street Journal article reported self-driving cars making illegal U-turns and flooring it a tenth of a second after the lights turned green.
This week, San Francisco's emergency services have been reporting Waymo cars blocking the road in emergency situations. Rather than pulling over – as any of us would – the cars appear to just stop where they are. The fire service has logged multiple incidents where the vehicles blocked the road and prevented access.
In some cases, emergency responders have been stuck on hold with Waymo's customer service for nearly an hour trying to get the cars moved.
Back home in Oslo, Ruter – the city's public transport authority – is experimenting with its own self-driving vehicles. I hope they're paying close attention to what's happening in the tech capital, San Francisco.
