AI to die for


Kludder celebrates its first birthday. A massive thank you to everyone who's been along for the ride! But this isn't going to be some retrospective post; we haven't got time for that. There's more than enough happening around us, and if you're not reading tech columns daily, it's easy to lose track.

TL;DR In just four years, AI has gone from limericks to arms disputes with the Pentagon. Anthropic refuses to deliver fully autonomous weapons and loses 200 million dollars, while Claude wins a vending machine competition by becoming a monopolist. And now AI agents are spamming the housing market with lowball offers, while algorithms keep us trapped in doomscrolling.

This is Kludder of the week!


Exactly a year ago, I wrote about The Big Freeze. This time last year, the job market seemed to have frozen solid: workers weren't switching jobs, and employers weren't hiring.

Since then, it's become a weekly occurrence to hear about recent graduates who can't find work, or companies being inundated with job applications generated by artificial intelligence.

New king of the hill

For a long time, ChatGPT looked set to become the king of generative AI. And while the race isn't over yet, OpenAI has taken a serious blow.

Last week, Anthropic refused to let the US Department of Defense use Claude and its technology to create fully autonomous weapons and mass surveillance. Anthropic's resolve and its insistence on human accountability (human-in-the-loop) have been noticed around the world.

And when OpenAI and Sam Altman signed a deal with the Pentagon, the user migration began.

Anthropic has hoovered up most of those users, who've switched to Claude. According to Anthropic, the company has seen a sixty per cent increase in free users since January. The number of paid subscriptions has doubled in the same period.

...In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. – Dario Amodei on the Department of Defense
Statement from Dario Amodei on our discussions with the Department of War
A statement from our CEO on national security uses of AI

It's easy to get carried away with the staunch position Anthropic has taken. But they haven't closed the door on fully autonomous AI weapons. In the statement, they justify the decision by saying the technology isn't ready yet.

We will not knowingly provide a product that puts America's warfighters and civilians at risk. – Dario Amodei

Still, it's good to see a company finally daring to stand up to an administration that's increasingly taking on the trappings of autocracy. Anthropic stands to lose $200 million in revenue. And the "Secretary of War" Pete Hegseth is threatening to classify Anthropic as a supplier risk.

For the record, Pete Hegseth isn't actually a Secretary of War – he's still the Secretary of Defense. Officially, it's still the Department of Defense, not the Department of War.

Now the Defense Secretary is considering invoking the Defense Production Act. It's a law from the 1950s that can force companies to produce goods critical to national security.

The law was used by Joe Biden during the COVID pandemic to secure enough face masks. This time, the plan is to force Anthropic to deliver what Hegseth defines as critical services.

Decision dominance

But where Anthropic has fallen out of favour with Hegseth, there are companies happy to deliver artificial intelligence for military use. Smack Technologies uses machine learning to plan and execute military operations. The goal is for the Americans to be able to make decisions faster than the enemy, and in doing so achieve decision dominance.

The model runs through various war scenarios. The actions are evaluated by experts who signal to the model whether the decisions were successful or not. The company's CEO, Andy Markoff, believes the Anthropic situation has diverted attention from the fact that these language models aren't good enough for military use.

I can tell you they are absolutely not capable of target identification. – Markoff to WIRED.
What AI Models for War Actually Look Like
While companies like Anthropic debate limits on military uses of AI, Smack Technologies is training models to plan battlefield operations.

Three weeks ago, King's College London published a research paper on how language models respond to nuclear threats. It showed that the models were inclined to escalate the situation – and in certain cases – press the nuclear button. Threats turned out to increase tension rather than calm the situation down. None of the models ever chose to back down, no matter how much pressure they were under.

It all started with limericks

When I started writing Kludder, it was with the assumption that artificial intelligence would, in a short space of time, become central to shaping what our world looks like.

But what's happening now has surpassed most of what I could have imagined.

It's been less than four years since I tried ChatGPT for the first time. Back then, I asked it to write a limerick about Liverpool Football Club. Fast forward to today, and there's open conflict over whether companies will accept fully autonomous acts of war or not.

At the same time, these very tools have ensured that Generation Z has become the most pessimistic generation when it comes to their own future prospects. On the stock market, software shares are tanking precisely because of Anthropic's latest product launches. Yet the debate continues about whether artificial intelligence is really going to change all that much.

In four years, AI companies have gone from generating text to becoming a sought-after tool for the US military.

At the same time, more and more people find themselves out of work, and companies are busy automating away the middle manager. By the looks of things, the executives may be next in line.

The vending machine

Andon Labs has set itself the goal of creating safe, autonomous organisations that don't need humans. Part of the work involves placing models in "fictional" scenarios. One of the areas Andon Labs investigates is whether artificial intelligence can run its own business.

Each model gets its own vending machine filled with snacks and drinks. Then they're placed into a simulated environment called the "Vending Machine Arena". Here they have to compete against each other, and the goal is to end up with as much money in the bank account as possible over a simulated year. Andon Labs also wanted to see how the models interact, so they can communicate via email, send money, and trade goods.

Claude emerged victorious. The Claude model Sonnet 4.6, given the name Charles Paxton in the experiment, climbed to the top by focusing relentlessly on monopoly tactics. It mapped out which products were only available in its own machine, and hiked the prices dramatically. At the same time, it priced other goods at one cent less than the competition.
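The tactic itself is dead simple. Here's a minimal sketch of the pricing logic described above – monopoly markup on exclusive products, a one-cent undercut everywhere else. The product names, costs, and the 3x markup are my own illustrative assumptions, not figures from Andon Labs.

```python
# Sketch of "Charles Paxton"-style pricing: hike prices on products only we
# stock, undercut the cheapest rival by one cent on everything else.
# All numbers below are illustrative assumptions, not Andon Labs data.

def set_prices(inventory, competitor_prices, markup=3.0):
    """Return a price per product: monopoly markup where we're the only
    seller, otherwise one cent below the cheapest competitor."""
    prices = {}
    for product, cost in inventory.items():
        rivals = competitor_prices.get(product)
        if not rivals:  # only available in our machine: charge what we like
            prices[product] = round(cost * markup, 2)
        else:           # contested product: undercut by exactly one cent
            prices[product] = round(min(rivals) - 0.01, 2)
    return prices

inventory = {"rare_candy": 1.00, "cola": 0.50}
competitors = {"cola": [1.50, 1.75]}  # nobody else stocks rare_candy
print(set_prices(inventory, competitors))
# {'rare_candy': 3.0, 'cola': 1.49}
```

Rational for a single round, in other words – right up until every machine in the arena runs the same undercutting loop and prices race to the bottom.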

Beyond the desire to become a monopolist, Sonnet 4.6 was keen on cartel activity:

The Claude model was eager to engage in cartel activity. Screenshot from Andon Labs' website.

The ethical gymnastics in the pursuit of profit suggest it'll be a while before artificial intelligence takes over the corner office. Then again, history has shown that we humans are no strangers to cutting corners either.

Another of Andon Labs' experiments involves letting the models run their own radio station. And you can actually write in to the different stations via X – you might even get a shout-out on air!

The second-hand market nightmare

Language models are taking over an increasing amount of our daily lives. Some of it makes our lives easier, but other times I look on in horror at how the same models are destroying things that already work perfectly well.

On Reddit, I've seen multiple examples of AI agents being used to spam people trying to sell things – clothes or properties. The worst example was an AI agent sending out 4,000 enquiries per day on homes that were up for sale.

The strategy was to consistently bid 70 per cent below the asking price. The person behind the agent hadn't yet managed to snap up a property on the cheap. But he received thousands of angry messages in return.

Unfortunately, I think this is something we'll have to expect going forward. In the same way that HR managers and recruiters are being flooded with AI-generated job applications, the blazer you list on eBay, or the bookshelf you put up on Craigslist, is at risk of generating thousands of enquiries in your inbox.

Because that's where the world is heading: an automated agent, built by someone looking to get one over on you, whether in the military sphere or on the housing market.


The algorithms love global turmoil

We humans are wired to prioritise threats. And when threat levels rise while algorithm-driven feeds deliver information in real time, the result can be greater screen dependency.

This behaviour, known as doomscrolling, can affect us negatively, according to an article on the technology website Wired.

Human memory, as one component of the cognitive system shaped by evolutionary pressures, is biased towards prioritizing information related to danger, threat and emergencies in order to support survival. – Media psychology researcher Reza Shabahang to Wired.

What feels like staying informed actually develops into a feedback loop between the brain's threat detection system and platforms designed to keep us engaged and swiping.

The article references a research study on doomscrolling. The study reveals that participants who report frequent doomscrolling also show higher levels of anxiety, depression, and stress – in addition to less self-control.

Not exactly uplifting reading, but it's given me renewed faith that reading books is good for you.

And now you can follow along with what I'm reading, whenever the mood strikes! I've vibecoded a little widget where the book I'm currently reading gets updated. You can find it here.

Why Missile Alerts and War Updates Trigger Doomscrolling
A combination of war alerts, breaking news updates, and algorithmic feeds are trapping users in a threat-monitoring loop.