Deleting ChatGPT and Switching to Claude Won’t Save Your Soul

Deleting ChatGPT and Switching to Claude Won’t Save Your Soul

The ethics theater of AI — and why switching LLMs changes less than you think

Over the past week, an interesting ritual has spread across tech Twitter and LinkedIn. I see Rutger Bregman, my fellow Dutchman and an author and thinker I deeply respect and admire, as one of the greatest proponents of it, at least in my personal timelines.

People are deleting ChatGPT. Switching to Claude. And mobilizing others to do the same. 

Not quietly — but publicly, often accompanied by a small declaration of moral clarity:

“I’m switching to Claude. It seems to be the only ethical thing to do.”

Fair enough. The reasoning seems simple.

OpenAI is too commercial.
Driven by shareholder value.
Too compromised.
Too close to governments.

Anthropic, on the other hand, is framed as the more responsible AI company — the one that actually cares about safety and ethics.

But if the history of Silicon Valley teaches us anything, it is this:


When a tech company draws a moral line, follow the money first —

and ask questions later.


Because the uncomfortable truth is that every major AI company today sits inside the same political and economic ecosystem — one deeply intertwined with governments, military contracts, and national security interests. Welcome to late-stage capitalism. And/or techno-feudalism.

Switching chatbots may change the interface. And that uneasy feeling in your gut.

It hardly changes the system.


The new AI morality war

The current moment was triggered by a controversy between Anthropic and the U.S. Department of Defense.

According to reporting and statements from Anthropic, the company pushed back on two potential uses of its AI system Claude:

  • Mass domestic surveillance.

  • Fully autonomous weapons.

You can read Anthropic’s explanation here.

Those red lines helped reinforce Anthropic’s reputation as the “ethical” AI company. But the same statement also contained a less widely shared sentence.

Anthropic said it supports:

“...all lawful uses of AI for national security aside from those two exceptions.”

That line changes the story.

It means Anthropic is not actually rejecting military use of AI.

It is negotiating the terms. It turns out, they don’t have such a big problem with mass surveillance when it’s done over people in other countries. Nor of having their AI technology used in near-autonomous weapons, to find, identify, target, and kill people deemed adversaries.

And this is where the story becomes much larger than Claude versus ChatGPT.


The infrastructure behind AI

To understand what’s really happening, we have to zoom out.

Large AI systems are not just software.

They are infrastructure — extremely expensive infrastructure, requiring:

  • Massive cloud computing;

  • Enormous datasets;

  • Billions in capital.

Which means AI companies inevitably plug into existing power structures.

The ecosystem looks roughly like this:

This ecosystem is sometimes called the emerging AI–military complex. Isn’t that a scary phrase? One that should raise eyebrows? Gives you chills? No?

Now, the story goes deeper. And this complex did not appear overnight.

Silicon Valley and the Pentagon

The relationship between the tech industry and the U.S. military goes back decades.

Here’s a fun fact we often seem to forget. The internet itself began as a DARPA project. The Defense Advanced Research Projects Agency (DARPA) is a U.S. Department of Defense (DoD) agency established in 1958, aimed at “preventing and creating strategic technological surprise.”

Many foundational technologies of modern computing — from GPS to semiconductor research without which your smartphone wouldn’t feel nearly as ‘smart’ — were funded through defense programs.

Today that relationship is entering a new phase: artificial intelligence as military infrastructure.

One example was Project Maven, a Pentagon program that used AI to analyze drone footage and identify objects and targets. The project involved Google technology development, and sparked internal protests among Google employees in 2018.

But the broader pattern never disappeared.

Military institutions are now among the largest customers for cloud infrastructure, relying heavily on platforms like Microsoft Azure, Amazon Web Services, and Google Cloud.


When a technology becomes strategically important,
governments tend to want access to it.

And when governments offer massive contracts, companies tend to accept.

 

The Palantir connection

One company sits squarely at the intersection of AI, intelligence, and military operations: Palantir.

Named after the ‘seeing stone’ from Tolkien’s Lord of The Rings - and what a pleasant idea, the all-seeing eye of Sauron in a physical artifact - Palantir received early funding from In-Q-Tel, the venture capital arm of the CIA. Its core platforms — including Gotham and Foundry — are designed to integrate large volumes of intelligence data and operational information.

These systems are widely used by:

  • U.S. intelligence agencies.

  • Military commands.

  • Allied governments (such as Israel).

Recently, Anthropic partnered with Palantir to deploy Claude in classified government environments. That partnership raised questions because Palantir software is already deeply embedded in military and intelligence workflows. 

It’s also funny to me, personally, that it is widely known that both in the attacks on Venezuela (a few days before the news of “Anthropic standing up against the DoD”) and the attack on Iran (a day or two after), Claude was used by defense operatives in the field, planning and coordinating the action. 

What all of this means? That’s up to you to decide for yourself. For me, it means Claude is not just a (very ethical) chatbot.

It can also - and already has - become an analytical layer inside existing military data platforms.


AI and modern warfare

Artificial intelligence is increasingly used in military contexts for tasks such as:

  • Intelligence analysis.

  • Battlefield simulations.

  • Logistics planning.

  • Target identification.

This means that even when AI systems are not directly controlling weapons, they can influence the chain of decisions leading to military action.

Researchers at King’s College London recently ran geopolitical simulations using large language models. In many of those simulations, the models escalated conflicts rather than resolving them — including scenarios involving nuclear threats.

The lesson here, is not that AI systems want war.

It is that they are being integrated into environments where war planning already happens. And that, when following the rules of those who play these types of games, they understand that the ‘willingness to escalate’ can make you win a conflict, regardless of the power balance or playing field. And regardless the consequences or ‘collateral damage’.

The Israel cloud controversy

Another legitimately scary, and slightly nauseating example of AI infrastructure intersecting with geopolitics is Project Nimbus.

Nimbus is a cloud computing contract between the Israeli government and two companies:

  • Google.

  • Amazon.

The contract is reportedly worth around $1.2 billion.

The project sparked internal protests from hundreds of employees at both companies, who argued the infrastructure could be used for surveillance or military operations. The UN high court stated that the use of AI in war goes against human rights. The Israeli government up to present has meanwhile killed tens of thousands of Palestinian men, women and children, and is still detaining, threatening, and mass-monitoring those who remain.

The episode highlights a much more than ‘uncomfortable reality.

AI infrastructure is not just about productivity tools.

It is increasingly part of state power and geopolitical tension and conflict.


Ethics as branding

None of this means Anthropic is uniquely unethical. I would hardly assume nor assert that.

Anthropic may genuinely be trying to define boundaries around how its technology is used.

But the broader pattern remains.

In Silicon Valley, as anywhere in business, under the constraints of capitalism, free markets, and shareholder value - ethics is still often part of competitive positioning.

Companies present themselves as the responsible alternative to their rivals.

Examples include:

  • OpenAI emphasizing alignment and safety.

  • Google publishing AI principles.

  • Anthropic promoting “constitutional AI”.

These frameworks are meaningful, I’m sure. I believe the people who worked on them and who still work on trying to uphold the values and principles within them, more often than not probably mean well. The leaving of security and ethics people from large AI companies we’ve seen happen left and right, sometimes in droves, the past few years, are a testament to that. 

But these frameworks also exist inside a system where:

  • Venture capital demands growth.

  • Governments demand strategic capability and compliance.

  • Cloud infrastructure, Video CPU, and data cost billions.

Under those conditions, ethical lines tend to move.


Ah, but what about OpenAI’s political position?

Some people will respond to all of this by pointing to OpenAI’s political positioning — especially its funding towards and growing alignment with the Trump administration, which critics describe as authoritarian and dangerously escalatory in global terms. And - just for good measure - I would not disagree with them. I merely seem to differ from others in viewing Trump and his supporters and administration as symptoms and symbols, not causes of a rotten system.

Regardless, from this perspective, I honestly do understand how switching to Claude can seem like the only morally coherent response.

But - and here’s the catch - the speed with which this debate collapses into ‘choosing between two companies’ is in itself revealing.

It suggests that the space in which we imagine ethical agency has quietly narrowed to selecting between corporate platforms — even when those platforms operate within the same cloud infrastructure, the same defense and geopolitical ecosystems, and the same economic incentives.

At that point, the question may no longer be ‘which company is more or less ethical’.

It may be why that is the only question we feel able to ask.


Put very bluntly: one tech company may help shape the political landscape that brings a government to power, while another helps build the infrastructure that government uses once it is there.

But once the same cloud providers, venture capital networks, and national security contracts underpin all of them, the difference starts to look less like resistance — and more like role distribution inside the same system.


The deeper issue

This is why the current “Claude versus ChatGPT” debate is actually slightly misleading.

It frames the question as:

“Which AI company is the good one?”

But the real question is larger.

It is about how power works in technological, economical, and geopolitical systems.

AI companies operate inside a global economic structure that rewards:

  • Scale.

  • Generating investment.

  • Return on investment.

  • Influence and power.

  • Alignment with governments.

That structure does not disappear when we switch chatbots. Sorry to burst your bubble.

But there is, I promise, some real hope.

Mirror vs Oracle

In earlier essays I wrote about two very different ways humans can relate to AI.

AI can function as a mirror - A tool that reflects our thinking, helps us explore ideas, and expands our understanding of ourselves and our development and growth trajectories.

Or it can function as an oracle - A system we begin to treat as an authority — one that answers questions, makes decisions, and gradually replaces human judgment.

Military and intelligence uses of AI push it toward the second role.

They turn AI into decision infrastructure. On a massive, grotesque, and, frankly, scary scale. Namely; a scale that influences all of our lives, and has the potential to influence all of our lives in incredibly impactful, horrible ways.

But in our personal lives we still have a choice. Some choice. 

We can treat AI as a mirror — a thinking tool.

Or we can outsource our agency to it.

And what we decide to do with that choice, will not only in a small, unique, individual way change how AI impacts our own lives; it also helps safeguard us from inadvertently contributing to the push to approach this technology from exactly the wrong angle.

Not mainly as a function of which tools we choose to use - but mainly as a function of how we approach our use of the tools, and what we use them for.

Lessons from Capoeira: using the master's tools

Sure, we could switch to the decentralized, open-sourced alternatives. Or smaller, less venture capital-backer versions. The uncomfortable reality is that most of us cannot fully avoid the more commonly used systems. At least, for now.

AI tools are becoming part of professional life. For many, they’re becoming a part of personal life as well. Intimately so, at times. 

Rejecting them entirely may not be realistic.

But we can still decide how we use them.

Capoeira is the Brazilian martial and musical art form that carries a philosophical foundation of “fighting disguised as a dance, and dancing disguised as fighting - for freedom”. In Capoeira, played for instance in the Quilombos or ‘Free cities’ founded by former slaves, the atabaque drum accompanying the game was often made from barrels taken from slave plantations.

Someone once told me that;

‘To destroy the house of the Master, you can’t use the tools the Master gave you’.
But I have come to think: sometimes, you can repurpose those tools.


Like in the case of that atabaque drum made from the barrel. The same object that once was simply a background prop for domination became part of a cultural practice of resistance and creativity.

AI tools may require a similar approach.

We can use them without surrendering our judgment to them.

We can benefit from their capabilities without pretending that the corporations behind them are moral authorities.

We can even use them to clarify our relation to those corporations, and the larger power structures connected to them. Like I have, for this very article. 

The real ethical task

Deleting ChatGPT and installing Claude may feel like an ethical choice. And if it does so, for you, by all means - go for it. 

But choosing one tool over the other, is often more symbolic than structural.


The deeper challenge is not choosing the “good” AI or tech company.

It is maintaining our own agency and sovereignty in a world increasingly shaped by algorithmic systems.


That means asking questions.

Following the money.

Understanding, incrementally more, how the infrastructure works.

And remembering that no (large) AI company — no matter how carefully branded — exists outside the incentives of power, profit, and geopolitics.

The tools may be powerful.

But they are still tools.

Whether they become helpful mirrors to our growth, or authoritative oracles, is ultimately up to us.


 


 

I help leaders, founders, creatives, seekers - and teams - move from complexity and doubt to clarity of direction and identity. Not by pushing harder, but by slowing down. By reflecting deeply. By working with storytelling and AI as conscious tools for growth and transformation.

If this story resonated with you: I’m turning my framework on conscious AI use into a short field guide; ‘AI: Mirror vs. Oracle’. DM me ‘MIRROR’ if you want early access.


“Deleting ChatGPT won’t save you.

Switching to Claude changes the interface — not the system.”