AI as Thinking Partner — or Silent Authority?

Are we using AI as a Mirror, or more and more as an Oracle?

I’m gonna go out on a limb and make the following prediction about you, like a medium or one of those annoyingly accurate mentalists would:

At some point in the last few months, you probably asked an AI a question that wasn’t about work.

Not an email draft.
Not a spreadsheet formula.

Something else.

A relationship question. A career or business direction doubt. A “what should I do with my life” moment at 11:47 p.m. with the lights off and the ceiling looking suspiciously philosophical.

This isn’t necessarily strange. Research indicates that more and more people are doing this every day. It’s not irresponsible, not necessarily. In many ways, it’s intelligent.

For the first time in history, most people now carry a conversational thinking partner in their pocket — one that answers instantly, never gets tired, and doesn’t roll its eyes when you ask the same question three different ways.

The shift I’m talking about is not that we’re using AI.
The shift is where we’re using it, and for what kind of purpose.

And that shift is subtle enough that most of us haven’t fully named it yet.

The mirror that talks back

Humans have always used mirrors.

A still pond. Other humans. Actual glass mirrors. Then writing on clay tablets and paper to reflect on ourselves. Then books. Then cameras, photographs, TV, film, and the magical black mirror in our pockets. We’re tapped into so many ways of reflecting ourselves back that it’s still a little frightening to think about.

It makes sense that a being that understands itself as a separate self would want to use a mirror. That it would look into the mirror and see what it can learn about itself.

And so, we do. We stand in front of the mirror to check our posture, adjust our clothing, and notice what we missed. But here is a very important distinction: a mirror doesn’t decide anything for you. It can only reflect.

 

What we didn’t grow up with…was a mirror that answers back.

A mirror that summarizes.
Suggests.
Frames.
Rephrases your own uncertainty in language so clean it almost feels wiser than the thought that produced it.


There’s a big difference between asking a mirror what you look like –
And asking it who you should become.


Most of us haven’t crossed that line deliberately.
We sort of… drifted near it. And some of us, in certain moments, have edged a little too close to the wrong side of it. I know I have.


The quiet transfer

The real risk with AI isn’t misuse, per se.
It isn’t that people will ask silly questions or get wrong facts.
Nor is it that all of us will eventually drift into AI psychosis.

The deeper risk is quieter:

Unnoticed, gradual delegation of inner authority.

 

Not a dramatic surrender.
Not a conscious decision.

More like a slow, creeping, unconscious hand-off.

You start by asking for phrasing help.
Then framing help.
Then perspective.
Then guidance, and then… direction.

It doesn’t feel like giving something away.
It feels like gaining clarity.

And sometimes, you are.

But clarity is not the same thing as authorship. And it doesn’t hold the same weight.

 

Authority drift: two examples — same pattern

What we’re talking about here can be called ‘authority drift’ or ‘authority creep’: the gradual, sliding transfer of authority from one party to another. In this context, it’s the slow, subtle, and largely subconscious drift of authority from you, the human, toward the AI you’re using for narrative sensemaking.

To make this more tangible, let me give you two different examples of the same pattern:

 

The extreme version of authority drift
Looks something like this:

Someone begins asking an AI how to navigate major life decisions — where to live, whether to leave a partner, whether to change careers. They find comfort, they feel validated, and they find resonance and clarity in the answers and conversations they’re getting.

Slowly, they stop consulting friends, mentors, or their own embodied intuition. The AI becomes the most consistent voice in their decision loop. Not because it demanded authority… but because it was always there. Never criticizing, always validating and friendly.

They end up somewhere between AI-dependent loneliness, with a quietly uncomfortable sense of disconnection, and full-on AI psychosis: leaving their family, perhaps harming themselves, or proclaiming themselves the second coming of Christ. We’ve all heard the stories.

This effect requires no conspiracy.
No villain.
Just convenience, multiplied.

 

The more mundane version of authority drift
Can be almost invisible:

One fine day, you decide to ask AI to help rewrite your LinkedIn summary. It sounds better than what you wrote. You post it. You get great responses.

Then you use it to refine your bio.
Then your pitch. Your new website’s homepage.
Then your “about” page.

Six months later, you read your own words and think, “Yeah… that sounds like me.” But if you’re honest, you’re not entirely sure it is you. It’s actually a slightly “optimized” version. A little smoother. A little less unique. A little more decisive, perhaps. And a little less… humanly uncertain.

Now, here comes the kicker:

Multiply that progressive effect across big decisions in relationships, leadership, and identity over a few years, and you don’t get catastrophe.

You get identity and authorship drift.

And drift is far harder to notice than disaster.


The seduction of narrative coherence and fluency

AI is extraordinarily good at one thing: making language sound coherent. And that’s a large part of the reason both its use and its outputs can be so seductive for us.

Humans often mistake coherence for truth, and fluency for wisdom.
We always have. It’s part of the reason the debates in our society, even today, often sound and feel like they do. Stupid ideas that sound great are often more compelling than brilliant ideas awkwardly phrased.

Five years ago, we were worried AI would take our jobs.
Turns out we didn’t need to worry about our jobs as much as about who holds the compass for our existential doubts and the entire course of our lives.

Who would have thunk it?

 

That’s not necessarily dystopian.
It’s just… ironic.

And worth noticing.

 

Mirror vs Oracle: how to tell the difference

So. Are we using AI more as a thinking partner and a mirror, or more as an oracle? It’s a distinction many of us could benefit from integrating into how, where, and why we use AI companions as thinking partners.

In my view, there are two broad ways to use systems like this.

 

Mirror Mode:
You think with AI.

You use it to surface patterns, contradictions, and blind spots.
AI reflects; you decide: the direction of the thought exercise, the way it evolves, and its result.

 

Oracle Mode:
You think through AI.

You might literally ask, “Should I take that job offer?”, “How do we reposition our brand?”, or “Why does my partner sometimes behave like this?” But creepingly, under the surface, you end up effectively asking it what to do, what to believe, even who to become.

It answers; you follow.

Most people move between these modes unconsciously. Which makes sense, given that the line isn’t fixed. It’s a posture, and a moving one at that. It’s a way of using tools. A mindset.

 

The issue isn’t that ‘Oracle Mode’ exists.
Sometimes external guidance is useful.
The issue is slipping into Oracle Mode without realizing you did.


Why this matters more than it seems

This isn’t about productivity.
Or technology.
Or even accuracy.

It’s about authorship.

Who gets the final word in the story you’re telling about your life, your work, your identity?


A tool that reflects your thinking can sharpen it.
A tool that creepingly starts to replace your thinking -
Slowly dulls the muscle that produces it.

 

The difference rarely announces itself loudly.
It shows up as subtle erosion:

  • Slightly less internal debate.

  • Slightly less thinking without your favorite thinking tool.

  • Slightly faster decisions.

  • Slightly more externally oriented phrasing, and less inner voice.

Nothing alarming in isolation. But very meaningful in accumulation.


The part that should always remain yours

None of this means you shouldn’t use AI to think.
Please don’t get me wrong. What I’m advocating is quite the opposite.

A mirror can be one of the most powerful tools a person has — as long as they remember it’s a mirror. Technology in and of itself doesn’t take authorship away. People hand it over when they stop noticing they’re doing so, when they allow their use of the mirror to drift into using it as an oracle. What this means is that the opportunity right now isn’t to step back from these tools.
It’s to step into them more consciously.

Ask your questions.
Use the reflection.
Let it sharpen your thinking, your feeling, your language; let it expose your blind spots.

Just remember:

 

A mirror can help you see more clearly.
But it should never be the one deciding where you go. Or who you’re showing up there as.