EDITORIAL: Are We Still Thinking?

Published June 8, 2025

The tools we build often promise freedom. From effort, from repetition, from error. But what if that freedom comes at a price we do not see right away? What if we are not just using these tools, but slowly being shaped by them?

Artificial intelligence is becoming a daily companion. It helps write essays. It organizes data. It drafts messages, summarizes reports, even generates ideas. It saves time. It feels like progress. But something else may be happening too.

The skill we stop using is the skill we start losing

Consider maps. Once, navigating a city meant memory, awareness, and a feel for direction. Now, most people follow a moving blue dot. A 2020 study showed that regular GPS use weakens spatial memory. Users did not notice the decline. But it happened anyway.

The pattern is familiar. Calculators reduced the need for arithmetic. Autocorrect blurred the rules of spelling. Now, AI is doing more than correcting. It is thinking for us. Or at least, it appears to be.

When tools do the work, we forget how to work

Professor David Rafo saw it firsthand. Before the pandemic, his students struggled with writing. Then, during lockdown, their work became polished. Too polished. The students admitted it. AI had helped. It had improved the writing, but not the writers.

Rafo did not reject AI. He simply saw what it replaced. Writing is not just typing. It is thinking clearly, structuring ideas, struggling with meaning. These are cognitive muscles. Muscles that weaken when we let machines flex for us.

This is not an alarm. It is a mirror

AI is not malicious. It is not trying to weaken us. But its convenience is powerful. And that power creates a temptation. A slow and silent shift toward dependence. Not forced. Chosen. Repeated. Reinforced.

The term for this is cognitive offloading. Using external tools to reduce mental effort. It is not new. But the scale is. And the stakes are growing.

In law enforcement, AI systems have been used to match faces with surveillance footage. In Detroit, one pregnant woman was wrongly arrested based on a faulty match. The software was trusted. The human judgment was passive. The error was costly. And not rare.

The more we trust the system, the less we question the system

On social media, people ask AI to explain jokes. To translate opinions. To summarize simple points. Not because they cannot understand. But because they no longer try. The act of curiosity has been replaced by the shortcut of delegation.

Alec Watson calls this algorithmic complacency. The habit of letting programs shape your experience. Once, people searched deliberately. They bookmarked. They chose. Now, they scroll. The feed decides what matters. Not the user.

And the result is subtle. But it is everywhere. A passive drift from thought to automation. From creation to consumption. From agency to pattern.

The systems are fast. But are they wise?

There is no doubt that AI improves productivity. But that is not the only measure of value. There is something deeper at risk. Not just memory. Not just analysis. But awareness itself.

Some researchers point to model collapse. When AI models are trained on other AI content, the result degrades. Information loops back on itself. Meaning thins. The machine begins to eat its own output. After a few cycles, what remains is noise.
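
To make the loop concrete, here is a minimal sketch in Python. A toy Gaussian stands in for a real model, and every number in it is illustrative, not drawn from any particular study. Each generation fits itself only to the previous generation's output.

```python
# Toy illustration of model collapse: each "generation" of a model
# is trained only on samples produced by the previous generation.
# A simple Gaussian stands in for the model; the feedback loop,
# not the model, is the point.
import numpy as np

rng = np.random.default_rng(seed=1)

# Generation 0: original "human" data, with full natural variation.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 101):
    # Fit the model to whatever data is currently available.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only this model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if generation % 20 == 0:
        print(f"generation {generation:3d}: spread = {sigma:.3f}")
```

The spread tends to shrink with every cycle. Rare values from the original data stop being reproduced, the tails vanish, and the distribution narrows toward a thin band of noise. A statistical copy of a copy.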

And if the internet fills with synthetic knowledge, how will we know what is real? What happens when the source cannot be trusted because the source is a copy of a copy?

This is not fiction. It is now

Studies suggest that more than half of online content may already be AI-generated or AI-translated. The dead internet theory argues that most digital interaction is now between bots. If that is true, then many of the signals we see are not coming from people at all.

The implications are not just technical. They are existential.

So what can we do?

The answer is not rejection. AI is not the enemy. It is a tool. Like the spreadsheet once was. Like the calculator. But unlike those tools, it imitates human thought. It mimics judgment. That illusion of intelligence is what makes it powerful. And what makes it dangerous.

Professor Thomas Dietterich put it clearly. Language models are not knowledge bases. They only simulate knowledge. That distinction matters. Because when the model always answers, we forget to ask whether the answer deserves our trust.

The risk is not that AI becomes conscious. The risk is that we stop being conscious.

The challenge is not to unplug. It is to remember

Remember how to question. Remember how to wonder. Remember that effort is not a flaw. It is where learning happens. And meaning. And growth.

We can use these tools without surrendering to them. But that takes discipline. It takes a willingness to pause. To reflect. To test what the machine says against what we know. And if we no longer know, to be honest about that.

In the end, thinking is what makes us human

Not automation. Not efficiency. Thought. Intention. Reflection.

We do not need to fear the tools we build. But we must remember to use them with care. Because once we forget how to think, the machine will not remind us.

As Descartes once said, "I think, therefore I am."

That remains true. As long as we still choose to think.