A few weeks ago, we opened applications for a role. The response was overwhelming. On the surface, that should have been encouraging. In reality, it was unsettling.
As I read through the submissions, a pattern became impossible to ignore. The same rhythm. The same phrasing. The same overly polished certainty. The same careful hedging. The same neat paragraphs that said a lot while saying very little. If you work closely with artificial intelligence, you can spot it instantly. AI has a smell. A cadence. A way it circles ideas instead of owning them. What I saw was not originality. It was what I now call AI vomit. Word-for-word responses. Predictable structures. Mechanical confidence.
To be clear, I am not anti-AI. I use it daily. I understand it deeply. AI is a tool, an assistant, a multiplier. But what I saw was not assistance. It was replacement. And that is where the danger lies.
I remembered something from my childhood. In grade five, around 1995, there was panic about scientific calculators. Teachers warned us that calculators would make children lazy. That we would forget how to think. That mental math would disappear. History proved otherwise. Calculators did not kill intelligence. They freed it. We learned faster, explored deeper problems, and focused on logic rather than arithmetic.
AI is different.
Calculators solved a narrow problem. AI touches everything. Language. Reasoning. Judgment. Creativity. Decision-making. The very muscles that define thinking itself.
As I moved through the applications, another truth became clear. Many applicants did not even understand what they were applying for. They had scanned for salary, copied the job description into a prompt, and sent whatever came out. No curiosity. No discernment. No ownership. Just output.
And yet, a few stood out. I replied to them immediately. Their writing was clear, grounded, and specific. They understood our problems. They proposed solutions that felt lived-in, not theoretical. One application in particular was flawless. Perfect structure. Insightful framing. Sharp articulation.
I scheduled a call.
I left underwhelmed.
Either the person did not write the application, or they had become so skilled at directing AI that the thinking itself was outsourced. The conversation went in circles. Surface-level answers. No depth. No instinct. No intellectual grip. That was the moment it hit me.
AI is not just changing how we work. It is changing how we think. Or more accurately, how we stop thinking.
Around the world, educators are noticing the same thing. Studies are beginning to show that overreliance on generative AI weakens memory retention and critical reasoning. When people no longer struggle with ideas, they stop forming them. Cognitive scientists call this “cognitive offloading.” When machines carry the mental load, the brain adapts by doing less.
This is not speculation. It is how the human brain has always worked. Muscles that are not used atrophy. The mind is no different.
In workplaces, the consequences are already visible. People sound smart on paper and hollow in person. They can produce proposals but cannot defend them. They can summarize strategy but cannot build one. They confuse fluency with understanding. Confidence with competence.
AI has flattened effort. Everyone now sounds impressive. But when everyone sounds impressive, substance becomes the only differentiator.
The real crisis is not that AI will take jobs. The crisis is that it will expose who never truly had skills to begin with.
There is a dangerous illusion spreading. That prompt engineering is intelligence. That knowing how to get an answer is the same as knowing the answer. That polish equals depth. It does not.
True intelligence is not output. It is judgment. It is knowing what matters, what does not, and why. It is the ability to sit with ambiguity, connect ideas, challenge assumptions, and make decisions without a script.
So what do we do?
First, we must relearn how to think without assistance. Especially when it matters. Write first, then refine with AI. Think first, then validate. Speak without prompts. Defend ideas live. Bring back interviews, conversations, case discussions, whiteboards. Force thinking into the open.
Second, we must change how we evaluate talent. Written applications alone are no longer reliable. The future belongs to those who can explain, adapt, and reason in real time. Those who can say “I do not know” and then work their way forward.
Third, we must teach AI literacy, not AI dependence. Young people should learn how AI works, where it fails, and when not to use it. The most powerful professionals will not be those who use AI the most, but those who know when to ignore it.
Finally, we must protect boredom, struggle, and slowness. The discomfort of thinking is not a flaw. It is the process. Innovation has always come from friction. From sitting with problems long enough for insight to emerge.
AI is here to stay. It will get better. Faster. More convincing. But the future will not belong to those who outsource their minds. It will belong to those who use AI as a lever, not a crutch.
The sleepy brain is the real risk. And waking it up is now a personal responsibility. In a world where intelligence is automated, will we still choose to think?