Vibe coding is good; vibe thinking is not.

Modern development lives on a razor’s edge: on one side, incredibly powerful tools that multiply what one person can produce; on the other, the constant temptation to delegate to them not only the work, but the effort of thinking itself.

When tools amplify the brain

The excitement around vibe coding has, above all, been about getting people to accept new ways of working in the age of AI, and learning to use its leverage instead of merely enduring it. It means working in an environment where assistants reduce operational friction so you can focus the core of your mental energy on what matters most: vision, experience design, and architecture.

How the brain really learns

The brain does not learn by simply "consuming" answers. It learns by moving through three building blocks that reinforce one another:

- the theoretical block, where you expose your brain to new concepts through courses, documentation, and articles;
- the practical block, where you apply those concepts in concrete contexts such as exercises, projects, and real bugs;
- the metacognitive block, where you revisit what you have done to analyze your mistakes, adjust your mental models, refine your intuition, and, above all, build and operate the automatisms formed through prior learning and repetition, so that you can free up attention for new things.

The outsourcing trap: from GPS to AI

This mechanism is not new: GPS has already dulled our sense of direction, while the Google effect has changed the way we manage information, leading us to remember where to find it rather than the information itself. With LLMs, it is no longer just memory that is being outsourced, but thinking itself: every prompt becomes an attempt to save ourselves a line of reasoning, a calculation, and by extension a piece of our intelligence.

Answer engines that immediately provide the "right answer" short-circuit this loop entirely. When you drop a problem into a black box, accept the solution, and move on, you skip the most valuable moment in the learning process. At that point, AI no longer serves thought; it replaces it.

Seniors vs juniors: same tool, opposite effect

What is striking is not the tool, but the way it is used. Some recent studies show that senior developers benefit enormously from AI: they use it to generate code and explore alternatives, but then spend a great deal of time reviewing, understanding, and adjusting what was produced. The tool acts as a multiplier of an already existing skill set: the senior retains a strong awareness of what they are doing, especially thanks to the learning and metacognitive work they have already done.

Juniors, on the other hand, can often do little more than copy and paste the answers they are given, without mastery of the code or its environment. Without education on how these AI systems actually work, it is difficult to do otherwise… and the illusion of competence collapses as soon as they face difficulty or complexity that the AI cannot solve.

It is, however, possible to build and customize AI systems that assist and teach effectively… but those are not "traditional" conversational chatbots such as ChatGPT, Gemini, Perplexity, and the like, and they are not what we are talking about here.

The Terminator scenario

We all have that scenario in mind where a superintelligence turns against us and decides to wipe us out, Terminator-style. Compared with what is happening now, I almost find that version cute.

In other words, I would rather face a machine uprising than a slow erosion of our abilities. Because that is the real risk: at scale, this produces students who no longer know how to learn, developers who are incapable of maintaining their own code, and companies that optimize for short-term output at the expense of long-term competence, creating a vicious cycle of stupidity.

Is that really the evolution we want?