Are we losing the ability to think due to LLMs?


In the age of large language models (LLMs) and generative AI, we are witnessing an unprecedented transformation in how knowledge is produced, disseminated and consumed. These tools can summarize dense texts, write code, draft legal contracts, or respond to philosophical questions in seconds.
LLMs, we are told, make us more efficient, simplify complex work, automate mundane tasks and allow us to focus on what matters. But as we marvel at their capabilities, a pressing concern emerges: Are these models genuinely boosting efficiency, or are they subtly eroding our capacity for independent thought, judgment and critical reflection?
Efficiency is not a neutral term. It reflects values: what we choose to prioritize, what we define as valuable, and what we are willing to sacrifice. The current narrative around generative AI treats efficiency as synonymous with progress. It suggests that the faster something is done, the better. But faster is not always better. And not everything that can be automated should be.
The popular belief is that LLMs "free up" cognitive bandwidth. That is, they allow humans to delegate repetitive thinking to machines and reserve their energy for more reflective tasks. But the opposite is often true. The more intellectual labor — writing, summarizing and decision-making, for example — we hand over to AI, the less we engage with it ourselves. Instead of reserving our thoughtfulness for higher-order tasks, we will increasingly lose the opportunities, and perhaps even the ability, to think critically.
An apt example is the growing volume of synthetic content online. Not only are images and text being fabricated by machines, but so too, often, are the public reactions to them. Content no longer spreads because it is true or relevant, but because of its emotional pull. Fake images spark fake outrage in comments, which then fuels real engagement from users who cannot distinguish between what is human and what is AI-generated.
The result is a synthetic discourse loop that simulates social consensus. "Everyone is talking about it," we hear, when in fact no one is: the content, and the reaction to it, have been manufactured to serve the profit-driven strategies of platforms. Their goal is not informed conversation but sustained attention, which translates into short-term revenue.
This is not just a technical challenge of detecting what's real. It's an epistemological crisis. When falsehoods are propped up by simulated reactions and amplified by algorithms optimized for attention, the notion of public discourse itself becomes unstable. Our sense of what others believe is no longer based on shared experience or deliberation, but on machine-curated illusions. In such an environment, critical thinking doesn't just decline; it is structurally discouraged.
So what do we really mean by "efficiency"? If it means cutting the time it takes to write a report, perhaps we have succeeded. But if it means replacing the intellectual effort that creates depth, coherence and reflection, then it's not a gain; it's a loss. The moment we accept LLMs as substitutes for thought, rather than aids to it, we begin to erode the very conditions under which human reasoning thrives: questioning, dialogue, uncertainty and contradiction.
This is particularly dangerous at a time when democratic values are at stake, when critical reflection and informed disagreement are essential. The legitimacy of democratic processes relies on citizens engaging with ideas, evaluating claims, and forming judgments. But when engagement is replaced by reaction to machine-generated one-liners, that is, content crafted for manipulation rather than understanding, our political agency is undermined. We don't just risk being misled; we risk no longer knowing what it means to evaluate truth for ourselves.
There is a temptation to see LLMs as neutral tools. But they are not. They are shaped by the data they are trained on, the goals of their developers, and the market incentives that drive their deployment. Their outputs reflect a history of biases, omissions and assumptions that are often invisible to users. And the more seamlessly these outputs integrate into our workflows, the more easily they escape scrutiny. In this way, the danger is not only what the AI says, but that we stop asking how it came to say it.
To call this "efficiency" is to ignore what is actually happening: a transfer of epistemic authority from humans to machines, without the structures of accountability and transparency that should accompany such a shift. We are being asked to trust a system we cannot interrogate, on the basis that it sounds plausible and delivers quickly.
But speed is not the same as understanding. And plausibility is not truth.
Instead of fetishizing efficiency, we need to refocus on resilience: the capacity of individuals and societies to question, adapt and resist manipulation. This means investing in AI literacy — not just how to use the tools, but how to critique them. It means recognizing that no AI can replace the ethical, cultural and contextual dimensions of human reasoning. It means being willing to slow down, to question the output, and to value the effort of thinking as much as the result.
Governments, tech companies, and citizens each have a role to play. Regulation is necessary, but it is not sufficient. The foundation of responsible AI is not technical compliance; it is ethical intent. That begins with "question zero": When should AI be used? Not every problem needs an AI solution, and not every deployment delivers a benefit. Responsible AI is not AI-first; it is people-first. It starts by asking why, not by rushing to deploy. Tech developers must embed responsibility into the very design of systems, not as an afterthought but as a guiding principle.
More important, individuals must be empowered to question AI outputs, understand their implications, and resist the normalization of passive dependence. Only by centering human judgment and agency can we ensure AI serves society, rather than reshaping it to fit commercial imperatives.
There is no turning back the presence of LLMs in our lives. But we can choose how to live with them. The question is not whether they will think for us, but whether we will let them define what it means to think at all. Efficiency, in the true sense, should not be about doing more with less thought. It should be about doing better, with deeper attention, stronger ethics and sustained human insight. Anything less is not progress. It is surrender.
The author is a professor of computer science and the director of the AI Policy Lab at Umeå University, Sweden.
The views don't necessarily reflect those of China Daily.