AI is taking over more and more of our tasks. Are our cognitive abilities suffering as a result, and does this even increase the risk of dementia?
Artificial intelligence has finally arrived in our everyday lives. According to a survey by the industry association Bitkom, in 2025, 67 percent of Germans used applications like ChatGPT, Microsoft Copilot, or Google Gemini at least occasionally to write texts, generate images, or create programming code. But while algorithms make thinking and working easier for us, there is growing concern that outsourcing our mental work to AI is causing our brains to atrophy. Does using ChatGPT and the like actually increase our risk of dementia in the end?
The current state of science shows that artificial intelligence is a double-edged sword for the human brain. What matters is not whether we use it, but how.
"Digital Dementia" and Cognitive Atrophy
The fear of mental decline due to technology is not new. As early as 2012, brain researcher Prof. Dr. Manfred Spitzer coined the term "Digital Dementia" with his book of the same name. His thesis: If we no longer have to remember anything and outsource everything to digital devices, our brains atrophy. After all, the law of neuroplasticity applies in neuroscience: use it or lose it.
Spitzer's theses drew sharp criticism in the academic world, where critics considered them too alarmist and methodologically undifferentiated. Yet the rapid development of generative AI has given his basic idea a completely new relevance.
With generative AI, this warning has reached a new dimension. We are no longer outsourcing just pure factual knowledge, but also complex thinking and analysis processes. An MIT study titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task" provides corresponding data. The study divided 54 participants aged 18 to 39 into three different groups who were asked to write essays under specific conditions: one group worked completely without aids ("Brain-only"), the second group was allowed to use a conventional search engine, and the third group used ChatGPT.
The results indicate fundamental changes in information processing:
- Massively reduced neuronal connectivity: The EEG recordings showed that global brain connectivity in AI users was reduced by almost half compared to the "Brain-only" group. The alpha and theta frequency bands were particularly affected: alpha waves are strongly linked to attention control and the suppression of irrelevant information, while theta waves are important for memory consolidation and cognitive control. The "Brain-only" participants showed the strongest and most widely distributed neuronal networks, suggesting intensive, holistic involvement of the brain. Search engine users showed moderate connectivity, because they still had to search for, evaluate, and assemble information themselves. The AI users showed by far the weakest neuronal connectivity: brain activity scaled inversely with the amount of work delegated to the external tool.
- Reduction of germane cognitive load: A central selling point of AI is increased productivity, and indeed the participants supported by ChatGPT wrote their texts on average 60 percent faster than the control groups. This efficiency gain came at a high price, however: the germane cognitive load – the essential mental effort that is neurobiologically necessary to transform raw information into real, retrievable knowledge through deep processing – dropped by 32 percent in AI users. The brain shifted into a state of passive monitoring instead of active construction.
- Dramatic memory loss: The lack of deep engagement with the material led to marked deficits in the participants' procedural and declarative memory. Shortly after the task, 83 percent of the AI users could not recall a passage they had just written, or copied, for their own essay. These results support the thesis of digital amnesia: what the brain has not laboriously formulated itself, it does not store.
- Loss of ownership and growing inertia: In surveys, the LLM users reported by far the lowest sense of authorship over their texts and had great difficulty quoting correctly from their own, supposedly self-written work. Worse still, this effect intensified over time: during the four-month study, the AI users became more cognitively sluggish with each subsequent essay and, toward the end of the investigation period, often resorted to simple copy-and-paste without meaningfully editing the text.
To test how long these effects persist, the researchers conducted a fourth session in which the conditions were swapped. Former AI users suddenly had to write without aids ("LLM-to-Brain"). These participants showed significantly reduced alpha and beta connectivity, a sign of acute cognitive under-engagement, and considerable initial difficulty activating their own cognitive resources. Participants who had previously worked without aids and were now allowed to use AI, by contrast, retained higher memory performance and stronger activation of the relevant brain areas.
The MIT researchers conclude from this data that continuously outsourcing mental effort to an AI produces a cumulative "cognitive debt". The further automation progresses and the more often complex tasks are handed to LLMs, the less the prefrontal cortex is exercised. This points to lasting plastic effects that reach far beyond the immediate task and, over time, accustom the brain to running at a low level of engagement.
Cognitive psychologists warn in this context of a "Memory Paradox": Because we rely on smart tools, we shy away from mental effort. But precisely this effort, namely the arduous struggle for a solution, making and correcting mistakes, builds those robust neuronal networks that we need for true critical thinking and intuition. Anyone who shortcuts this phase with AI is not building up a cognitive reserve for old age.
LLMs Can Suffer a Kind of Dementia, Too
Blind trust in AI takes on a certain irony when one considers that LLMs can be affected by a kind of dementia themselves: Researchers had AI models such as GPT-4o and Claude 3.5 Sonnet take the MoCA test, which clinics use for the early detection of dementia. The result: Almost all chatbots showed gaps in attention, memory, and executive function severe enough that a human with the same scores would be diagnosed with "mild cognitive impairment". We are, in other words, outsourcing our thinking to systems that are themselves cognitively unreliable.
How Technology Actually Protects the Brain
Do we now have to switch off all computers to stay mentally fit? Definitely not: broad-based epidemiological data paint the opposite picture.
In a study by Baylor University and the Dell Medical School, data from more than 400,000 people over the age of 50 were examined. The surprising result: Active use of computers, smartphones, and the internet did not increase the risk of dementia; on the contrary, it was associated with a 58 percent lower risk of cognitive impairment.
How can this contradiction be explained? The researchers attribute this to two central mechanisms:
Technological Reserve: Technology is demanding. Learning new software, navigating through updates, and solving computer problems challenges the aging brain. This active engagement acts like lifelong brain jogging and builds cognitive resilience.
Digital Scaffolding: Digital tools help older people master their everyday lives independently – from calendar reminders to video calls with family. This reduces chronic stress and protects against dementia.
AI as Active Brain Jogging?
Artificial intelligence and algorithms can even be used specifically for dementia prevention if they function not as a substitute for thinking, but as a training partner.
The best evidence comes from a study at Johns Hopkins University, in which seniors completed computer-aided training in visual processing speed. The key feature: The program was adaptive. Like a good AI, it adjusted the difficulty in real time to each participant's daily form and kept the brain constantly at its performance limit. The result was remarkable: Even 20 years after this brief training, participants had a 25 percent lower risk of developing dementia than the untrained control group.
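The adaptive mechanism described above can be illustrated with a simple "staircase" procedure, a standard technique in perceptual training software: after each correct response the task gets harder, after each error it gets easier, so the trainee hovers near their personal limit. This is only a minimal sketch of the general idea, not the actual study software; the function names and parameters are hypothetical.

```python
# Minimal sketch of an adaptive staircase difficulty controller.
# NOTE: illustrative only -- not the program used in the Johns Hopkins study.

def run_adaptive_session(respond, start_ms=500, floor_ms=16, step=0.8, trials=20):
    """Present stimuli at varying display durations.

    respond(duration_ms) -> True if the trainee answered correctly.
    A correct answer shrinks the duration (harder); an error grows
    it again (easier), keeping the trainee near their threshold.
    Returns the list of (duration, correct) pairs.
    """
    duration = start_ms
    history = []
    for _ in range(trials):
        correct = respond(duration)
        history.append((duration, correct))
        if correct:
            duration = max(floor_ms, duration * step)   # make it harder
        else:
            duration = min(start_ms, duration / step)   # make it easier
    return history

# Example: a simulated trainee who only succeeds above a 100 ms threshold.
if __name__ == "__main__":
    log = run_adaptive_session(lambda ms: ms > 100)
    print(f"final presented duration: {log[-1][0]:.0f} ms")
```

Because the step size is multiplicative, the controller descends quickly from the easy starting level and then oscillates tightly around the trainee's true threshold, which is exactly the "always at the performance limit" behavior the study describes.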
Conclusion: It Depends on the "How"
Does AI increase the risk of dementia? Science says: It is in our hands. There is no neurobiological law stating that code, machine learning, and algorithms are per se harmful to the human brain. And it should not be forgotten that generative AI has only been in broad use for a few years, so long-term studies simply do not exist yet.
Instead, the research literature forces us to distinguish consciously between active cognitive challenge through technology and passive cognitive outsourcing to technology.
If we use generative AI as an intellectual shortcut, a comfortable crutch that spares us independent writing, thinking, and problem-solving, we risk letting our mental abilities atrophy. Neuronal measurements clearly show that the brain responds to passive consumption and outsourced effort with degradation. Especially in young people, whose brains are still maturing, this can have lasting consequences for brain development.
However, if we use technology and AI as a constant challenge to educate ourselves further, as an adaptive training tool for our mind, and as a bridge to stay socially connected, it becomes one of our strongest shields against cognitive decline in old age.