The Brain’s Gym: Why AI is the Ultimate Cheat Code That Makes You Lose the Game

Author: Syeda Humra Gilani

Program Studi Magister Psikologi, Fakultas Psikologi, Universitas Padjadjaran 

Imagine you’re at a social gathering, and someone asks you a question about an article you “wrote” with the heavy assistance of an AI. You open your mouth to explain a concept from that article… but nothing comes out. The concept is in the “article” you wrote, but it’s not in your “mind”.

We are witnessing a strange new phenomenon. We are producing some of the most polished reports, articles, and essays in human history, yet we are becoming the least articulate versions of ourselves. Welcome to the era of Cognitive Offloading, where our brain’s muscles are beginning to stiffen from lack of exercise while the “ghost in the machine” handles our cognitive tasks. Offloading cognitive work onto AI can increase efficiency and free up resources for complex problem-solving, but the risks are equally serious: eroding critical thinking, memory, and independent reasoning when overused.

The Performance Paradox: Looking Smart vs. Being Smart

Educational psychologists frequently cite the notion of the Performance Paradox: the ability to produce high-quality output while one’s actual cognitive abilities decline. It occurs when a student uses generative AI to produce an A+ paper but fails to meet the learning objectives of the assignment.

According to research by Lodge et al. (2025), when we outsource the “formulation” stage of writing — the struggling part where we battle to find the appropriate words — we bypass the crucial process that produces deep learning. We get the high-grade “performance,” but the “learning” never actually happened. This contradiction marks the gap between temporary performance and permanent learning.

In educational psychology, “performance” refers to the observable output during or immediately after a task, while “learning” refers to the long-term, stable change in knowledge or capability. When AI is used to produce a sophisticated argument, the performance is visible, but because the process of producing that argument was handled mainly by an AI algorithm, the student’s brain did not undergo the structural changes necessary for “learning.” Long-term ability has been exchanged for a short-term outcome.

Additionally, the performance paradox creates a feedback loop that devalues the learning process. When the grading system expects and rewards perfect products, students are incentivized to prioritize speed rather than going through a process of struggle and cognitive growth. This dependence may gradually undermine students’ confidence in their own abilities, causing them to question whether they can produce high-quality work without AI assistance. As a result, their intellectual growth may become limited by the complexity of the prompts they are able to create.


The Psycholinguistic “Struggle”: Why Friction is Your Friend

To understand why an instant AI response is a problem, and why struggling to think is not, we have to look at how our brains process language. Psycholinguists talk about the Mental Lexicon — our brain’s internal, hyper-connected dictionary.

When you have trouble recalling the word “serendipity,” your brain is working hard. It is strengthening the link between that idea and the word by sending impulses across neural circuits. Robert and Elizabeth Bjork referred to this as “Desirable Difficulties”. According to them, in order for learning to be “durable,” it must be somewhat difficult. Their research unveiled a counterintuitive truth: the easier a task feels, the less you are actually learning.

Generative AI is a “processing fluency tool”: it removes the mental effort. The Bjorks differentiated between “storage strength” (how well something is embedded in memory) and “retrieval strength” (how easy it is to recall when needed). AI creates instant, effortless retrieval — the response appears immediately on screen, giving a false sense of accomplishment. But since no cognitive effort went into producing the information, the storage strength is nearly zero. In educational terms, this is “shallow processing”: the brain treats AI-generated text as disposable data rather than integrated knowledge.

When an AI chatbot finds the perfect word for you before you’ve even finished thinking, it’s like a gym helper lifting weights for you. It feels great, but your actual muscles aren’t growing. When you are thinking hard to write a report, you are trying to connect your ideas to what you already know. When you use AI instead, that linkage is created by a statistical model rather than by your personal experience.

By removing the “effort” of thinking, AI might be causing Lexical Atrophy — a shrinking of our active vocabulary and a decline in our internal ability to construct complex arguments. As a result, you might “know” what a report says, but you have not “integrated” that information into your own mind.

Additionally, this processing fluency and lack of mental effort impairs “transfer of learning.” Students who depend heavily on AI for writing assignments often find it hard to apply the concepts they “wrote” about to new, unrelated problems. Because the learning and writing were prompted instantly from AI rather than built through their own cognitive effort, the knowledge becomes “fragile.” It exists only as long as the prompt is active, leaving the student intellectually stranded when required to think independently in real time, such as in live debates or workplace problem-solving.


The “Metacognitive Laziness” Trap

A few contemporary studies have explored how generating information through AI creates an Illusion of Explanatory Depth: a cognitive bias in which people believe they understand complex mechanisms or concepts much better than they actually do. Because the answer is so easily accessible at our fingertips, our brains experience a sense of “fluency,” tricking us into thinking the knowledge is actually stored in our long-term memory.

This illusion compromises metacognitive monitoring — the self-reflective process of assessing how well a task is going, whether we are confused or have mastered a topic. Normally, the struggle of writing reveals the gaps in our understanding; if you can’t explain it, you don’t know it. AI disguises these gaps and delivers a final product that looks so perfect that we stop asking the “why” behind the logic and mistakenly believe we are competent enough to explain that complex topic ourselves.

Consequently, this illusion of competence leads to Metacognitive Laziness, where we assume we have mastered a concept just because we were able to prompt AI to explain it. We stop reviewing our own work. We no longer challenge the reasoning. We become “passive editors” instead of “active authors.” The result is a generation of learners who are “confident but hollow”: capable of producing professional-looking results but unable to explain the underlying principles. This is especially risky in the context of digital media. If we cannot analyze the language we “write,” we lose the ability to defend our own beliefs and ideas.


The Ghost in the Machine: Can We Co-Exist? 

So, are we destined to become a species that can converse in bullet points and “professional” tones created by artificial intelligence? Not necessarily… The goal is to alter how we interact with AI, not outlaw it. We must shift our pedagogical priority from Cognitive Offloading (assigning the task) to Cognitive Augmentation (using the technology to extend the brain).

  • The “Human-in-the-Loop” Rule: Use AI for interrogation, not generation. Students should be taught to prompt with “Challenge my argument” or “Explain why my second paragraph is weak,” rather than asking AI to “write this.” This keeps people in the “cognitive loop” by compelling them to return to the conceptualization and evaluation stages.
  • Embrace the Imperfection: Students must perform the formulation stage manually. They should write a “messy” first draft to fire their own neural pathways before using AI to refine the result. Those messy imperfections are evidence that the brain was actually activated.

Language is more than a vehicle for moving information from one place to another. It is the very tool we use to think. If we outsource the language, we risk outsourcing the thought. The next time you’re tempted to let an AI “polish” your thoughts until they are unrecognizable, remember that the “struggle” isn’t a glitch in the system. The struggle is the system. Don’t let Artificial Intelligence have the final say; keep your mind active and your mental lexicon chaotic.

References

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society (pp. 56–64).

Lodge, J. M., et al. (2025). The cognitive paradox of AI in education: Between enhancement and erosion. Frontiers in Psychology, 16, 1550621. doi:10.3389/fpsyg.2025.1550621


