The Global BrainTrust Sounds Alarm about AI’s Risks to Human Cognition
GRASSROOTS ADVOCACY GROUP URGES SPECIAL SAFEGUARDS FOR EDUCATION
JULY 2025
Seattle, USA – New research from MIT has revealed disturbing neurological and psychological effects of unchecked AI usage, prompting the Global BrainTrust to raise a red flag and call for greater oversight of AI’s effect on human cognitive performance.
The new study suggests that generative AI tools like ChatGPT may be altering human cognition in fundamental ways, with MIT neuroscientists reporting that regular use of AI assistants affects memory retention and critical analysis skills.
The Global BrainTrust, an international grassroots coalition of technologists, academics, business leaders, ethicists and community leaders, advocates for emerging technologies to remain firmly anchored in human service and safety. The group points to brain-scan results in the MIT study suggesting that when users know AI assistance is available, they offload their own mental tasks, reducing their problem-solving abilities.
The Global BrainTrust proposes a systematic three-pronged response: mandatory “AI nutrition labels” explaining cognitive risks on all consumer interfaces; funded research into mitigating neurological impacts; and educational reforms that teach AI as a complement to, rather than replacement for, human intelligence.
“The implications are profound,” warns Sana Bagersh, Founder of the Global BrainTrust. “We’re seeing the first evidence of what is being termed ‘cognitive disuse atrophy’ – the weakening of essential mental muscles when we delegate too much thinking to AI. This isn’t just about laziness; it’s about permanent changes in how the human brain functions, and the risk is that this could have long-term implications for our species.”
Students relying on AI for schoolwork are at even greater risk, losing research and synthesis abilities as well as fundamental critical thinking skills. The right safeguards must be instituted to protect the integrity of critical learning, so that incoming information is subjected to rigorous internal processing.
“AI has become the easy way out, and that convenience of instant answers comes at the cost of losing the ability to formulate the right questions,” explains Prof. Ahmed Banafa, Senior Technology Advisor for the Global BrainTrust, who is a Professor at San Jose State University and a lecturer at Stanford and Berkeley. “We’re observing a generation that may excel at using AI tools but struggles profoundly with original thought, and that is the very skill that makes us human.”
The risks extend to effective decision-making, which requires measured, independent thinking, and to ‘emotional intelligence’ – the ability to interpret the subtle social cues that are integral to communication.
“What’s disturbing is that users are becoming so comfortable with AI that they have dropped their questioning impulse. What we’re seeing is a greater tolerance for AI’s flawed answers, which lack nuanced human judgment,” said Banafa.
Another alarming repercussion of AI overuse and overdependence on algorithmic suggestions is the suppression of the natural human creativity and ideation from which true innovation emerges.
“We stand at a crossroads,” Bagersh concludes. “History will judge whether we stepped in to introduce the necessary safeguards, or became so enamoured with technology that we failed to act. AI could be the greatest tool for human advancement, but only with the right governance can we unleash a tech-enabled future that serves all of mankind.”
The Global BrainTrust is a grassroots advocacy platform that brings together a wide range of stakeholders from different nations – a diverse assembly of voices and perspectives – to present a broader ‘human’ view of the opportunities and concerns of an increasingly AI-powered world.
Ends