Growing Concern Over Hard-to-Detect Fake AI Content
Advocacy Group Raises Red Flag Over Proliferation of High-Risk AI Content
September 2025
Seattle, USA – Rapid advances in AI-generated content are making it increasingly difficult for humans to identify fake information used to fuel scams and cybercrime, warns the Global BrainTrust, a grassroots group formed to advocate for human interests in AI adoption.
The Global BrainTrust calls for greater accountability for deepfakes on social media platforms, the removal of harmful content targeting real people, and a global governance framework to enforce AI transparency standards.
AI-generated images and videos are being used to spread false information, manipulate public opinion, and perpetuate harmful stereotypes, warns Professor Ahmed Banafa, Senior Technology Advisor at the Global BrainTrust. This is especially alarming, he notes, because such content can be disseminated faster than ever before through social media.
“There needs to be more concerted collaboration among stakeholders to upgrade security software platforms,” said Sana Bagersh, Founder of the Global BrainTrust. “At the same time, there needs to be broader education of the public about the evolving risks, so that people can make informed decisions and adjust their usage patterns.”
“Recently, a troubling wave of fabricated videos has emerged impersonating President Ibrahim Traoré of Burkina Faso, attributing to him words and speeches he has never spoken,” said Professor Brie Alexander, advisor to the Global BrainTrust. “We need to criminalize political deepfakes and establish penalties for both creators and distributors. We also need to require clear labelling of AI-generated or manipulated media and enforce traceable digital watermarks to ensure accountability.”
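For a sense of what traceable labelling could look like in code, the sketch below attaches a signed provenance tag to an image’s metadata. It is a minimal illustration only: the field names and signing key are hypothetical, a metadata tag can be stripped and is far weaker than a robust pixel-level watermark, and no deployed standard such as C2PA is implied.

```python
# Minimal sketch: attach a signed, traceable provenance label to a PNG.
# The "ai-content-label" field name and the signing key are hypothetical;
# a metadata tag can be stripped, so this illustrates the idea of
# traceable labelling, not a production watermarking scheme.
import hashlib
import hmac
import json

from PIL import Image, PngImagePlugin  # pip install Pillow

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical key

def label_image(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a JSON provenance record, signed with HMAC-SHA256."""
    record = {"ai_generated": True, "generator": generator}
    payload = json.dumps(record, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

    info = PngImagePlugin.PngInfo()
    info.add_text("ai-content-label", payload)
    info.add_text("ai-content-signature", signature)
    Image.open(src_path).save(dst_path, pnginfo=info)

def verify_label(path: str) -> bool:
    """Check that an embedded label exists and its signature matches."""
    meta = Image.open(path).text  # PNG text chunks, if any
    if "ai-content-label" not in meta:
        return False
    expected = hmac.new(SIGNING_KEY, meta["ai-content-label"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, meta.get("ai-content-signature", ""))
```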
Professor Abdul Rahim Sabouni, advisor to the Global BrainTrust, pointed to the spread of deception into education. “As AI-generated content becomes increasingly indistinguishable from authentic human output, higher education faces a pivotal challenge: preserving academic integrity while embracing technological advancement. Institutions must urgently invest in digital literacy, not just detection tools.”
“We need to teach students how to critically evaluate sources, understand algorithmic bias, and navigate a world where ‘truth’ may be synthetically constructed. The concern isn’t merely about identifying fakes—it’s about cultivating discernment in a generation raised alongside intelligent machines,” he added.
Ahsan Ahmad, a member of the Global BrainTrust, estimates that almost 80% of his YouTube feed is AI-generated content using the voices and names of known personalities, such as Nouman Ali Khan, Mufti Menk, and even Mel Robbins. “The videos aren’t necessarily bad; they sound like the usual self-help advice. But after a while I start thinking, wait, that doesn’t really sound like something they’d say. Yet it’s not labelled as AI.”
Ahmad said that, judging from the volume of likes and comments, people tend to believe the content is real. “Right now it’s being used to make money, but it’s scary how easily the same tech could be used for propaganda. We need to mark AI-generated content so that consumers know.”
Another member of the Global BrainTrust, Kaoutar Najad, explains that the incorrect use of AI can also result in fake news and misinformation. “For instance, if a user explicitly directs an AI agent to prioritize trusted sources, such as reputable institutions or verified articles, the risk of harmful outputs can be significantly reduced.”
Najad applauds platforms like DeepSeek that are transparent in their display of ‘Search’ and ‘DeepThink’ modes. “This distinction allows users to understand how the AI processes information and gives them the ability to refine or redirect queries for more reliable results. By actively engaging with AI in this way, guiding its reasoning and verifying its sources, users can foster healthier, more accurate outcomes while mitigating the spread of misinformation.”
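Najad’s suggestion of explicitly steering an agent toward trusted sources can be sketched in a few lines. The example below assumes an OpenAI-compatible chat API; the model name and the trusted-domain list are illustrative placeholders, and a system prompt reduces rather than eliminates the risk of unreliable output.

```python
# Minimal sketch: steer a chat agent toward trusted sources via the system
# prompt. Assumes an OpenAI-compatible API (pip install openai); the model
# name and the trusted-domain list are illustrative placeholders only.
from openai import OpenAI

TRUSTED_DOMAINS = ["who.int", "reuters.com", "nature.com"]  # hypothetical list

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "When answering, prefer information attributable to these sources: "
    + ", ".join(TRUSTED_DOMAINS)
    + ". Cite the source for every factual claim, and say 'unverified' "
    "when a claim cannot be traced to one of them."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Summarize recent claims about deepfake "
                                    "videos of public figures."},
    ],
)
print(response.choices[0].message.content)
```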
Professor Banafa agrees that the lack of transparency and interpretability of AI algorithms can be a problem. “While AI-generated content may appear convincing, it can be difficult to understand how the algorithm arrived at the specific output. This can lead to a lack of accountability and potential bias in the content produced, as well as difficulty in detecting any harmful content generated by the AI.”
Bagersh added that workers across all industries need upskilling to establish and follow best practices. “This includes boosting critical thinking capabilities, so those who work in the information pipeline can question unusual requests, even from seemingly trusted sources,” she said. “We need advanced verification solutions and increased vigilance to spot inconsistencies and discrepancies.”
Ends