THE GLOBAL BRAINTRUST
We are a human response to an
increasingly AI-influenced world.
We are a coalition of business, institutional and community leaders from different nations who present ‘the human response’ to the rapidly evolving challenges and opportunities emerging from an increasingly AI-powered world.
THINK. HUMAN. FIRST.
a human coalition
The End Game
As artificial intelligence becomes widespread, humans must ensure our collective interests are represented and safeguarded. New policies, challenges, and issues will continue to emerge with the adoption of AI systems worldwide, and through the Global BrainTrust we aim to present a human voice, a unified response, to navigate this rapidly shifting landscape. The Global BrainTrust is a broad, inclusive coalition of thought leaders across different sectors who offer guidance on the responsible and ethical adoption of AI, to uphold human dignity, justice, and the common good.
AI in the Economy
AI is transforming business through automation and improved efficiency. Organizations across manufacturing, agriculture, financial services, and transportation must be purposeful and secure in their adoption.
In healthcare, for instance, AI is analyzing vast datasets to identify trends and accelerate drug development. It also holds promise for personalized medicine and early disease detection.
AI in Policymaking
AI is being adopted by public sector organizations to improve service delivery and inform policy decisions. Governments must enact frameworks to govern its ethical use.
AI is being applied in law enforcement and surveillance through facial recognition, predictive policing, and threat assessment. Autonomous response systems also raise many unanswered ethical questions.
AI in the Environment
AI can analyze data to optimize energy usage, model climate change scenarios, monitor environmental changes, and find solutions for issues like pollution and endangered species.
From smart devices to media recommendations, AI is influencing how people communicate, shop, travel, and are entertained. Yet AI also raises a multitude of concerns about privacy and addiction.
Where We Come In
Artificial intelligence (AI) promises immense benefits, yet also poses complex challenges for humanity. To thrive in an AI-powered world, we must make the most of the opportunities that greater AI adoption brings. As adoption broadens, we champion greater communication, collaboration, understanding, empathy, peace-making and bridge-building.
We support those industries, platforms, and policies that are centered around humans. Our Global BrainTrust coalition will strive to connect, inform, support and present the human perspective. We will advocate for a future where human dignity, human potential and human communities are valued, and where our differences are celebrated. Our diverse backgrounds will be the strength of our coalition.
Who We Are
The Global BrainTrust convenes researchers, ethicists, policymakers, lawyers, entrepreneurs and civil society leaders. Together, we represent diverse backgrounds, ideologies and interests, modeling how groups with disparate views can have thoughtful dialogue to find wise solutions. Our membership reflects the full spectrum of human values and concerns regarding AI.
What We Do
When critical issues relating to AI emerge in the public discourse, the BrainTrust meets to discuss and draft unified policy recommendations. We aim to provide stakeholders in government, business and civil society with well-informed, nuanced perspectives on how best to advance AI for the benefit of all. Though our views may vary, we are united by shared principles – that AI must be developed and used in ways that:
– Respect human rights and dignity
– Mitigate bias and discrimination
– Protect privacy and autonomy
– Maintain transparency and accountability
– Promote prosperity broadly
– Augment rather than replace human capabilities
By upholding these principles, we can realize the profound good AI may do, while vigilantly guarding against potential misuse or unintended consequences. As a thought leader, your voice is essential to represent the diversity of human interests regarding this powerful technology. By lending your voice to the BrainTrust, you will help provide balanced policy guidance to anticipate and address ethical risks before they become crises.
How We Do It
01
ADVOCACY AND INFLUENCE
Establish a defined and transparent approach to advocacy by selecting BrainTrust members for their expertise, ethics, influence, and capacity to become advocates for greater humanity and a safer, more prosperous and sustainable world.
02
VALUABLE INSIGHTS
Develop insights to inform actions addressing current and emerging issues in AI adoption. Identify data gaps and advocate for research investment by decision makers, along with greater attention to testing, security and human-centered design.
03
MARKETING COMMUNICATIONS
Develop a marketing and communications strategy that includes relationships with journalists and media outlets, as well as partnerships with other influential agencies and institutions to increase media presence and share of voice.
“We face a momentous opportunity – and responsibility – to steer AI toward a future of empowerment and shared prosperity. By working together in the spirit of good faith, understanding and wisdom, we can help realize AI’s benefits, while mitigating the risks.”
Sana Bagersh, Founder
FACTS
Over 90% of customer interactions will be handled by AI by 2025, according to Servion Global Solutions.
Autonomous vehicles, powered by AI, are projected to dominate the roads, with over 21 million autonomous cars expected by 2030.
INSIGHTS
What Tech Leaders Say
ELIEZER YUDKOWSKY
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
SATYA NADELLA
“We’ve seen how AI can be applied for good, but we must also guard against its unintended consequences. Now is the time to examine how we build AI responsibly and avoid a race to the bottom. This requires both the private and public sectors to take action.”
RAY KURZWEIL
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.”
TIM COOK
“What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.”
ELON MUSK
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”
JEFF BEZOS
“We’re at the beginning of a golden age of AI. Recent advancements have already led to invention that previously lived in the realm of science fiction — and we’ve only scratched the surface of what’s possible.”
Need Answers?
Common AI Concerns
Could AI systems become so advanced they harm humans?
There are concerns about AI potentially becoming uncontrollable and intentionally or unintentionally causing damage. Safeguards like human oversight and testing are important.
What if AI makes mistakes or has biases?
AI is based on data and algorithms created by humans. Errors or biases in the data or code could lead to problems like discriminatory decisions. Ongoing auditing is required.
How susceptible is AI to cyberattacks?
AI systems and the data they use could be vulnerable to hackers looking to steal information or manipulate the AI’s behavior. Security measures need to keep pace with AI advances.
Will AI take people’s jobs and increase unemployment?
While AI can automate tasks and displace certain jobs, history suggests that technology ultimately creates more opportunities than it eliminates. Transition programs can help workforce adaptation.
Does the use of AI undermine privacy and civil liberties?
AI relies heavily on collecting and analyzing data, which heightens privacy risks and raises ethical questions around surveillance. Laws governing appropriate data use are still evolving.
How do we prevent algorithmic bias in AI systems?
Without proper training data and protocols, AI risks automating and amplifying harmful biases that discriminate against minorities.
Can the “black box” nature of AI be dangerous?
Some advanced AI systems are complex and opaque, making it hard to understand internal calculations. Lack of transparency could enable harm, requiring ongoing oversight.
Are you a business, community or thought leader?
We welcome diversity. If you want to participate, apply.