The Launch & Vision of Humanity AI
On October 14, 2025, a broad coalition of leading philanthropic foundations announced the launch of Humanity AI, a $500 million, five-year initiative aimed at ensuring that artificial intelligence develops in ways that serve people and communities, not just markets and corporations.
The coalition brings together ten major funders: the MacArthur Foundation, Ford Foundation, Omidyar Network, Mozilla Foundation, Mellon Foundation, Lumina Foundation, Kapor Foundation, Doris Duke Foundation, David & Lucile Packard Foundation, and Siegel Family Endowment.
The fund will be hosted by Rockefeller Philanthropy Advisors, which will oversee grant distribution, coordination and impact evaluation. The initiative’s first round of funding is expected to roll out in early 2026, with a focus on building global capacity for responsible AI governance and human-centered technology design.
At its core, Humanity AI seeks to reimagine how artificial intelligence interacts with public life, ensuring that technological progress enhances democracy, equity, creativity and safety.

Why Does a People-Centered AI Initiative Matter Now?
Artificial intelligence now influences nearly every sphere of daily life, from education and employment to media, healthcare and governance. But as AI systems grow more powerful, so do the risks of bias, exclusion, surveillance and cultural homogenization.
Humanity AI was conceived as a corrective to this imbalance, placing human needs and ethical design at the center of AI innovation. The coalition’s founders argue that today’s technology landscape is overly shaped by commercial incentives and profit-driven models.
This initiative reframes AI’s purpose, not as a replacement for human decision-making, but as a partner that amplifies human potential while protecting rights, dignity and creative freedom.
Where Will Humanity AI Focus Its Grants?
Humanity AI’s grant strategy centers on five key focus areas, designed to intersect with real-world social challenges:
- Democracy & Civic Systems: Strengthening transparency, inclusion and accountability in algorithmic governance.
- Education & Learning Equity: Expanding access to AI-powered learning while protecting student data and pedagogical diversity.
- Culture & Creative Rights: Supporting artists, journalists and creators affected by synthetic media and intellectual property misuse.
- Labor & Economic Inclusion: Ensuring that AI augments work, rather than replacing or devaluing it.
- Safety, Privacy & Security: Safeguarding people’s identities and personal data in an AI-driven world.
This multidimensional approach aims to bridge the gap between technological power and societal well-being, creating space for innovation rooted in fairness and participation.
A Global Turning Point for AI Governance
Humanity AI’s launch comes at a crucial moment in the global conversation around AI regulation. Governments and policymakers worldwide are racing to define boundaries for responsible innovation, from the European Union’s AI Act emphasizing transparency and labeling, to the US Executive Order on Safe and Trustworthy AI and India’s Digital India AI Mission, which focuses on inclusion and indigenous innovation.
Amid this momentum, Humanity AI fills a critical gap: the civil society and human rights perspective. It ensures that the voices shaping AI’s evolution include educators, creators and communities, not just corporations and regulators.
This intersection of philanthropy, governance and technology marks a shift toward multi-stakeholder stewardship, where the future of AI is designed collectively, with humanity at its center.
Challenges & Risks to Watch
The Humanity AI fund represents one of the largest philanthropic collaborations on technology to date, but its goals are ambitious. Several challenges lie ahead:
- Scale vs. Impact: Competing with trillion-dollar AI corporations will require precise, targeted interventions rather than broad strokes.
- Accountability: Measuring impact across social outcomes and policy changes is complex and time-intensive.
- Inclusivity: Ensuring that local and regional communities, especially in the Global South, are part of the decision-making process.
- Longevity: Building frameworks that outlast the initiative’s five-year timeline and sustain global collaboration.
The coalition’s cross-sectoral, multi-foundation and globally inclusive design suggests an understanding of these difficulties, but execution will determine its long-term relevance.
The launch of Humanity AI marks a quiet but significant shift in how global power structures approach artificial intelligence.
For years, the AI narrative has been dominated by corporations, venture capital and competitive innovation. Humanity AI represents something different, a collective moral experiment in shared technological stewardship. If successful, it could influence the next decade of AI governance, proving that human agency and public good can coexist with progress.
In a world racing to build smarter machines, Humanity AI offers a reminder that intelligence, whether artificial or otherwise, must always remain in service to humanity.