Artificial intelligence is often framed as a tool for making operations faster, cheaper, and more scalable, which makes it particularly appealing to philanthropy. AI promises better targeting of welfare, insights from complex data, and new ways to reach historically underserved communities. However, Dr. Sarayu Natarajan, Founder of the Aapti Institute, emphasized at the Asia-Pacific Meeting on AI and Philanthropy that treating AI as just another tool overlooks the distinctive conditions shaping its development: the concentration of data and influence in a few private companies, the retreat of governments from parts of public life, and digital systems that fragment shared understanding of the world. These conditions determine not only what AI can do but also how it affects society.
For philanthropy, the question is no longer whether AI will have an impact, but what role philanthropic actors choose to play within this ecosystem. Natarajan noted that philanthropy is often absent from AI governance and policy discussions, even though its values are directly implicated. Entering the conversation late risks endorsing standards already set and harms already occurring. She framed philanthropy’s responsibilities as twofold: doing more of the good, and doing less of the harm.
Doing more of the good involves supporting public-interest AI—technology designed for equity rather than profit or scale. Examples include language-based tools that help individuals access government services, particularly in multilingual contexts where language barriers can exacerbate exclusion. Philanthropy can invest in inclusive data, better oversight, and capacity-building to ensure such tools are effective.
Capacity is as critical as data. Many organizations and communities hold rich qualitative knowledge but lack the means to analyze or act on it. AI can help integrate insights from qualitative material, enabling philanthropies and governments to identify patterns, target support efficiently, and reconsider metrics of impact. Natarajan emphasized that technological failure is often human in origin: systems fail when they ignore how people live and make decisions. Supporting the people who use technology therefore matters as much as funding the technology itself.
The second responsibility, limiting harm, is more complex but equally vital. AI threatens democracy through the rapid spread of misinformation, subtly constrains personal freedoms through the algorithmic shaping of choices, and carries significant environmental costs because of its resource demands. Philanthropy can help by funding research, public discourse, and initiatives that prioritize ethical, social, and environmental goals, even when such efforts are politically or operationally challenging.
Despite the risks, Natarajan argued that AI can be transformative if used carefully. Effective use requires long-term funding for public-interest technology, ethical decision-making in daily practice, and support for initiatives prioritizing social and environmental outcomes. She concluded by stressing that philanthropy must act with confidence and purpose, shaping the conditions in which AI is built and used to ensure care, inclusion, and accountability.