Philanthropic foundations around the world are beginning to experiment with artificial intelligence (AI) to review proposals, stay up to date on the latest research, communicate insights to different audiences, and more. However, questions remain about where AI is most valuable across the grantmaking cycle, when it should not be used, and what practices and policies are needed to ensure it is applied responsibly.

To address these questions, DATA4Philanthropy reviewed how AI is being used across the grantmaking cycle: problem definition, prioritization, strategy development, partner identification, grant management, and evaluation and learning. Drawing on desk research conducted between July and December 2025, the primer highlights several examples of philanthropies already using AI in their work and how they incorporate human judgement throughout the process. It concludes with a series of recommendations on how philanthropies might begin experimenting with AI.
In what follows we highlight the main takeaways from this piece. However, it is important to note that the AI landscape is rapidly evolving. This primer provides a snapshot of a particular point in time and may not reflect the latest AI developments and recommendations.
How AI is being used across the grantmaking cycle
| Grantmaking Phase | Application Areas | Conditions and Requirements | Examples |
| --- | --- | --- | --- |
| Problem Definition | Reviewing reports, research, past grants, and other materials to better understand an issue area, spot gaps, and identify emerging needs or trends. | A clear purpose tied to the question being explored; data that reflects the populations and issues of interest; regular checks to avoid reinforcing blind spots or biased assumptions. | King Baudouin Foundation: Used natural language processing (NLP) to categorize its grant portfolio, surface emerging themes, and identify gaps not visible through manual review. |
| Prioritization | Grouping similar proposals, summarizing community or stakeholder input, and helping staff compare options against strategic goals. | Human reviewers retain final authority; transparent criteria for comparison; safeguards to detect both algorithmic and human bias; clear communication and appeal routes for applicants. | "La Caixa" Foundation: Piloted NLP-based prescreening to cluster research proposals, while retaining human reviewer control over final decisions. |
| Strategy Development | Synthesizing insights from historical grants and evaluations, evaluating theories of change, testing alternative scenarios, and monitoring stakeholder sentiment or public discourse. | Alignment with mission and long-term goals; collaboration across teams; clarity about when and how AI is used; governance that builds on existing data and AI policies. | Large US foundation (anonymized): Uses an NLP-based knowledge system to query past grants and evaluations to inform strategy and test assumptions. |
| Partner Identification | Mapping who is working on similar issues, identifying potential partners, and surfacing organizations that may be overlooked. | Human oversight of AI-suggested matches; deliberate pairing of AI-assisted discovery with field engagement to reach emerging or less visible actors; regular review of models for bias. | Giving Balkans / CiviGraph: Uses AI-assisted network mapping to visualize philanthropic activity and coordination opportunities; Altruist League: Applies machine learning to support funder-partner matching. |
| Grant Management | Supporting proposal intake and review by summarizing submissions, checking eligibility, flagging financial or operational risks, and retrieving relevant precedents. | Clear decision criteria; staff remain final decision-makers; strong data privacy protections; careful use of commercial tools. | Patrick J. McGovern Foundation: Uses generative AI to guide applicants and support staff review; Grant Guardian: Summarizes nonprofit financial filings to support due diligence. |
| Evaluation and Learning | Synthesizing grantee reports, evaluations, and external literature to understand what is working and support learning across programs. | Attention to long-term and complex outcomes; regular updating and documentation; awareness of data and capacity limits. | Wellcome Trust: Uses semantic linking to connect grants with resulting research publications; Fondazione AIS: Applied AI to synthesize fragmented impact frameworks into a shared learning model; Better Society Capital: Tested generative AI to apply impact frameworks across its portfolio. |
| Cross-Cutting Applications | Supporting everyday work across the organization, including internal synthesis and decision support, organizing documentation and institutional memory, applicant-facing engagement and feedback, inter-foundation knowledge sharing, and communication of findings to different audiences. | Organization-wide guidance on AI use; strong data governance; attention to vendor choice, mission alignment, and environmental footprint; use of automation to enhance staff judgment rather than replace it. | Annenberg Foundation: Implemented organization-wide AI governance, staff training, and internal tools for document management and synthesis. |
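Several of the application areas above, such as grouping similar proposals or categorizing a grant portfolio, rest on the same basic idea: represent each text as a vector and measure how much the vectors overlap. The sketch below is purely illustrative and is not drawn from any foundation's actual system; the proposals and the 0.3 similarity threshold are hypothetical, and real tools typically use language-model embeddings rather than raw word counts, with human reviewers retaining final authority over any grouping.

```python
# Illustrative sketch: flagging similar grant proposals by vocabulary overlap.
# Hypothetical data and threshold; not any foundation's actual pipeline.
from collections import Counter
from itertools import combinations
import math

def bag(text: str) -> Counter:
    """Word-count vector for a proposal summary."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors (0 = no overlap)."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

proposals = {
    "A": "scholarships for first generation university students",
    "B": "emergency food assistance for rural communities",
    "C": "mentoring for first year university students",
    "D": "community food banks and food assistance programs",
}

# Pairs whose summaries overlap enough to be reviewed side by side.
similar = [
    (x, y) for x, y in combinations(proposals, 2)
    if cosine(bag(proposals[x]), bag(proposals[y])) > 0.3
]
print(similar)  # → [('A', 'C'), ('B', 'D')]
```

Even in this toy form, the example shows why the "Conditions and Requirements" column matters: the threshold is an arbitrary choice, and a cruder text representation would miss proposals that are related in substance but phrased differently.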
Key takeaways
- AI use remains nascent: AI is being applied unevenly across the philanthropic sector. Foundations are interested in exploring its potential but remain cautious. While this primer highlights several examples of its use, the majority of AI use cases have not been made publicly available, reflecting a broader trend of limited transparency across the ecosystem.
- Using AI to improve efficiency: The case studies identified tend to focus on using AI to manage large quantities of information and improve the speed of tasks. This includes summarizing long reports, comparing large numbers of applications, retrieving past decisions, and synthesizing findings across portfolios.
- The importance of human judgement: The case studies highlight the continued importance of using AI to complement or support human judgement rather than replace it. They underscore the need for clear decision criteria, attention to data quality and bias, and deliberate efforts to ensure that AI does not crowd out relational work or overlook less visible actors.
Click here to read the full primer.
***
What other topics would you like to see on DATA4Philanthropy? Let us know by emailing us at DATA4Philanthropy@thegovlab.org. Or, do you know of any great case studies that should be featured on the platform?
Submit a case study to the DATA4Philanthropy website here.
Stay up-to-date on the latest blog posts by signing up for the DATA4Philanthropy Network here.