This blog was originally posted on the Impact Europe website on September 30th, 2025 and reposted on DATA4Philanthropy. Please find the original blog here.

Impact Europe recently convened a thought-provoking roundtable moderated by Ciro Cattuto, the Scientific Director of ISI Foundation and Board Member of OGR – CRT. The conversation brought together two leading voices in the field: Stefaan and Carla.

Together, they shed light on both the opportunities and tensions that artificial intelligence (AI) presents for the impact ecosystem.

Polarised Perspectives on AI

Stefaan opened the conversation by highlighting the divide in how AI is perceived across the world. While investors and users in the Global North often express fears and emphasise regulatory guardrails, the Global South tends to approach AI with optimism and urgency, seeking to harness its benefits without delay.

He pointed to the ongoing polarisation of the AI conversation: fear of risks on one side, excitement about potential on the other. Investors, for instance, remain cautious, noting they have not yet seen clear evidence of AI’s impact returns. Recordings of related sessions are available here: Data on Purpose 2025 Session Recordings, which kicked off with a keynote conversation between Mike Kubzansky, CEO of Omidyar Network, and Priya Shanker, Executive Director of the Stanford Center on Philanthropy and Civil Society.

Data: The Core Challenge

A recurring theme was the decline in data accessibility. Stefaan warned that data holders are increasingly wary due to extractive corporate behaviour. This, he argued, calls for new institutional arrangements around data use, such as data commons.

He stressed that governance must evolve: while structured datasets remain essential, AI also unlocks new possibilities for unstructured content. Still, issues like copyright, privacy, and deepfakes demand urgent upgrades in data and AI governance frameworks. Two publications in the Stanford Social Innovation Review, by Reframe Venture (Responsible and Impactful Data and AI) and Better Society Capital (We Tested AI Impact Assessments - Here's What We Learned), offer interesting insights on the responsible use of data and on the use of AI in due diligence, respectively.

Talent, Sovereignty, and Dependency

Another pressing concern is the war for AI talent. With private sector salaries skyrocketing, it becomes harder for non-profits and foundations to retain the expertise needed to develop AI for good.

Stefaan also underlined the danger of dependency: access to data and infrastructure can be weaponised by governments or corporations. This makes digital sovereignty and digital self-determination critical. However, sovereignty alone is not enough—the real question is how society collectively defines the problems AI should solve.

Augmentation over Automation

When considering AI’s role in philanthropy and beyond, Stefaan emphasised the importance of augmentation rather than automation. He pointed to several potential applications across the grantmaking cycle—from problem identification to partner matching, project management, and impact assessment—where AI can complement rather than replace human judgment.

La Caixa Foundation’s AI Journey

Carla shared La Caixa Foundation’s pioneering use of AI in managing research project proposals. Their journey began out of necessity: with almost a thousand applications expected in the first edition of the call and limited staff to process them, traditional manual matching of proposals to reviewers was unsustainable.

The success of the AI-assisted tools for the matching process led the foundation, in 2021, to launch a pilot project testing AI in the content-eligibility phase of grant selection. Proposals were assessed by three algorithms, and only those unanimously flagged as weak were sent to two human reviewers, who confirmed or rejected the flag. If even one of the reviewers had doubts about the flag, the application was returned for full review. Importantly, no personal data was used, only project information, ensuring safety, privacy, and a lower risk of bias.
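As a rough illustration of the screening rule Carla described, the sketch below models the unanimous-flag logic in Python. The data structures, function names, and callables are hypothetical assumptions for illustration, not La Caixa Foundation’s actual implementation.

```python
# Hypothetical sketch of the hybrid screening rule described above.
# The three algorithms and the two human reviewers are stood in for by
# simple callables; names and types are illustrative only.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Proposal:
    proposal_id: str
    text: str  # project information only; no personal data


def screen_proposal(
    proposal: Proposal,
    algorithms: List[Callable[[Proposal], bool]],    # each returns True if "weak"
    human_reviews: Callable[[Proposal], List[bool]],  # two reviewers confirm (True) or not
) -> str:
    """Return the routing decision for a single proposal."""
    flags = [algo(proposal) for algo in algorithms]

    # Only a unanimous "weak" flag from all algorithms triggers human review.
    if not all(flags):
        return "regular evaluation"

    confirmations = human_reviews(proposal)
    # Both reviewers must confirm the rejection; a single doubt returns the
    # proposal to the ordinary evaluation pathway.
    if all(confirmations):
        return "rejected at eligibility stage"
    return "regular evaluation"
```

In this formulation the algorithms only ever narrow the pool that humans look at for early rejection; no proposal is removed without human confirmation, which matches the human-oversight principle described next.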

This hybrid model, combining AI with human oversight, allowed the foundation to process over 700 proposals with a success rate of around 2.5–4%, accelerating timelines and improving fairness while removing around 12% of the initial proposals from the ordinary review process.


From Pilots to Pioneering Practices

Over the past four years, La Caixa Foundation has refined its AI processes, working closely with legal and compliance teams to prevent bias and uphold ethics. Their approach was presented at the Meta Science Conference 2025, where they stood out as one of the few foundations to have fully integrated AI into their operations.

Key milestones included:

  • Applying the NIH MeSH (Medical Subject Headings) tree structure for keywords, ensuring granularity in proposal assessment (see the sketch after this list).
  • Applying AI to detect possible mistakes in feedback reports provided to applicants.
  • Using AI algorithms to detect proposals with low probability of being selected and removing them from the evaluation pathway after human confirmation.
  • All of this was developed with a UPC startup and is open to adaptation by other foundations.
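To make the first milestone more concrete, here is a minimal, hypothetical sketch of how a hierarchical keyword vocabulary such as MeSH allows assessment and matching at different levels of granularity. The tiny tree, terms, and helper functions below are illustrative assumptions, not the foundation’s actual system.

```python
# Hypothetical illustration of a MeSH-like keyword tree supporting matching
# at different levels of granularity. The hierarchy below is made up.

from typing import Dict, List, Optional

# child -> parent edges of a miniature, illustrative subject hierarchy
PARENT: Dict[str, Optional[str]] = {
    "Neoplasms": None,
    "Lung Neoplasms": "Neoplasms",
    "Small Cell Lung Carcinoma": "Lung Neoplasms",
}


def ancestors(term: str) -> List[str]:
    """Return the term plus all of its broader terms, most specific first."""
    chain: List[str] = []
    current: Optional[str] = term
    while current is not None:
        chain.append(current)
        current = PARENT.get(current)
    return chain


def keyword_overlap(proposal_terms: List[str], reviewer_terms: List[str]) -> int:
    """Count shared subject areas at any level of the hierarchy, so a very
    specific proposal keyword can still match a broader expertise keyword."""
    expanded_proposal = {t for term in proposal_terms for t in ancestors(term)}
    expanded_reviewer = {t for term in reviewer_terms for t in ancestors(term)}
    return len(expanded_proposal & expanded_reviewer)


# Example: a proposal tagged very specifically still overlaps with a reviewer
# whose expertise is tagged at a broader level of the same branch.
print(keyword_overlap(["Small Cell Lung Carcinoma"], ["Lung Neoplasms"]))  # -> 2
```

The design choice illustrated here is simply that a tree-structured vocabulary lets the same keyword be compared at coarse or fine levels, which is what makes granular proposal assessment possible.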

Carla noted that while foundations have no problem experimenting with lotteries to decide among equally strong applications, resistance to AI is paradoxically stronger—even though AI offers greater transparency and data-driven fairness.

Lessons Learned

Both speakers underscored the importance of:

  • Safe spaces for foundations to share experiences and concerns.
  • Transparency, such as publishing guidelines and processes.
  • Gradual scaling, starting with urgent needs and piloting responsibly before expanding.
  • Ethical safeguards, with close collaboration between program teams, legal, and compliance experts.

Looking Ahead

The roundtable made clear that AI’s role in philanthropy and impact is still unfolding. While risks of bias, dependency, and misuse remain, pioneering foundations like La Caixa are proving that AI can accelerate and strengthen the grantmaking process when applied thoughtfully.

The journey ahead requires collaboration: bridging optimism and caution, aligning data governance with AI governance, and ensuring society itself has a voice in shaping how AI is used.

You can continue the debate with us at Impact Week Malmö. We’re bringing Europe’s AI-for-impact investors and thought leaders together for a programme that shines a light on solutions and controversial questions alike. Join us, 18–20 November.