
Image from Unsplash by Cash Macanaya

Nonprofits often rely on interviews, focus groups, and open-ended surveys to understand community needs and assess program effectiveness. While these methods provide valuable insights, analyzing them at scale can be time- and resource-intensive. Questions remain about whether generative artificial intelligence (GenAI) can support qualitative analysis in philanthropy while capturing the nuances of the underlying information sources.

Candid, a nonprofit organization that provides data and research on the social sector, published an article titled ‘Can nonprofits leverage AI to understand community feedback?’, authored by Stephanie Wormington. The article describes an internal experiment in which Candid’s research team used large language models (LLMs), specifically Claude, to analyze interview data, with the goal of understanding when AI can meaningfully support sense-making.

The article examines how AI-assisted analysis and human judgment can be combined to surface patterns in qualitative data. It highlights aspects of the analytic process, including how the interview data was prepared and how expectations were defined for both AI and human reviewers.

Candid's Experiment

Candid’s research team tested a hybrid approach by asking both the LLM and human researchers to analyze 24 interviews with nonprofit leaders. 

Across 11 interview questions, both the AI and human reviewers identified many of the same themes and selected similar quotes to illustrate them. The number of interviews associated with each theme was also similar, differing by no more than one interview in each case. Taken together, these results suggested that both approaches were identifying the same underlying patterns.
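Candid does not publish its analysis code, but as a rough illustration of this kind of side-by-side comparison, the hypothetical sketch below takes theme-to-interview mappings produced by an AI reviewer and a human reviewer and reports, for each theme, how many interviews each coder associated with it and how far the counts diverge. All theme labels and interview IDs here are invented for illustration, not Candid's actual data.

```python
# Hypothetical sketch: comparing theme-to-interview mappings from an AI
# reviewer and a human reviewer. All themes and interview IDs are invented.

# Each reviewer's output: theme -> set of interview IDs where it appeared.
ai_themes = {
    "funding uncertainty": {1, 2, 5, 8, 11, 14},
    "staff burnout": {3, 5, 9, 12},
    "community trust": {2, 6, 7, 13},
}
human_themes = {
    "funding uncertainty": {1, 2, 5, 8, 11, 14, 17},
    "staff burnout": {3, 5, 9, 12},
    "community trust": {2, 6, 13},
}

for theme in sorted(set(ai_themes) | set(human_themes)):
    ai_ids = ai_themes.get(theme, set())
    human_ids = human_themes.get(theme, set())
    overlap = ai_ids & human_ids
    print(
        f"{theme}: AI={len(ai_ids)} interviews, "
        f"human={len(human_ids)} interviews, "
        f"count gap={abs(len(ai_ids) - len(human_ids))}, "
        f"shared={len(overlap)}"
    )
```

In Candid's experiment, the count gap was at most one interview per theme; a larger gap, or a low shared count despite similar totals, would flag a theme for closer human review.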

At the same time, the experiment revealed differences in how AI and human reviewers contributed to the analysis. AI was able to process long interview transcripts quickly and apply a consistent analytic approach across all responses. Human reviewers, by contrast, were better positioned to interpret cultural context, coded language, and intersectional experiences. For example, while the AI identified gender-related themes, human reviewers were better able to recognize how those experiences varied across leaders from different racial and ethnic backgrounds.

How AI and Human Reviewers Can Work Together

Based on this experiment, Candid suggests that combining AI-assisted coding with human oversight can make qualitative analysis more efficient without sacrificing depth. Rather than replacing human judgment, AI can help process large volumes of unstructured data, allowing human reviewers to focus on interpretation and review.

To operationalize this approach, Candid outlines several practical steps:

  • Selecting reviewers: Organizations should identify appropriate AI tools and human reviewers based on the nature of the data and the questions being asked. This includes assessing AI tools’ data security practices and alignment with organizational policies.

  • Providing structured instructions: Clear guidance is essential for both AI and human reviewers. Instructions should specify the role reviewers should adopt, the context of the data, the questions being analyzed, and the expected outputs (such as themes, counts, or summaries); a minimal, hypothetical prompt sketch follows this list.

  • Comparing outputs: Human reviewers should assess where AI and human analyses align, where they offer complementary insights, and where differences require further review.
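As one concrete (and entirely hypothetical) way to put the structured-instructions step into code, the sketch below sends an interview transcript excerpt to Claude via Anthropic's Python SDK, with a system prompt specifying the reviewer role, the context of the data, the question being analyzed, and the expected output. The model name, prompt wording, and output format are assumptions for illustration; the article does not publish Candid's actual prompts.

```python
# Hypothetical sketch of "structured instructions" for an AI reviewer,
# using Anthropic's Python SDK (pip install anthropic). The role, context,
# question, and output format below are illustrative, not Candid's prompts.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Instructions mirror the article's guidance: the role the reviewer should
# adopt, the context of the data, the question being analyzed, and the
# expected outputs.
SYSTEM_PROMPT = """You are a qualitative researcher coding interview data.
Context: transcripts of interviews with nonprofit leaders about their
organizations' challenges and community feedback.
Task: for the interview question given by the user, identify the main
themes in the transcript excerpt.
Output: for each theme, give a short label, a one-sentence description,
and one verbatim quote that illustrates it."""

def code_transcript(question: str, transcript: str) -> str:
    """Ask the model to code one transcript excerpt for one question."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name; substitute your own
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[
            {
                "role": "user",
                "content": f"Interview question: {question}\n\nTranscript:\n{transcript}",
            }
        ],
    )
    return response.content[0].text

# Example usage with placeholder text:
# print(code_transcript(
#     "What is the biggest challenge facing your organization?",
#     "...transcript excerpt here...",
# ))
```

Outputs from a call like this would then feed the comparing-outputs step, with human reviewers validating the themes and quotes before any findings are reported.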

Screenshot of article from Candid, ‘Can nonprofits leverage AI to understand community feedback?’

Three Key Takeaways for the DATA4Philanthropy Network

  • AI as an Analytic Support Tool: Candid’s experiment suggests that LLMs can support qualitative analysis by processing large volumes of information quickly, identifying patterns, surfacing key quotes, and assisting with the initial sense-making of unstructured feedback.

  • The Importance of Human Review: While AI and human reviewers surfaced similar patterns, Candid’s findings underscore the continued importance of human oversight in interpreting qualitative data. Human reviewers were better able to recognize cultural nuance, coded language, and intersectional experiences.

  • Using AI Responsibly: Candid’s article highlights several considerations for responsible AI-assisted qualitative analysis, including providing clear analytic instructions, protecting participant privacy, and maintaining human oversight throughout the analysis process. In practice, this involved defining reviewer roles and expected outputs in advance, assessing AI tools’ data security practices, and having human experts compare and validate AI-generated outputs before producing final findings.

Read the full article here

***

What other topics would you like to see on DATA4Philanthropy? Let us know by emailing us at DATA4Philanthropy@thegovlab.org.

Or do you know of any great case studies that should be featured on the platform? Submit a case study to the DATA4Philanthropy website here.

Stay up-to-date on the latest blog posts by signing up for the DATA4Philanthropy Network here.