
Three Critical Questions for Funders Considering AI for Social Impact

AI is increasingly positioned as a solution to some of global development’s most persistent challenges—from helping smallholder farmers adapt to climate change and supporting community health workers in remote areas, to improving the targeting of cash transfers and strengthening supply chains for essential medicines.

Our conversations with funders in Africa and India reveal competing pressures: excitement about AI’s potential to expand reach and improve outcomes, alongside uncertainty about whether it will work in their specific contexts and what risks it might introduce.

The stakes are high. Without a systematic approach to AI deployment, funders risk investing millions in tools that never reach the communities they’re designed to serve, creating new inequities while trying to solve old ones, and eroding their credibility by backing technology that looks sophisticated on paper but fails in practice. These aren’t hypothetical scenarios. We are already seeing AI pilots that falter because they didn’t account for low connectivity, tools that undermine local workers rather than support them, and solutions built for one context or geography that fail when implemented in another.

Allocating already scarce resources to poorly fit AI tools can divert support from proven interventions or from more intentional implementations of AI. The time to get this right is now: as AI adoption accelerates, the patterns set today will shape how this technology serves (or fails) vulnerable communities for years to come.

Navigating these decisions is harder than with more established technologies. AI lacks the track record and precedent that funders typically rely on when assessing new tools, making it difficult to distinguish between transformative potential and expensive distraction.

What funders and communities need isn’t just another technological solution. Instead, they need a roadmap to assess when, where, and how AI can genuinely accelerate impact.

The three questions below provide funders with practical guidance for navigating these high-stakes decisions with clarity and intention.

1. What problem are we solving, and is AI the right tool to accelerate impact?

Funders often face pressure to move quickly on AI—whether from boards excited about innovation, partners requesting AI pilots, or peers already deploying tools. This urgency can make it tempting to start deploying solutions immediately rather than thinking through the implications. Below are two key considerations to clarify whether AI is the right tool:

Assess the gap in the system: Before thinking about AI, assess the system you’re trying to strengthen and identify the real gap, e.g., a decision that needs to improve, missing or unreliable data, an inefficient workflow, or insufficient human capacity. This helps prevent AI from becoming a high-tech overlay on a shaky foundation. Starting with a gap analysis is standard practice in most programming, but it can be skipped unintentionally when conversations revolve around the excitement of integrating AI tools.

Evaluate if AI adds meaningful, cost-effective value: AI can improve reach, accuracy, evidence-based decision-making, or workflow, but it comes at a significant financial cost (data collection, tool development, digital and physical infrastructure, user training) and in some cases may add avoidable complexity. A cost-benefit comparison with other tools can help funders decide whether AI is the right choice. One AI product company stressed that if a workflow can be automated with simpler tools like Excel, AI may only raise costs without improving outcomes.

Additionally, funders should evaluate which parts of the system require human judgment and cannot be AI-led without loss of quality. Community engagement to influence shifts in sanitation practices, for example, depends upon trust-building that technology simply can’t facilitate or replicate.

Getting clear on this first question helps to establish whether AI fits the problem you’re trying to solve. But even the right tool can fail if it doesn’t fit the context or geography where it will be used. The second question asks whether it can work in practice.

2. Can the AI solution actually work in the targeted regions?

Successful use of AI depends on people, institutions, and systems being able to absorb and work with the tool. A solution may look promising on paper yet fail in practice when users cannot navigate or trust it, when infrastructure can’t support it, or when political incentives work against it.

These considerations shift the question from “Should we use AI?” to “Can we effectively use AI here?”

Assess whether the tool fits hyperlocal realities: The tool should be suited to the digital infrastructure, language, and literacy levels of the targeted communities. For example, an agriculture AI company found that many farmers could not use photo-based or English-only tools, and that speech-to-text models failed in low-connectivity areas until redesigned.

Build around existing workflows and trust structures: AI succeeds when it strengthens the workflows people already rely on and when they trust the new technology. A mental health AI startup in Africa noted that it used AI to match existing village-level care providers with those in need more accurately, rather than adding a parallel tech-driven treatment platform. Similarly, in communities with low trust in technology, human-in-the-loop models (e.g., a trusted community member verifying the AI’s recommendations) can make AI more usable and culturally resonant.

Adapt to political economy: AI use is shaped by political incentives, bureaucratic structures, and evolving policy priorities. For example, countries in North Africa may insist on in-country data storage due to data-sovereignty concerns, while parts of East Africa may be more flexible.

Funders should map these dynamics early and adjust their approach, whether by building political buy-in, designing for current data-governance rules, or preparing to adapt as national AI policies continue to evolve.

Even an AI solution that fits both the problem and the context can still go wrong in implementation if it introduces new harms, which leads to the final question.

3. How will we identify and manage AI’s social and ethical risks?

AI brings new opportunities, but it also carries risks that can undermine equity and trust. When tools are not designed or deployed with care, they can shift who benefits, who is burdened, and who is left out entirely. Funders should manage three equity-related risks: changing livelihoods, data misuse, and inaccuracy of recommendations.

Anticipate and plan for shifts in livelihoods: AI changes who performs which tasks—automating work previously done by people, redistributing responsibilities, or eliminating certain roles entirely. The primary risk is job displacement. For example, AI beneficiary-verification tools could eliminate enumerator positions, while AI diagnostic tools might reduce demand for mid-level health workers. In some cases, AI may create opportunities rather than risks—an AI tutoring platform that automates test grading could free teachers to spend more time on direct student support. But even positive shifts require intentional planning to ensure workers can adapt.

The goal is to strike a delicate balance between protecting workers and enabling innovation. This includes:

  • Identify affected roles early: Map who currently performs tasks the AI will automate—both direct users and adjacent workers whose roles depend on those tasks—and assess how responsibilities might shift under different adoption scenarios.
  • Monitor for displacement signals: Track reduction in demand for specific roles, changes in how workers spend their time, or feedback indicating reduced reliance on human intermediaries.
  • Prepare mitigation pathways: Before risks materialize, establish options like retraining programs, integrating displaced workers into AI oversight roles, or adjusting deployment speed in vulnerable communities.

Protect and ensure responsible use of data: Equity and trust are compromised when data is collected or used in ways that communities do not understand, consent to, or benefit from. When communities are not informed about what data they are sharing, or when sensitive information is stored without adequate safeguards, that data can be misused and trust can erode quickly. Funders can manage this by ensuring data practices are transparent, compliant with local laws, and designed with user consent at the centre. In practice, this means informing users about what data is collected and why, explaining how it will be used, and using data only for its intended purpose.

Mitigate inaccurate recommendations: AI tools trained on datasets from other contexts (e.g., higher-income countries) often underperform in low-resource environments: misidentifying crops, misunderstanding disease symptoms, or issuing culturally irrelevant guidance. This inaccuracy can have concrete consequences: farmers applying ineffective treatments and losing income, or health workers missing critical diagnoses. Beyond the immediate harm, communities may lose trust in the tool and discontinue its use entirely.

Even when trained on the right data, AI is susceptible to “hallucinations”: generating confident but incorrect answers that can mislead users. Funders can mitigate this by (a) training the AI tool on local data and (b) embedding human oversight for high-stakes recommendations, e.g., having community-based farmers verify AI-generated agricultural advice before it is acted upon.

Conclusion

Funders are navigating a moment when AI’s momentum is outpacing the systems meant to guide its use. Achieving impact through AI will require clarity of purpose, intentional deployment, and responsible risk management.

What funders need is not just another technological tool but a compass to guide its use. By asking these three questions early, funders can move past the hype and towards solutions that deliver meaningful social impact.
