
Beyond the Desk: Why Primary Research Matters for Social Impact Programming

Vedika Agrawal

Motilal Oswal Foundation

Social sector practitioners are debating the role of AI in advancing development goals, including how AI is reshaping the way we gather and analyse information. As AI accelerates access to data through desk research, its tools promise comprehensive insights at lower cost. However, this perceived promise of what AI can do is contributing to a worrying trend: reduced emphasis on primary research.

Primary research is a method of gathering new, original data directly from ground-level stakeholders (e.g., through interviews, surveys, observations, or focus groups) to answer a specific question. Unlike secondary research, which interprets information collected by others (e.g., in published reports), primary research provides direct, current, and reliable information.

Through more than a decade of going to the field, meeting with communities, and listening to their lived experiences, we know that primary research uncovers nuance, fills critical knowledge gaps, and surfaces our own biases in ways that AI-generated research never could.

A shift away from primary research risks developing programs that look sophisticated on paper but ultimately fail the communities they aim to serve. As we consider this trend, we ask: In the age of AI and on-demand access to information, what is the role of primary research in designing programs that deliver impact?

Why Primary Research Still Matters

The answer lies in understanding what primary research uniquely provides to ensure interventions actually work, even as desk research and AI analysis grow increasingly sophisticated.

Primary research plays a vital role in grounding social impact work in reality, to make solutions effective and inclusive:

  • Fills critical data gaps, particularly in low-income contexts where customer preferences and behaviours may not be studied systematically.
  • Goes beyond describing what is happening to uncover why people make certain choices, resist certain technologies, or embrace others.
  • Checks implicit biases, as even experienced practitioners carry assumptions shaped by reports, datasets, and frameworks.

Practitioners often ask: why can’t AI do this? While AI can rapidly synthesise existing information, it is fundamentally limited by the data it is trained on. Large language models (LLMs) generate answers by identifying patterns in data that already exists. This can be powerful for spotting macro trends or for early-stage research. But in the social sector, challenges are deeply contextual: two districts may face identical challenges yet have entirely different root causes. For example, water access constraints in neighbouring communities can stem from a lack of political incentives in one and unserviceable terrain in the other. AI cannot fill data gaps in places where information is scarce, which is the case for most low-resource contexts.

Additionally, if the underlying data carries biases, as it often does, AI can inadvertently amplify inequities by inheriting the blind spots of the data. If certain geographies, populations, or behaviours are under-represented in existing datasets, LLMs will generate insights that reflect those gaps and reinforce patterns, e.g., about women’s labour participation, informal workers’ reliability, or the “creditworthiness” of low-income households. Even when the bias is subtle, AI can flatten nuance by averaging across contexts, creating “typical” profiles or explanations that erase local differences. In a sector where inequity often stems from precisely these overlooked differences, e.g., gender norms, informal power structures, relying on AI alone risks designing programs around what is statistically common rather than what is locally true.

The significance of primary research isn’t just theoretical; it emerges consistently in practice. As advisors and implementers of social impact programs, we have spoken to thousands of households, stakeholders, and market actors across sectors. These experiences have reshaped early hypotheses and yielded actionable insights, as the following examples show.

Example 1: Formalising private water vendors in Kenya

We wanted to understand how utilities could formalise private water vendors to improve access to safer water. Our early hypothesis was that the barrier to formalising vendors was finding and convincing them. We assumed vendors would resist formalisation because it might reduce their profits or territories.

However, interviews with vendors and utilities revealed that vendors welcomed formal status, as it improved their legitimacy with communities and their trust with utilities. Furthermore, vendors embraced safety standards: their businesses remained viable, and the benefits of recognition outweighed the costs.

Primary research helped uncover the real barrier: local authorities’ reluctance to formalise vendors, viewing them as competitors rather than partners, and utilities’ limited awareness of how formalisation could extend their reach without compromising economics or water quality.

This shifted our focus from designing vendor onboarding and compliance approaches to building political will and collaborative business models. Had we relied on AI-generated desk research, we would likely have reinforced a control-oriented approach towards vendors, mirroring how many countries still engage with private providers, rather than identifying partnership and local political will as the true leverage points.

Example 2: Parents’ Preferences for Early Education in India

Our objective was to improve early education for children from low-income families in India. Our starting assumption was that parents from these families did not prefer private schooling for early education. However, surveys with more than 5,000 low-income families revealed that many parents were actively choosing affordable private schools (APSs), believing these schools offered better early learning for English and Mathematics.

This insight allowed us to support our partners, providers of activity-based learning (ABL), in encouraging APSs to adopt ABL solutions that improve learning outcomes.

Without this primary research, we would have focused on government schools as the primary pathway for impact, given their scale, and overlooked APSs as a critical segment for improving early education for our target group. To date, more than 150,000 children in APSs have gained access to ABL, and children in APSs have shown improved learning compared to their peers in schools without ABL.

Why It’s Hard—And How to Think About Doing It

Given the benefits of primary research, what is hard about doing it? We see three key concerns and share our perspective on how to navigate them:

Lack of clarity on the depth required: Practitioners often wonder how much primary research is “good enough”: should they plan for large targeted samples, diverse samples, or both? Some typical use cases can provide guidance:

  • Quick interviews with a small sample to uncover behaviours.
  • Focus groups to explore the “why” behind behaviours.
  • Small but representative surveys to validate the prevalence of behaviours.
  • Large-scale quantitative surveys for setting baselines or evaluating outcomes.

Effectiveness comes from carefully choosing the right method for the stage of research. Practitioners can start small and be iterative, as even a few interviews with stakeholders can uncover significant insights.

Initial cost and time investment: Primary research does require an upfront investment of time and resources, which can be at odds with the urgency to roll out programs or disburse funding. Yet the return on investment is almost always higher: the cost of good research is far lower than the cost of flawed program design. Smarter design can also reduce costs, e.g., through targeted short surveys, SMS-based polling, or interviews where relationships already exist.

Furthermore, AI solutions offer promising ways to reduce some of the burden, such as lowering data collection costs, synthesising interview notes, or spotting early patterns across transcripts. But AI should complement fieldwork; its outputs still depend on human insight from communities. Instead of choosing between AI and primary research, a better approach is to combine them so that community voice is captured more efficiently.

Hesitation to conduct research in unfamiliar cultural contexts: Practitioners may hesitate to conduct primary research in new geographies, unsure of how to ask sensitive questions or interpret community dynamics. A common approach is to use cultural brokers or hire an external partner that has experience in multiple contexts. Additionally, partnering with local NGOs, community-based organisations, or experienced researchers brings cultural fluency, existing trust, and local knowledge.

Returning to our opening question: What is the role of primary research in designing programs that deliver impact, especially in the age of AI? Primary research brings depth and nuance to all impact interventions. It will always take more effort than pulling data from a report or running an AI-powered scan of the literature. However, the cost of good research is always lower than the cost of bad program design, wasted resources, and interventions that don’t resonate with the people they aim to serve. Primary research must remain at the centre of how we design for social impact.
