Generative AI for Development: A Compendium of Insights from Grand Challenges Innovators
Report · May 2025


Aarushi Gupta

Read all articles in the Compendium here.

In late 2022, OpenAI launched ChatGPT, sparking a global surge of interest in the potential of generative AI (GenAI) technologies. Governments, businesses, and individuals quickly began to integrate these powerful tools into their daily processes and lives, heralding a new era of technology adoption.

However, as AI adoption accelerates, a crucial question emerges: Can these technologies drive real impact in low- and middle-income countries (LMICs), where socio-economic challenges are most acute?

While wealthier nations integrate GenAI into their economies and institutions, its potential to address issues such as education access, healthcare disparities, and economic inclusion in LMICs remains largely unexplored. The challenge is not just about access but also about designing GenAI solutions that are contextually relevant, equitable, and capable of serving diverse, resource-constrained communities.

Moreover, in many LMICs, where digital divides, infrastructural constraints, and linguistic diversity pose significant barriers, the question is not just about AI adoption but about its meaningful and equitable application.

To explore this, the Gates Foundation launched the Global Grand Challenges program on Catalyzing Equitable Artificial Intelligence (AI) Use in May 2023, investing up to $7.5 million in 50 projects across Africa, Asia, and Brazil. These grantee organisations seek to test the potential of GenAI — particularly Large Language Models (LLMs) — in tackling some of the world’s most complex development challenges.

A network of experts, led by the South Africa-based Global Center on AI Governance, was also appointed to support the grantees in strengthening the ethical and gender-transformative components of their work and in generating evidence about the potential of GenAI to address socio-economic challenges in low-resource settings.

This Compendium brings together writings from grantees of the Global Grand Challenges program, offering critical perspectives on the design, deployment, and potential impact pathways of GenAI in the Majority World. It compiles emerging lessons, insights, and recommendations on how to leverage LLMs effectively and responsibly in development contexts.

Grounded in the experiences of locally embedded practitioners, it explores both the opportunities and challenges of leveraging GenAI in diverse socio-economic and linguistic contexts. Through empirical case studies and practitioner reflections, it highlights the adaptive strategies necessary to ensure that LLMs are not only accessible and effective but also socially beneficial in these settings.

A core theme that emerges is the intentional design and customisation of LLMs to better serve underrepresented communities. Several contributions, including those from the Indian Institute of Science (article 6) and The George Institute for Global Health (article 3), highlight the importance of co-creation, emphasising participatory methodologies and iterative development in GenAI tool-building. Similarly, ARMMAN’s use of Wizard of Oz prototyping (article 1) showcases innovative techniques to ensure AI solutions are closely aligned with real-world user needs.

Beyond design, ensuring that the evaluation of LLMs is contextually grounded remains a critical challenge. Standard LLM benchmarks often fail to capture the linguistic, socio-cultural, and infrastructural realities of Majority World settings, making it essential to develop more contextualised evaluation metrics, datasets, and methodologies. Contributions such as those from Brazil’s Federal University of Minas Gerais (article 9) underscore the importance of human-led assessments that go beyond accuracy to evaluate bias and the cultural appropriateness of LLM tools. Their work also highlights the pressing challenge of linguistic representation in LLMs, as most of the available pre-trained models are trained primarily on English-language data from Western contexts.

This geographical skew in LLM training datasets is also illustrated by the joint study from Oxford University and Pakistan’s Shaukat Khanum Memorial Cancer Hospital and Research Centre (article 10). Their work revealed that models trained on publicly available datasets struggled to capture the nuances of South Asian medical contexts compared to those fine-tuned on contextually grounded datasets.

Finally, this Compendium surfaces novel and critical use cases emerging from Majority World contexts, where GenAI is being applied to address pressing socio-economic and environmental challenges in hyper-local contexts. NướcGPT, developed by Fulbright University Vietnam and Nuoc Solutions, helps farmers and policymakers in the Mekong Delta access real-time climate data on salinity intrusion in the Vietnamese language (article 11). In healthcare, AI-driven tools like SuSastho.ai in Bangladesh (article 12) and Kem by Nigeria’s mDoc Healthcare (article 15) are using LLMs to enable access to critical health information — whether by providing confidential adolescent health guidance or supporting maternal care through virtual coaching. Meanwhile, FillFast is using LLMs to enhance digital health record management by automating data entry, streamlining workflows, and improving interoperability between health systems (article 13).

With this Compendium, we hope to contribute towards building an evidence base on responsible AI development in LMICs. We emphasise that AI solutions must be more than just technologically sophisticated — they must be ethically grounded and responsive to the realities of communities in the Majority World. By synthesising insights from both implementation and evaluation, we highlight not only the potential of LLMs in addressing socio-economic challenges but also the complexities and limitations that must be navigated. We hope that this collection serves as a foundation for future research, policy, and practice, fostering more equitable and contextually relevant GenAI development.

Finally, we would like to extend our gratitude to the Gates Foundation, which made this work possible through its support of and commitment to the Grand Challenges community. We also extend sincere thanks to the experts — Dr Rachel Adams, Dr Fola Adeleke, Dr Urvashi Aneja, Dr Rosalind Parkes-Ratanshi, Dr Leah Junck, João Victor Archegas, Mark Gaffley, and Samuel Segun — who reviewed these contributions, offering invaluable insights that strengthened the depth and rigour of this collection.