
Beyond Semantics: Examining Gender Bias in LLMs Deployed within Low-resource Contexts in India
Like many other countries, India has witnessed a surge in LLM-based applications and initiatives, with multiple players directing their efforts towards building and customising LLMs better suited to the country's socio-demographic characteristics. However, alongside this rise in popularity, numerous studies and anecdotes have emerged highlighting harms such as misinformation, discrimination, and bias.
Gender bias in LLMs – defined as the tendency of these models to reflect and perpetuate stereotypes, inequalities, or prejudices based on gender – has received significant scholarly attention in the last few years.
However, only a handful of studies have analysed this issue against the backdrop of India’s sociocultural setting, and almost none (to the best of our knowledge) have looked at it in relation to critical social sectors.
We therefore undertake an exploratory study to understand the different sources and manifestations of gender biases in LLMs customised for Indian languages and deployed in resource-constrained settings.
Through detailed key informant interviews with LLM application developers, field visits to deployment and testing sites, prompting exercises, and expert workshops, we unpack gender-related concerns that emerge at each stage of the LLM lifecycle. In doing so, we shift away from a narrow construct that views gender bias in LLMs solely as prejudiced semantics fixable through improved engineering. Instead, we recognise it as a reflection of broader structural inequities that demand a more grounded, interdisciplinary effort to understand and address.
This paper is based on our previous work, From Code to Consequence, and was presented by Senior Research Manager Aarushi Gupta at the ACM FAccT Conference held in Athens, Greece, in June 2025.