
DFL Newsletter Issue #23: A New Themed Section, Our New Website, Provocations on AI Sovereignty & More!
Spring is here, and DFL is in full bloom!
We have been quoted in media pieces on AI, have new work out on ideal feminist digital futures and the impact of AI’s growth on mineral extraction and waste generation, and are hosting an event in April to launch our playbook on Responsible AI!
We have also introduced a new themed section to our monthly newsletter! Inspired by Women’s History Month, this March we’re looking at the importance of adopting a holistic approach to gender-based considerations in tech policy and research.
Read more below!
research 📑
DFL is excited to publish its latest report, ‘Provocations on AI Sovereignty: Confronting Complexities & Shaping Future Strategies’, produced with support from Samagata Foundation. With AI sovereignty becoming a global priority, nations are shaping policies to secure control over AI infrastructure, data, and innovation. This report, developed from a workshop with 18 of India’s leading thinkers and practitioners, unpacks the debates on AI sovereignty, its stakes for India, and the broader policy and governance challenges it raises.

In partnership with Point of View, DFL published ‘Feminist Futures Now’ - a report documenting feminist reimaginations of digital rights from an ‘unconference’ held in August 2024. The report brings to life five fictional narratives rooted in real-world experiences of marginalised communities—stories of privacy, access, violence & safety in digital spaces that challenge the mainstream digital rights discourse. Feminist Futures Now highlights the need for inclusive policies on privacy, AI ethics, online safety, and free expression towards gender-equitable digital futures.

In the 6th edition of the Code Green newsletter, we cover the material aspects of mineral extraction, manufacturing, and waste generation as part of the AI lifecycle. Given broader narratives around the geopolitics of mineral supply chains and the long-term environmental and local impacts of extraction and processing, policymakers, investors, and the public must consider the connection between AI’s growth and its reliance on critical minerals. Even as renewable energy is touted as a game-changer for powering the next generation of AI, that transition will not be without costs. Read how here.

In the 6th episode of the Code Green podcast, we dig into the raw materials powering AI—from rare earth mining to data centres sucking up water in drought-prone regions. Experts Tom Özden-Schilling and Tamara Kneese reveal the true cost of AI’s rapid expansion and why the conversation on sustainability must move beyond carbon footprints to the messy realities of global supply chains.
events 🎤
DFL was at RightsCon in Taipei this year! Sr. Research Associate Dona Mathew organised and moderated a panel on navigating GeoAI for climate action. Together, the speakers explored how geospatial data and AI can be applied to climate action initiatives in LMICs, strategies for risk mitigation, and the importance of strategic collaboration for safe and equitable GeoAI. Research Associate Anushka Jain also co-organised a session with Point of View to understand how sex workers in South Asian countries exist online.

Research Associate Sasha John was at Project Tech4Dev’s ‘AI for Global Development’ Sprint in Bangalore in the first week of March, where she spoke about ‘Speculative Friction’ - a project harnessing the powers of anticipatory foresight and creative storytelling to socialise nuanced narratives on the unintended impacts of GenAI in India. In collaboration with Quicksand and supported by Rohini Nilekani Philanthropies, the project will result in seven long-form illustrated stories, with accompanying commentary, to provoke critical thinking on what our futures might look like if GenAI innovation continues in its current form.

media ✍🏽
- DFL’s Founder and Director, Urvashi Aneja, co-edited the AI and Majority World section of the newly launched ‘Oxford Intersections on AI in Society’, alongside Fola Adeleke, Leah Davina Junck, and Rachel Adams, with Philipp Hacker as editor-in-chief. This collection brings together global experts exploring themes from AI and decoloniality to African perspectives on AI ethics.
- Sr. Research Manager Aarushi Gupta was quoted in this BOOM Live piece on the importance of fixing biases in AI training datasets, stressing the need to understand types of biases and how to address them accordingly.
- Anushka was quoted in The Quint, in a piece on malicious actors using GenAI to spread viral celebrity misinformation. She points out that while misinformation itself is not new, the barriers to creating material to spread it have been reduced considerably, thanks to GenAI.
- In Medianama’s latest piece on the risks of AI in Indian courts, Dona remarks that the use of GenAI in the legal space in India is inevitable given the overwhelming workload and pressure to bring in revenue - but that human oversight should not be taken lightly.
- Urvashi was interviewed by ENSURED on the costs and opportunities of AI for climate action. She talks about our AI + Climate Futures in Asia and Code Green projects, the need for policymakers to be more discerning about advocating AI as a tool for climate action, the criticality of foundational science in decision-making, and bottom-up approaches to data collection.
March Spotlight: Gender and Technology 🔦
Technology isn’t neutral—it reflects and reinforces power structures. In India, gender intersects with caste, class, and geography, yet is treated as an afterthought rather than an essential component of policymaking, governance, and product design. Tech policy and governance must be gender-responsive to prevent exclusion. At Digital Futures Lab, our research reveals systemic gendered impacts, demanding deeper integration beyond tokenism or siloed approaches.
Key Learnings from Our Work
1. Access to technology has improved for women, but does not guarantee autonomy.
While more women in India can now access mobile phones, this does not always translate to digital agency. Many rely on male family members to set up accounts, make online transactions, or decide what content they can engage with. Access mediated in this way also brings risks of surveillance and monitoring. Digital inclusion must go beyond device ownership to ensure meaningful participation and autonomy.
2. AI needs context to avoid reinforcing gender bias.
Our research on Indian language large language models (LLMs) found that AI systems trained on biased datasets amplify stereotypes, misrepresent gender roles, and exclude non-binary identities. Without deliberate interventions in dataset curation, model evaluation, and structural awareness, AI will continue to reproduce these biases.
3. More digital access, more digital harm.
Women—especially those in public-facing roles—face relentless harassment and deepfake abuse. Existing legal frameworks struggle to keep pace with digital harms, leaving survivors with little recourse. Stronger protections, platform accountability, and survivor-centred solutions are critical to moving beyond the conventional protectionist approach to safety, which often turns into another form of surveillance.
4. AI tools can help with climate action and public health but must consider gender risks.
AI is shaping climate action and public services but often ignores gendered access. Tools for conservation, disaster response, and agriculture assume a “neutral” user, overlooking vulnerabilities. For example, AI wildlife cameras have recorded women in forested areas, leading to privacy violations. Gender-aware AI design is crucial for ethical deployment.
5. AI is changing the workplace, but without intervention, it risks pushing women out.
GenAI's adoption threatens entry-level tasks like secretarial work, often held by women, risking a decline in women's workforce participation. Meanwhile, care and labour-intensive roles may gain value but remain undervalued and exploited. Addressing these shifts is crucial to ensuring gender equity in an evolving job market.
Here are some of the key projects that have shaped our understanding so far:
- Feminist Futures Now – Highlighting inclusive policies on privacy, AI ethics, online safety, and free expression toward gender-equitable digital futures.
- From Code to Consequence: Interrogating Gender Biases in LLMs in India – Researching gender bias in AI development and deployment, testing multiple Indian language LLMs, and developing a user-testing guide for developers.
- Responsible AI Fellowship – India’s first capacity-strengthening programme on Responsible AI, mentoring social impact organisations on integrating responsible AI principles.
- AI + Climate Futures in Asia & Code Green - Cutting through the hype around AI & Climate Action in Asia, surfacing the latest scientific research & expert insights.
- Future of Work - Unpacking the impact of rapid GenAI adoption on India’s future of work to identify key levers of change and policy pathways towards inclusive labour futures.