
Responding to Responsibility: Behind-The-Scenes of Building India’s First Responsible AI Fellowship
Why a ‘Responsible AI Lab’ and Why Now?
The use of AI to support development objectives and commercial services is likely to expand in India. Accordingly, investments in building the enabling conditions for AI development, and in supporting pilot projects across sectors, are growing. Recent data indicates that AI adoption in key industries across the country reached approximately 48% in the 2023-24 financial year, and around 75% of organisations across various sectors are considering incorporating AI into their operations within the next year.
While specific numbers on the rate of AI integration in the social sector are lacking, the need to incorporate AI in some shape or form has trickled down to organisations implementing and deploying on-ground interventions that deliver crucial services and information to marginalised communities in low-resource settings. The opportunities are immense, but so are the associated harms and risks, particularly for already marginalised or vulnerable communities who are misrepresented in, or excluded from, the data required to train AI models.
Digital Futures Lab (DFL) sees the Responsible AI Lab (RAIL) as an opportunity to provide a focused space for insight exchange, knowledge production, and collaboration-building, one that both shifts the narrative around AI innovation in India and guides its trajectory toward an ethic grounded in value, responsibility, and equity. The RAIL fellowship is the inaugural initiative out of the Lab, with a specific focus on programmatic interventions aimed at the most marginalised. It is a sincere and dedicated effort toward this larger vision.
In this blog post, as Project Coordinator of the RAIL Fellowship, I reflect on the last couple of months of conceptualising, planning, and curating the programming for the first iteration of the fellowship.
Going the Fellowship Route
The idea to consolidate around a fellowship model for this first initiative out of RAIL stemmed from our ongoing work with the Africa and Asia grantees of the Grand Challenges AI (GCAI) initiative, funded by the Bill & Melinda Gates Foundation. Over the past year, in partnership with Research ICT Africa, DFL has been working closely with the GCAI grantees to provide advisory services on specific ethical, methodological, technical, or gender concerns that have manifested throughout their AI journeys.
While the idea of RAIL as a platform/lab/hub had already taken shape, we gained valuable insights through our work with the GCAI grantees. We saw firsthand the impact of thoughtfully curated advisory services on some of the grantees, and our research revealed that AI was fast becoming a crucial tool for social sector organisations. We also recognised that no one in India was focusing specifically on Responsible AI. Given these factors, we decided to create a streamlined and focused initiative, and the fellowship model was the perfect fit.
Building a Cohort and Programmatic Themes
When we put out the call for applications, we grappled with a few questions:
- What were the sorts of organisations that would be interested in a program like this?
- How far along their AI journey would they be?
- Realistically, who could we actually cater to if everyone was at a different stage and working in a different sector?
What we understood early on was that we needed to create a cohort of organisations that
- could learn from each other (so they needed to be at different stages of their AI journeys, engaging with challenges and strengths that complemented each other), and
- were more or less aligned on why the ‘responsible’ in ‘Responsible AI’ needed to be taken seriously.
It was through our own discretion and preliminary conversations with selected organisations that we were able to pinpoint the overarching themes in the challenges and opportunities they were most concerned with (discussed in the next section). We realised pretty quickly that there is no exact science to planning a curated fellowship from scratch; it entails a lot of listening and parsing out meaning where it is not explicitly stated.
Where We Are Now
After virtually kicking off the RAIL fellowship in mid-July, we hosted an in-person workshop in Goa for representatives of the fourteen organisations that make up the fellowship’s inaugural cohort.
We envisioned the 2-day workshop as a space for RAIL fellow organisations to engage in concentrated and candid peer-exchange sessions on the themes most relevant to their current journeys with AI integration. Covering a range of sectors, including healthcare, agriculture, social welfare and advocacy, digital empowerment, education, law and justice, and community media, RAIL fellows discussed their respective approaches, questions, challenges, and tips on responsible data practices, deploying AI interventions at scale, communicating AI to their end users, and operations and planning around AI. One challenge that came up repeatedly across organisations was the complexity of defining, and therefore obtaining, “informed consent” for data collection and retention. Some offered that even with extra information about the intervention and the ways in which their data will be used, end users are no more or less willing to opt in or out of an intervention, as long as they receive the information or service they so critically require. Others offered that “informed consent” at one touchpoint in the intervention’s journey did not mean consent, informed or otherwise, at another.
We knew this going in, but it was very apparent that there was already a wealth of knowledge in the room; these fellows are experts on their own lived and programmatic experience. We were merely there to facilitate conversation in a way that brought this out. We started each peer-exchange session with interventions by two organisations that we felt could speak most closely to a particular theme, and then opened the floor for questions and for fellows to echo and build on each other’s experiences.
Conversations veered off on illuminating tangents, revealing that social sector organisations have been grappling with, and determinedly pursuing solutions to, these conundrums even outside of the AI question. What came out quite clearly is that, on an individual level, these organisations are doing important work, attempting to build the infrastructure and systems for their specific AI-based interventions within their organisations. But this may come, albeit understandably, at the expense of an ecosystem-level vision of what is needed for these individual interventions to work in tandem with each other. Identifying opportunities to build collective, ecosystem-level systems and protocols around AI integration and deployment, along with the stakeholders necessary to do so, will be an important step toward a less fragmented approach to delivering crucial services and information to India’s most vulnerable.
What Comes Next
To help exercise the muscle necessary for this more zoomed-out thinking, we engaged the group in a futuring activity considering the first- and second-order consequences of LLM-based chatbots in the healthcare sector. The group explored the resulting large- and small-scale implications for medical research, the healthcare ecosystem, and the lifespan of the average Indian.
The aim is to use the learnings and insights from this 2-day workshop to better curate the rest of the programming for the RAIL fellowship, which will continue until November 2024. This entails crafting the ten-part lecture series delivered by the RAIL expert panel, and collating insights from the workshop and post-lecture discussions for RAIL mentors so that they can directly address the challenges and opportunities voiced by the cohort in their one-on-one mentoring sessions and subsequent lectures. Lecture topics include “Addressing Bias in Different AI Models”, “UI for AI-based Interventions”, and “Ethical and Regulatory Considerations for Scaling AI-based Interventions”.
We also have a few knowledge products in the works that draw on existing resources, tools, and insights from the community and offer new ones to the larger ecosystem that is shaping AI innovation in India.
💁🏽 For any further queries, please write to RAIL’s Project Coordinator, Sasha John, at sasha@digitalfutures.in