From Principles to Practice: Responsible AI and Healthcare in India
AI is transforming healthcare in India, with applications ranging from clinical decision support and diagnostics to drug discovery and public health interventions. AI-enabled tools can relieve overburdened healthcare systems by improving patient outcomes while reducing costs and administrative burdens.
However, integration of AI into healthcare in India comes with significant risks, including:
- Bias in AI models, leading to unfair or unsafe outcomes.
- Lack of transparency, making it difficult to understand how AI decisions are made.
- Data privacy concerns, particularly around patient information.
- Accountability gaps, with no clear mechanisms to address AI failures.
In addition, AI risks vary across different use cases and stakeholders, making it essential to develop a structured, context-specific approach to responsible AI governance. Existing guidelines, such as those from the Indian Council of Medical Research (ICMR), provide high-level ethical principles but lack clear, practical and actionable guidance on risk assessment and mitigation.
Through this project, we have built a granular, adaptable risk assessment framework and suggested mitigation strategies that can be integrated into the Indian healthcare system. We did this by:
- Mapping AI risks across different healthcare use cases and subsystems
- Consulting key stakeholders, including clinicians, policymakers, AI developers, and public health institutions
- Developing a structured, actionable framework to guide responsible AI deployment
- Providing best practices and risk mitigation strategies for healthcare institutions and AI developers
As part of this project, we developed a practical risk assessment and mitigation tool to help clinicians and healthcare organisations in India responsibly procure and deploy AI-based screening devices:
AI Risk Assessment and Mitigation Tool (Please make a copy before use.)
We are also building a Developer's Handbook for AI-based screening devices to demonstrate responsible AI (RAI) in practice. Please reach out to harleen@digitalfutureslab.in or shivangi@digitalfutureslab.in for more information.
AI and Healthcare in India
This project aims to develop a context-specific, practice-oriented risk assessment framework for AI in Indian healthcare. In collaboration with the Centre for Responsible AI (CeRAI), IIT-Madras, we seek to create a structured approach that helps medical institutions and professionals navigate AI risks effectively.
Bridging the gap between high-level AI governance principles and practical implementation, this initiative will contribute to the development of a safer, more accountable, and inclusive AI-driven healthcare ecosystem in India.
This is a 12-month project, starting in February 2025. Findings and outcomes will be shared over time.
Project Lead: Urvashi Aneja
Research Manager: Harleen Kaur
Research Associates: Shivangi Sharan; Aayushi Vishnoi
Research Support: Aarushi Gupta
Communications: Shivranjana Rathore