Responsible AI in Practice
May 2025

Shivranjana Rathore

Responsible AI (RAI) is, broadly, a framework intended to bridge the gap between high-level ethical theory and practical, real-world AI deployment. It rests on principles such as transparency, safety, and fairness. Today, however, RAI risks dilution: popularised by the AI industry itself, its vocabulary has been increasingly co-opted by Big Tech to ethics-wash AI products, policies, and innovation agendas.

At Digital Futures Lab, we see RAI differently: as a contextual and action-driven practice. A central question in actioning Responsible AI is: Is AI even the right solution for this problem?

Key Learnings from Our Work

Through our ongoing work and in-depth conversations with practitioners and researchers, we surface key insights that reveal both the possibilities and pitfalls of Responsible AI today:

1. Responsible AI begins by questioning whether AI is needed at all.

Not every problem needs an AI solution. AI-first approaches often worsen access, sustainability, or care outcomes, especially when driven by funding trends or scale pressures. Responsible practice starts by asking: Is AI truly the right solution to a problem? Sometimes, the most responsible outcome after scoping is recognising that a low-tech or non-AI solution is more equitable or that an AI intervention, while technically possible, would deepen existing harms.

2. Principles aren't enough: responsibility in AI needs clear, operational pathways.

Without concrete steps, principles like fairness and transparency risk becoming checkbox exercises or industry grandstanding. As Goodhart's Law warns, when a measure becomes a target, it ceases to be a good measure; a principle reduced to a compliance target loses its meaning. RAI must be embedded across the AI lifecycle, from procurement systems and documentation practices to user testing, extending well beyond the constraints of mere policy statements.

3. Big Tech's embrace of RAI has helped popularise it, but risks ethics-washing.

Terms like “trustworthy AI” are often used without enforceable commitments. Big Tech embraces a language of responsibility to self-regulate and manage reputation, rather than driving genuine accountability. With no external checks, companies define and audit their own ethics, creating a conflict of interest that turns responsibility into branding and sidelines the structural changes real accountability demands.

4. Responsibility without infrastructure is responsibility without power.

Social impact organisations often want to act responsibly but face barriers: limited AI literacy, short funding cycles, and unclear regulations. Good intentions alone are not enough. Without time, resources, and policy support to design and adapt responsibly, RAI remains aspirational rather than actionable.

5. Responsible AI must be user-centred and context-grounded, not abstract.

Designers must consider purpose, data, interface, and unintended effects. Building AI responsibly means centring communities, positionalities, and downstream effects throughout design and evaluation, rather than treating context as an afterthought.

6. Responsible AI must tackle climate, labour, and systemic harms.

Responsible AI should account not just for bias or safety, but also for the environmental footprint of AI systems, the rights and well-being of gig workers, the impact on labour markets, and the normalisation of surveillance. Advocating for RAI means advocating for AI that respects all stakeholders and considers the full spectrum of ecological and social costs.

7. Voluntary principles won't hold; regulation must create real accountability.

In India and globally, voluntary ethics have shown their limits. What is needed is context-aware, nuanced regulation that leaves room for innovation and serves people and the planet, not just companies' bottom lines. Otherwise, the burden of ethics falls unfairly on users and practitioners, while powerful actors evade scrutiny.

For more such insights, take a look at our flagship projects on Responsible AI.

To receive these monthly reflections and insights directly in your inbox, subscribe to our newsletter.
