Workshop on 'Understanding and mitigating overreliance on AI' at Project Tech4Dev’s AI Cohort Program in Bangalore
Event / Oct 2025

As the Knowledge Partner on Responsible AI for Project Tech4Dev’s AI Cohort Program, Research Associates Sasha John and Shivangi Sharan recently facilitated a workshop in Bangalore on a critical question: What happens when trust in AI tips into overreliance?

The Program is currently supporting seven NGOs in India as they design and scale responsible AI solutions, with our team facilitating thematic workshops and offering tailored advisory support that builds on our Responsible AI Fellowship.

At this workshop, participants were led through discussions and activities on automation bias, confirmation bias, and the tendency to overestimate AI explanations, highlighting why AI literacy and deliberate friction in design are key to building effective, responsible systems.

You can watch the recording of the workshop below:

Workshop Summary

The session unpacked a critical but often overlooked dimension of AI adoption – how users begin to accept AI outputs uncritically, substituting machine-generated advice for human judgment. Drawing on case studies from education, healthcare, and governance, participants explored how overreliance manifests when systems built to support decision-making start to replace independent thought and contextual reasoning.

Through scenario-based exercises, social sector organisations in the cohort examined how factors like AI literacy, domain expertise, trust, and task familiarity influence end-users' reliance on technology. Participants were asked to step into the roles of frontline workers, teachers, and entrepreneurs navigating AI-assisted tasks to assess when and why users might defer to AI. The exercise revealed that reliance decisions are shaped as much by structural pressures – time constraints, institutional mandates, or a lack of alternatives – as by individual confidence or skill.

The workshop introduced the concept of "positive friction" as a strategy to counteract uncritical dependence on AI. Rather than prioritising seamless experiences, participants learned to embed deliberate pauses – "cognitive speed bumps" and "cognitive enhancements" – that encourage reflection and critical engagement. From delayed chatbot responses to onboarding protocols that highlight AI's fallibility, such friction can help restore user agency and trust. For organisations deploying AI in education, health, or livelihoods, these interventions offer a path to ensuring that technology augments rather than replaces the human capacity for reasoning, empathy, and care.
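As a purely illustrative sketch of what such friction could look like in practice (this code is not from the workshop materials, and every name and timing in it is hypothetical), a chatbot wrapper might add a reflection prompt, a deliberate pause, and a fallibility notice around whatever model an organisation already uses:

```python
import time

REFLECTION_PROMPT = (
    "Before you read the AI's answer: what do you expect it to say, "
    "and what would make you doubt it?"
)

def answer_with_friction(question, generate_answer, delay_seconds=3.0):
    """Wrap any question-answering callable with 'positive friction':
    a reflection prompt, a deliberate pause, and a fallibility notice.

    `generate_answer` is a hypothetical stand-in for whatever model or
    API an organisation actually uses.
    """
    # Cognitive speed bump 1: nudge the user to form their own view first.
    print(REFLECTION_PROMPT)

    # Cognitive speed bump 2: a visible pause instead of an instant reply,
    # signalling that the answer deserves scrutiny rather than quick deferral.
    time.sleep(delay_seconds)

    answer = generate_answer(question)

    # Onboarding-style reminder that the system can be wrong.
    return answer + "\n\n(Note: this answer may be wrong; please verify key facts.)"

if __name__ == "__main__":
    reply = answer_with_friction(
        "What crop rotation suits sandy soil?",
        lambda q: "A placeholder model answer to: " + q,
        delay_seconds=2.0,
    )
    print(reply)
```

The design choice here mirrors the workshop's framing: the friction lives in the interaction layer, so it can be layered onto an existing deployment without changing the underlying model.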