Panel: Soundmind: Companion AI & the Future of Care
Held on October 29, 2025, this is the third public panel in the Human in the Loop series.
Building on the comic Soundmind, which follows Priya, a visually impaired food vlogger whose AI-enabled assistive tech expands her autonomy only to eventually commandeer her every move, thought, and feeling, the panel explored the future of care and companion AI, asking what happens when technology designed to reduce friction deepens dependency and encourages withdrawal.
The conversation was moderated by Urvashi Aneja and the panellists were:
- Arjun Kapoor, Centre for Mental Health Law & Policy, ILS, Pune
- Bishakha Datta, Point of View
- Natasha Joshi, Rohini Nilekani Philanthropies
- Nighat Dad, Digital Rights Foundation
Watch the recording of the panel here and read the summary below:
Panel Summary
Soundmind: Companion AI and the Future of Care explored how emerging AI systems are mediating emotional and psychological well-being. The discussion sought to unpack the ethics and politics of care technologies. Panellists reflected on how "AI companions" and mental health bots reshape intimacy, privacy, and the boundaries of professional care. While these tools promise access and companionship, they also reproduce cultural biases and extractive data practices. The conversation highlighted the need for collective frameworks of accountability that centre empathy, transparency, and community support, reminding us that meaningful care cannot be outsourced to code.
Urvashi Aneja opened the discussion by situating AI companions within a wider ecosystem of care work: traditionally undervalued, often feminised, and increasingly technologised. She set the tone of the conversation by asking what happens when emotional labour is delegated to machines, and who defines what "care" means in this context.
Natasha Joshi approached the question through the lens of digital ethics and design. She noted that AI companions blur distinctions between tool and relationship, producing new forms of dependency that are hard to regulate. Natasha cautioned against framing AI companionship as a solution to loneliness, arguing that loneliness is a structural condition exacerbated by social fragmentation, not a technological gap. She called for governance models that foreground human dignity, consent, and the right to disconnect.
Arjun Kapoor, speaking from his experience as a mental health practitioner, explored the tension between accessibility and authenticity in AI-mediated care. He pointed out that while chatbots and virtual companions can reduce stigma and make first-level mental health support more widely available, they risk oversimplifying complex emotional realities into pattern recognition and affective mimicry. Arjun emphasised that therapeutic care depends on attunement, an inherently human process of co-regulation that machines can simulate but not embody.
Bishakha Datta widened the lens to include cultural and gendered politics of care technologies. She argued that many AI-driven wellness apps and bots reproduce patriarchal and heteronormative biases in their design, often deciding what constitutes "healthy" emotion or acceptable intimacy. Bishakha highlighted the need to rethink care not as a private transaction between user and app, but as a collective and relational act grounded in trust and interdependence.
Nighat Dad brought in the perspective of digital rights and cross-border governance. Drawing from her work representing the Global South on Meta’s Oversight Board as well as on-ground work with communities on TFGBV, she underscored that AI mental health tools often operate in regulatory grey zones, collecting sensitive data with little oversight. Nighat called for robust privacy protections and region-specific frameworks that safeguard users' autonomy, especially in contexts where legal literacy and digital safety mechanisms are weak.
Across the conversation, a shared insight emerged: AI can simulate empathy, but it cannot replace it. The panel agreed that while AI companions might complement human care, they cannot resolve the systemic inequities and social isolation that underlie contemporary mental health crises.
Closing the session, Urvashi reflected on Soundmind as both a provocation and a prompt for public imagination. The story, like the discussion, invites us to ask not only what AI can do for care but what kind of care we want to preserve in an algorithmic world.