India charts its own course on AI regulation
Urvashi recently spoke to DW for their report on India's latest AI guidelines, an approach distinct from those of the EU, China and the US. The guidelines come as New Delhi prepares to host the IndiaAI summit at the start of 2026. They advocate using existing legal frameworks — like the Information Technology Act and the Digital Personal Data Protection Act — to handle emerging risks such as deepfakes and unauthorised data use.
Read some excerpts from the report below:
Working within existing laws
Urvashi Aneja pointed out that the guidelines do not flesh out how India's AI objectives will be achieved.
"...how is data usability and sharing going to be enhanced? Improving access to public sector data to spur innovation has been a long-standing goal but data quality continues to be poor".
She also said there is a patchy understanding of the risks involved.
This narrow view of AI risks means the guidelines are silent on issues such as labour displacement and psychological and environmental harms. Equally glaring is the absence of any discussion of market concentration.
A core element of the guidelines is the "Do No Harm" principle, which officials hope will manage AI risks effectively without stifling technological growth.
Another key element is content authentication, intended to help users distinguish authentic content from AI-generated or modified material.
The government has already proposed amendments to existing IT rules that require social media platforms like YouTube and Instagram to visibly label AI-generated or synthetically modified content. Visual labels are to cover at least 10% of the display area, while audio content must include audible labels for at least 10% of the duration.
These measures aim to enhance transparency and help users distinguish authentic from AI-manipulated content, making them less susceptible to misinformation and deepfakes.