Labelling Illegal AI Content is Welcome, But Automated Labelling Could Lead to Censorship
India has tightened its content moderation rules, requiring platforms such as Meta, YouTube and X to remove unlawful material within three hours of notification, down from the earlier 36-hour window. The new guidelines, which also cover AI-generated content such as deepfakes, introduce mandatory labelling and traceability requirements to improve transparency.
While these changes aim to address the risks posed by deceptive and harmful content, experts warn they may come with trade-offs.
Research associate Anushka Jain notes:
“Companies are already struggling with the 36-hour deadline because the process involves human oversight. If it gets completely automated, there is a high risk that it will lead to censoring of content.”
With so little time for review, platforms may increasingly rely on automated systems, raising concerns about over-removal, reduced human judgment, and the broader implications for free expression online.

Read the original BBC article, India orders social media firms to remove unlawful content within three hours, here.