The Centre on Tuesday issued amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, requiring digital platforms to clearly label AI-generated content, including deepfake videos, synthetic material and altered visuals.

According to the Gazette notification dated February 10, social media platforms must now remove flagged AI-generated, deepfake or synthetic content within three hours of receiving a complaint from a competent authority or by court order.

The new regulation, notified by the Ministry of Electronics and Information Technology (MeitY), will come into effect from February 20.

Under the revised rules, the timeframe for resolving grievances has been shortened from 15 days to 7 days. For complaints requiring urgent action, intermediaries must respond within 36 hours, reduced from the earlier 72 hours. Additionally, platforms are required to act on specified content removal complaints within 2 hours, compared to the previous 24-hour window.

The notification states: “A significant social media intermediary which enables displaying, uploading or publishing any information on its computer resource shall, prior to such display, uploading or publication, require users to declare whether such information is synthetically generated information.”

Where technically possible, such content must also carry permanent metadata or provenance tools, including a unique identifier, to help identify the computer resource used to create or modify it. Intermediaries are barred from allowing these labels or metadata to be removed, hidden or altered.

The notification defines “synthetically generated information” as audio, visual or audio-visual content that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a way that makes it appear real, authentic or true, and portrays any individual or event such that it may be perceived as indistinguishable from a real person or an actual event.

According to the Ministry notification, social media companies will have to deploy automated tools to detect and prevent the circulation of illegal, sexually exploitative or deceptive AI-generated content.
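
The notification does not prescribe a technical standard for the "permanent metadata" or "unique identifier" requirement, so implementations will vary by platform. As a rough illustration only, the sketch below shows one way a platform might attach a tamper-evident provenance record to a declared synthetic upload; every field name, the signing scheme and the key handling here are assumptions for the example, not anything specified in the rules.

```python
# Hypothetical sketch of a "permanent metadata / unique identifier" record
# for synthetically generated content. Field names and the HMAC-based
# signing scheme are illustrative assumptions, not mandated by the rules.
import hashlib
import hmac
import json
import uuid
from datetime import datetime, timezone

PLATFORM_SIGNING_KEY = b"replace-with-platform-secret"  # assumption: a platform-held secret key


def build_provenance_record(content: bytes, generator_tool: str) -> dict:
    """Attach a unique identifier and a tamper-evident label to synthetic content."""
    record = {
        "synthetic": True,                               # the user's declaration at upload
        "unique_id": str(uuid.uuid4()),                  # the rule's "unique identifier"
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator_tool": generator_tool,                # computer resource used to create/modify it
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    # Sign the record so that removing, hiding or editing any field is
    # detectable, reflecting the bar on labels being "removed, hidden or altered".
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance_record(record: dict) -> bool:
    """Return True only if the record is intact and unmodified."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)


if __name__ == "__main__":
    rec = build_provenance_record(b"<video bytes>", generator_tool="example-image-model")
    assert verify_provenance_record(rec)
    rec["synthetic"] = False  # simulated tampering with the label
    assert not verify_provenance_record(rec)
    print("record verifies only while the label is untouched")
```

Whether such a record would live inside the media file itself (as embedded metadata) or alongside it in the platform's database is a design choice the notification leaves open; the "where technically possible" qualifier suggests regulators anticipate both approaches.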


