Should India have a child safety solutions observatory? Is it high time we devised a youth advisory council? Is there a need to look at child well-being beyond just child safety? These were some of the pertinent questions raised at the session titled ‘Safeguarding Children in India’s AI Future: Towards Child-Centric AI Policy and Governance’ on Tuesday at the AI Impact Summit being held in the national capital.

“There is a need for a nuanced approach when it comes to child well-being and artificial intelligence (AI). We cannot escape the fact that AI is a paradoxical entity with a fair set of limitations. Even though we know it has many harms, we choose to integrate it with a child’s education and security. Thus, there is an urgent need to create benchmarks in this field,” said Maya Shermon, Senior Tech and Innovation Adviser, Embassy of Israel.

“In Israel, we have a multi-stakeholder unit called ‘Unit 105’, comprising the police, the government and private companies, to deal with crimes against children, especially in online spaces. We need to create fast cycles of change if we want to deal with cybercrime against children,” she added.

Uthara Ganesh from Snapchat batted for building a repository of best solutions and practices to safeguard children on digital platforms. “India needs to wake up to the idea of building a child safety solutions observatory, which can set the narrative for the Global South. Today, many kids say they would rather discuss their problems with an AI chatbot than with their peers or parents, for fear of a judgmental response. So, we need to bring the voices of the youth into the AI ecosystem,” she said.

Atish Gonsalves from Lego Education, however, underlined the need to teach AI literacy to kids. “We clearly state to our young learners that AI is not a friend, it is just a tool. Generative AI can be made safer for kids. We do not encourage building any unhealthy emotional bonds with any kind of technology. Child safety is non-negotiable,” he said.

Deliberating on the need for strict laws to deal with AI-generated crime, advocate NS Nappinai from Cyber Saathi said a law that cannot be enforced should not be enacted. “Many guardrails exist in the digital domain, but many times the government or policymakers have to sit on the fence and decide when to intervene. This is simply because a law that cannot be enforced should not be enacted. Law is never an impediment; it should rather help you see the digital space as a safe space. At present, India does not have AI laws, but we have regulations, and we still have a long way to go,” she added.


