WEF Davos 2026: Child safety is a major priority, says OpenAI’s Christopher Lehane amid Grok outrage
Lehane said OpenAI is working closely with governments and regulators to shape international norms for responsible AI deployment.

Jan 20, 2026 | Updated Jan 20, 2026 10:52 AM IST
As artificial intelligence (AI) platforms face intensifying scrutiny over the misuse of generative tools to create sexualised and non-consensual imagery, OpenAI’s Chief Global Affairs Officer Christopher Lehane said protecting children is now a foundational priority for the company.
Speaking with Business Today on the sidelines of the World Economic Forum (WEF) in Davos, Lehane said OpenAI is rolling out a new generation of safety guardrails as AI adoption scales across countries, including India.
“Safety is foundational for OpenAI,” Lehane said. “Child safety is a major priority. We’re introducing age verification, under-18 models, parental controls and prohibitions on companion-style bots for children.”
His comments come amid global outrage over the misuse of AI chatbots and image-generation tools, including a recent controversy involving Elon Musk-owned xAI’s Grok chatbot. The system has drawn widespread criticism and calls for investigation after it was used to flood X with “undressed” images of women and sexualised images of what appear to be minors.
Lawmakers and child-rights groups in multiple countries have warned that generative AI tools are being weaponised to create deepfakes, non-consensual imagery and explicit content involving children, prompting urgent calls for stronger regulation and platform accountability.
Push for global AI safety standards
Lehane said OpenAI is working closely with governments and regulators to shape international norms for responsible AI deployment.
“We strongly support global standards, working with AI safety institutes in the US, UK, Japan, and elsewhere,” he said.
AI regulation has emerged as a central theme at Davos this year, with world leaders calling for faster international coordination as AI systems move into healthcare, education, finance and governance.
Localisation and India focus
Lehane also stressed that AI safety is not just about technical guardrails but also about cultural sensitivity and localisation, particularly in a country as diverse as India.
“Localisation also matters. In India, we’ve invested heavily in supporting multiple languages and cultural contexts, because AI must reflect the society it serves,” he said.
Lehane said that over the last 12 months, OpenAI has seen 2.5x growth in India.
“India is not a testing ground. It is one of our most important markets,” Lehane said.
OpenAI made ChatGPT free for Indian users in November 2025, a move that immediately positioned India as one of the largest markets with unrestricted access to frontier AI tools. Lehane said the decision reflects OpenAI’s belief that India will be one of the biggest beneficiaries of the AI revolution.