‘SEBI for AI’: Sanjeev Sanyal proposes new regulatory architecture to prevent catastrophic AI failures
Citing last year’s massive outages triggered by a cloud service update, he highlighted how even static code failures can bring airports and ATMs to a standstill. “Now imagine a black-box AI system failing — with no human fully understanding what went wrong,” he said.

Jul 28, 2025 | Updated Jul 28, 2025, 1:59 PM IST
In a bold call for immediate global action on AI regulation, Sanjeev Sanyal — Member of the Prime Minister's Economic Advisory Council — advocated for a financial-markets-inspired oversight model to govern artificial intelligence.
Drawing parallels with the Securities and Exchange Board of India (SEBI), Sanyal proposed the creation of an independent regulatory body equipped with mechanisms like circuit breakers, explainability audits, and human-in-the-loop overrides to pre-empt AI-driven systemic risks.
Speaking on a podcast, Sanyal warned that AI has already become deeply embedded in everything from banking to cybersecurity. “You already are a victim of it,” he noted, pointing out how AI systems now interact autonomously across sectors. “It’ll all work perfectly until it blows up — and when it fails, the breakdown will be catastrophic.”
Citing last year’s massive outages triggered by a cloud service update, he highlighted how even static code failures can bring airports and ATMs to a standstill. “Now imagine a black-box AI system failing — with no human fully understanding what went wrong,” he added.
Rejecting both the laissez-faire US approach of relying on post-facto lawsuits and the bureaucratic EU model of categorising AI risk levels, Sanyal called for a third path rooted in complexity theory, which treats unpredictability as a feature, not a bug, of adaptive systems.
“Do I need to know where a share price will go in the future to regulate the stock market? No. So why assume we can predict AI’s trajectory?” he said, arguing for predictability-agnostic regulation. His proposed tools, illustrated in the sketch after this list, include:
- Manual override circuit breakers
- Mandatory explainability audits
- Siloing AI systems across sectors (e.g., separating financial AI from infrastructure AI)
- Firewalls between AI domains to prevent cascading failures
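To make the circuit-breaker idea concrete, here is a minimal Python sketch of how an automated decision pipeline might trip into a human-in-the-loop override. The thresholds, the CircuitBreaker class, and the decide function are hypothetical illustrations of the concept, not anything Sanyal or any regulator has specified.

```python
# Hypothetical sketch of a circuit breaker with a human-in-the-loop
# override, in the spirit of the tools described above. All names and
# thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class CircuitBreaker:
    """Halts automated decisions once anomalies cross a threshold."""
    max_anomalies: int = 3    # hypothetical trip threshold
    anomaly_count: int = 0
    tripped: bool = False

    def record(self, anomalous: bool) -> None:
        if anomalous:
            self.anomaly_count += 1
        if self.anomaly_count >= self.max_anomalies:
            self.tripped = True  # circuit opens: no more automated calls

    def reset_by_human(self, operator_id: str) -> None:
        # Only an explicit human action closes the circuit again.
        print(f"Breaker reset by operator {operator_id}")
        self.anomaly_count = 0
        self.tripped = False


def decide(model_score: float, breaker: CircuitBreaker) -> str:
    """Route a decision through the breaker; escalate to a human if open."""
    anomalous = model_score < 0.2 or model_score > 0.98  # illustrative check
    breaker.record(anomalous)
    if breaker.tripped:
        return "ESCALATE_TO_HUMAN"  # manual override path
    return "APPROVE" if model_score >= 0.5 else "DENY"


if __name__ == "__main__":
    breaker = CircuitBreaker()
    for score in [0.99, 0.99, 0.01, 0.7]:  # two spikes, one crash, one normal
        print(decide(score, breaker))
    breaker.reset_by_human("ops-42")
```

Once the breaker trips, even a perfectly normal score is escalated rather than auto-approved, which captures the core point: the system fails closed, to a human, instead of failing silently.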
Sanyal warned against building an “Internet of AI Things” where a single failure could disrupt transport, power, and finance all at once.
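A firewall between AI domains can be pictured in equally simple terms: a gate that refuses cross-sector calls so a failure in one silo cannot cascade into another. The allowlist below is an assumed illustration of the principle, not a proposed design.

```python
# Hypothetical sketch of a "firewall" between AI domains: an allowlist
# gate that blocks cross-sector calls. Domain names and the policy
# itself are illustrative only.
ALLOWED_CALLS = {
    ("finance", "finance"),
    ("infrastructure", "infrastructure"),
    # No (finance, infrastructure) pair, so such calls are refused.
}


def gate_call(caller_domain: str, target_domain: str) -> bool:
    """Return True only if the caller may invoke the target's AI system."""
    return (caller_domain, target_domain) in ALLOWED_CALLS


print(gate_call("finance", "finance"))         # True: same silo
print(gate_call("finance", "infrastructure"))  # False: blocked at the firewall
```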
He also emphasised India’s need to act independently: “Even foreign companies operating in India should be required to explain how their AI systems function.”
A global summit on AI is expected to be hosted in India next year. Sanyal hopes it will reignite serious debate: “Right now, everyone’s dazzled by LLMs. AI regulation has been completely sidelined. We must act before a catastrophic failure forces our hand.”
He concluded by urging policymakers, especially in the few countries leading AI development, to engage in meaningful dialogue — before it’s too late.
