70% of organisations are actively tracking AI regulations and preparing for compliance. 
In recent months, enterprises have been rapidly integrating artificial intelligence (AI) tools and technologies to empower the workforce. However, as adoption accelerates, so do the cybersecurity risks associated with AI. While businesses are well aware of these threats, a significant gap remains between policy and real-world implementation.
According to the Sprinto CISO Pulse Check 2026 report, nearly one in three organisations has been a victim of a major AI-related security incident in the past year, highlighting a critical "execution gap" in adopting frontier technology.
AI governance lags, despite awareness
Among organisations affected by security breaches, many have identified threats such as shadow AI usage, model inversion, and data poisoning. At the same time, the study revealed that over 70% of organisations are actively tracking AI regulations and preparing for compliance.
However, while companies treat AI-related threats as distinct enough to require dedicated policies, monitoring, and controls, they often fail to integrate them into their overall security programmes. Implementation remains the area where most organisations lag.
Raghuveer Kancherla, Co-founder of Sprinto, said, “AI has moved faster than most organisations were prepared for. The companies that win in 2026 will not be the ones adopting AI fastest, but the ones building trust, control, and resilience at the same speed.”
Investment rises as governance evolves
While there’s a huge gap in proper execution, companies are beginning to act. The study highlighted that around 69% of organisations have already set budgets for AI risk management in 2026, with another 17% planning to follow.
Furthermore, the report recommends that organisations prioritise implementing technical controls, conducting AI risk assessments, and training employees on safe AI usage.