‘Earth cannot afford today’s AI...’: Sridhar Vembu hails Sarvam AI’s low-compute path
Zoho founder Sridhar Vembu says the Bengaluru-based startup shows that world-class AI can be built with a far smaller energy and compute footprint.

Feb 14, 2026 | Updated Feb 14, 2026 5:01 PM IST
A debate over the true cost of artificial intelligence is gaining momentum after Sridhar Vembu, founder and former CEO of Zoho, pointed to Sarvam AI as proof that advanced AI can be built with far lower computing and energy intensity.
In a recent post on X (formerly Twitter), Vembu argued that Sarvam AI has demonstrated that “world-class AI can be done much more affordably and sustainably with much lower energy and compute footprint,” adding that future software code-generation systems must prioritise efficiency because “the earth cannot afford today’s AI energy footprint.”
His comments come amid a broader global reassessment of AI’s infrastructure demands, particularly as hyperscalers expand power-hungry data centres to train and deploy large models.
India’s 'sovereign AI' push gains traction
Bengaluru-based Sarvam AI has emerged as a key player in India’s ambition to transition from being merely an AI consumer to an AI builder, developing foundational models tailored to Indian languages and governance use cases.
The startup grew out of the AI4Bharat research ecosystem and has attracted early-stage funding to scale research, model development, and deployment. Its focus has been on building domain-specific AI systems optimised for India’s linguistic diversity and public-service workflows rather than replicating massive general-purpose global models.
Among its recent developments are tools designed for:
- Multilingual document understanding and OCR for complex Indian-language records.
- Speech and text models capable of supporting multiple regional languages.
- AI infrastructure tuned for lower compute requirements while maintaining performance for targeted applications.
These systems are positioned to address domestic administrative and linguistic challenges while reducing reliance on foreign AI platforms, aligning with a broader national strategy to build indigenous digital infrastructure.
Analysts say such localisation (optimising models for specific languages and use cases rather than building ever-larger general systems) can significantly reduce computational overhead, reinforcing Vembu’s thesis of “lean AI.”
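The scale of the potential savings can be illustrated with a rough back-of-envelope calculation. The sketch below uses the widely cited approximation that training a transformer costs about 6 × N × D floating-point operations (N = parameter count, D = training tokens); the model sizes are illustrative assumptions, not Sarvam AI’s actual figures.

```python
# Back-of-envelope comparison of training compute for a small,
# language-specialised model versus a large general-purpose one,
# using the common ~6 * N * D FLOPs approximation
# (N = parameters, D = training tokens).
# All figures below are illustrative assumptions, not Sarvam AI's
# actual model sizes or training budgets.

def training_flops(params: float, tokens: float) -> float:
    """Approximate training FLOPs via the 6 * N * D rule of thumb."""
    return 6 * params * tokens

# Hypothetical specialised Indic-language model: 2B params, 100B tokens.
lean = training_flops(2e9, 100e9)

# Hypothetical large general-purpose model: 70B params, 2T tokens.
big = training_flops(70e9, 2e12)

print(f"Specialised model: {lean:.2e} FLOPs")
print(f"General model:     {big:.2e} FLOPs")
print(f"Compute ratio:     {big / lean:.0f}x")
```

On these assumed sizes, the specialised model needs roughly 700 times less training compute, which is the kind of gap the “lean AI” argument rests on.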
Other side of the AI boom: Looming energy surge
While startups like Sarvam argue for efficiency-first architectures, global trends show AI adoption rapidly increasing electricity consumption.
Data-centre electricity use worldwide is projected to nearly double by 2030 as companies scale infrastructure to support generative AI, cloud computing, and high-performance training clusters. AI-optimised servers are expected to account for a large share of this increase, driven by specialised chips running continuously at high utilisation.
This surge is already reshaping power markets, with utilities and grid operators in several countries preparing for unprecedented demand from AI-driven data centres and negotiating dedicated energy supply arrangements for large technology firms.
Why AI consumes so much power
Unlike traditional software workloads, modern AI systems require:
- Massive parallel computing clusters for training large models.
- High-density accelerator chips operating continuously.
- Energy-intensive cooling systems to manage heat from GPUs.
- Frequent retraining and inference cycles that multiply compute usage at scale (a rough estimate of how these factors compound follows this list).
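Taken together, these factors make energy use roughly multiplicative. The sketch below (with purely illustrative numbers, not figures from any specific deployment) estimates the facility-level energy of a single training run from accelerator count, per-chip power, utilisation, duration, and a PUE factor covering cooling and facility overhead.

```python
# Rough sketch of why large training runs are energy-intensive:
# facility energy ~ GPUs * per-GPU power * utilisation * hours,
# scaled by PUE (power usage effectiveness) to account for cooling
# and other facility overhead. All numbers are illustrative
# assumptions, not measurements from any real cluster.

def training_energy_mwh(num_gpus: int,
                        gpu_watts: float,
                        utilisation: float,
                        hours: float,
                        pue: float = 1.3) -> float:
    """Estimate facility energy in megawatt-hours for one training run."""
    it_energy_wh = num_gpus * gpu_watts * utilisation * hours
    return it_energy_wh * pue / 1e6  # Wh -> MWh

# Hypothetical run: 10,000 accelerators at 700 W each, 90% utilisation,
# for 30 days (720 hours).
energy = training_energy_mwh(10_000, 700.0, 0.9, 720)
print(f"Estimated facility energy: {energy:,.0f} MWh")
```

Under these assumptions a single month-long run draws several gigawatt-hours, and the PUE multiplier is a key lever for operators: the closer it is to 1.0, the less energy goes to cooling rather than computation.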
Researchers warn that AI expansion is significantly increasing electricity consumption, water usage, and carbon emissions associated with data-centre operations, prompting calls for efficiency-focused design and cleaner energy integration.
