‘Father of AI’ Geoffrey Hinton joins global plea to ban Superintelligent AI


The statement warns that the unchecked pursuit of superintelligent systems could pose existential risks to humanity if not developed under strong oversight.
Business Today Desk
Oct 23, 2025 · Updated Oct 23, 2025 9:39 PM IST

A coalition of scientists, technologists, and public figures has urged an immediate global ban on developing superintelligent AI — systems that could surpass human cognitive abilities — until there is broad scientific consensus that they can be built safely.

The appeal, released by the nonprofit Future of Life Institute, includes signatures from AI pioneer Geoffrey Hinton, Apple co-founder Steve Wozniak, Prince Harry, economist Daron Acemoglu, and former US National Security Adviser Susan Rice, among others. 


The statement warns that the unchecked pursuit of superintelligent systems could pose existential risks to humanity if not developed under strong oversight. “We need safety, control, and international cooperation before we move ahead,” the group said in its joint declaration. 

This marks the second major intervention by the Future of Life Institute. Its earlier 2023 open letter had called for a six-month pause in large-scale AI development — a plea that went largely unheeded as AI investment and deployment accelerated globally. 

The latest call comes amid an intensifying race among Big Tech firms to achieve Artificial General Intelligence (AGI) and beyond. Meta’s Mark Zuckerberg recently claimed that superintelligence is now “within reach,” while OpenAI CEO Sam Altman has suggested it could emerge by 2030. 

Notably, OpenAI employee Leo Gao has also signed the appeal — an unusual move from within a company that is itself advancing superintelligence research. 


With hundreds of billions of dollars pouring into AI infrastructure this year alone, the appeal underscores growing unease over whether the world is moving too fast toward technologies that may be impossible to control. 
