ChatGPT creator Sam Altman has warned that the artificial intelligence application could lead to some very bad outcomes. Altman, CEO of OpenAI, the company behind ChatGPT, said the technology comes with real dangers and that society must be cautious.
"We've got to be cautious here," Altman told ABC News on Thursday. "I think it doesn't work to do all this in a lab. You have got to get these products out into the world and make contact with reality. Make the mistakes when stakes are low. But all of that said, I think people should be happy that we are a little bit scared of this."
ChatGPT, an AI-powered language model, has created a sensation with its ability to generate human-like text responses to a given prompt. It can answer complicated questions, converse on a variety of topics, write essays, and compose poetry. While some are amazed by its ability to respond, in some cases better than humans, others are fearful of its misuse.
When asked about the worst possible outcome, Altman said there is a set of very bad outcomes. "One thing I'm particularly worried about is that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyber-attacks." He said, however, that the technology could also be "the greatest humanity has yet developed".
Altman's warning comes just days after OpenAI released the latest version of its language AI model, GPT-4. Soon after the latest version was launched, Brett Winton, Chief Futurist at ARK Invest, said GPT-4's performance on human benchmarks was rather remarkable. "GPT-3.5 scored 10th percentile on the bar exam, GPT-4 hits the 90th percentile. On BC calculus it got the equivalent of a 3, good for college credit at 75% of colleges," he said, sharing a graph comparing the performance of GPT-3 with GPT-4.
Replying to the tweet, Tesla CEO Elon Musk, one of the first investors in OpenAI, said: "What will be left for us humans to do? We better get a move on with Neuralink!" In February this year, Musk warned that AI was one of the biggest risks to the future of civilization. "It’s both positive or negative and has great, great promise, great capability," Musk said at the World Government Summit in Dubai, UAE. He also stressed that "with that comes great danger".
In December last year, Musk said there was no regulatory oversight of AI, "which is a *major* problem. I've been calling for AI safety regulation for over a decade!"
But Brad Smith, Vice Chairman and President of Microsoft, which uses the GPT-4 language model in its search engine Bing, on Friday highlighted key developments one can expect to see in the future of generative AI. Speaking at the India Today Conclave 2023, Smith predicted that generative AI models will continue to get better and more powerful in their ability to reason.
These models, he said, will progress from large language models to multimodal models, meaning they will be able to understand not just words but also images, sound, and video, and reason over them to produce content in a variety of forms. "First, we're going to see these models continue to get better. And being better means that they're going to be more powerful in terms of their ability to reason," Smith said.