Sam Altman, CEO of OpenAI, warns that rivals may be much less concerned than OpenAI about putting guardrails on their ChatGPT and GPT-4 equivalents. Jovelle Tamayo for The Washington Post via Getty Images
Sam Altman, CEO of OpenAI, believes artificial intelligence has incredible benefits for society, but he is also concerned about how bad actors will use the technology.
In an interview with ABC News this week, he warned “there will be other people who don’t respect some of the safety limits we have.”
OpenAI released its AI chatbot ChatGPT to the public in late November, and this week revealed a more capable successor called GPT-4.
Other companies are racing to offer ChatGPT-like tools, giving OpenAI plenty of competition to worry about, even with the advantage of having Microsoft as a major investor.
“It’s competitive out there,” OpenAI co-founder and chief scientist Ilya Sutskever told The Verge in an interview published this week. “GPT-4 is not easy to develop…there are a lot of companies that want to do the same thing, so from a competitive point of view you can see this as a maturation of the field.”
Sutskever cited competition (along with safety) to explain OpenAI's decision to reveal little about the inner workings of GPT-4, a choice that led many to question whether the name "OpenAI" still made sense. But his comments were also an acknowledgment of the many rivals trailing OpenAI.
Some of those rivals may be much less concerned than OpenAI with putting guardrails on their ChatGPT or GPT-4 equivalents, Altman suggested.
“What worries me is that we won’t be the only makers of this technology,” he said. “There will be other people who don’t follow the safety limits we put on them. Society, I think, has a limited amount of time to figure out how to respond to that, how to regulate that, how to deal with it.”
OpenAI this week shared a "system card" document that outlines how its testers deliberately tried to get GPT-4 to offer up dangerous information, such as how to make a dangerous chemical using basic ingredients and kitchen supplies, and how the company addressed the issues before the product's launch.
Lest anyone doubt that bad actors are eyeing AI, phone scammers are already using AI voice-cloning tools to sound like people's relatives in desperate need of financial help, and are successfully swindling money from victims.
“I am particularly concerned that these models could be used for large-scale disinformation,” Altman said. “As they get better at writing computer code, [they] can be used for offensive cyber-attacks.”
For someone who runs a company selling AI tools, Altman has been notably outspoken about the dangers of artificial intelligence. That may have something to do with OpenAI's history.
OpenAI was founded in 2015 as a nonprofit focused on the safe and transparent development of AI. It switched to a "capped-profit" hybrid model in 2019, with Microsoft becoming a major investor (as the model's name suggests, how much investors can profit from the venture is limited).
Elon Musk, the CEO of Tesla and Twitter who also co-founded OpenAI and made a significant donation to it, has criticized this shift. Last month he noted: "OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft."
In early December, Musk called ChatGPT "scary good" and warned, "We are not far from dangerously strong AI."
But Altman has done as much as anyone, if not more, to warn the public, even as he presses ahead with OpenAI's work. Last month, in a series of tweets, he worried about "how people of the future will see us."
“We also need enough time for our institutions to figure out what to do,” he wrote. “Regulation is going to be critical and it’s going to take time to figure out… having time to understand what’s happening, how people want to use these tools and how society can co-evolve is critical.”