- By Anthony Zurcher
- North America correspondent
Artificial intelligence has the awesome power to change the way we live our lives, in both good and dangerous ways. Experts have little confidence that those in power are prepared for what is to come.
In 2019, a non-profit research group called OpenAI created a software program that could generate paragraphs of coherent text and perform rudimentary reading comprehension and analysis without specific instruction.
OpenAI initially decided not to make its creation, dubbed GPT-2, fully available to the public for fear that malicious parties could use it to generate massive amounts of disinformation and propaganda. In a press release announcing the decision, the group called the program “too dangerous”.
Fast-forward three years and the possibilities of artificial intelligence have increased exponentially.
Unlike that earlier, limited release, the next offering, GPT-3, was made widely available in November. The ChatGPT interface built on that model was the service that launched a thousand news articles and social media posts, as reporters and pundits tested its capabilities — often with eye-popping results.
ChatGPT scripted stand-up routines in the style of the late comedian George Carlin about the failure of Silicon Valley Bank. It opined on Christian theology. It wrote poetry. It explained quantum physics to a child as if it were rapper Snoop Dogg. Other AI models, such as DALL-E, have generated images so convincing that they have sparked controversy over their inclusion on art websites.
Machines, at least to the naked eye, have achieved creativity.
On Tuesday, OpenAI debuted the latest iteration of its program, GPT-4, which it says has robust limits on abusive uses. Early clients include Microsoft, Morgan Stanley and the government of Iceland. And at this week’s South by Southwest Interactive conference in Austin, Texas — a global gathering of tech policymakers, investors and executives — the hottest topic of conversation was the potential, and power, of artificial intelligence programs.
Arati Prabhakar, director of the White House Office of Science and Technology Policy, says she is excited about the potential of AI, but she also offered a caveat.
“What we’re all seeing is the emergence of this extremely powerful technology. This is a turning point,” she told a conference panel audience. “All of history shows that these kinds of powerful new technologies can and will be used for good and for evil.”
Her co-panelist, Austin Carson, was more blunt.
“If in six months you are not completely freaked out, I’ll buy you dinner,” the founder of SeedAI, an artificial intelligence advisory group, told the audience.
“Freaked out” is one way to put it. Amy Webb, head of the Future Today Institute and professor of business administration at New York University, tried to quantify the possible outcomes in her SXSW presentation. She said artificial intelligence could go in two directions over the next 10 years.
In an optimistic scenario, AI development is focused on the public good, with transparency in the design of AI systems and the ability for individuals to choose whether their publicly available information on the internet is included in the AI knowledge base. The technology serves as a tool that makes life easier and more seamless, as AI features on consumer products can anticipate user needs and help accomplish virtually any task.
Ms. Webb’s catastrophic scenario involves less data privacy, more centralization of power among a handful of companies, and AI that anticipates user needs — and gets them wrong, or at least stifles choice.
She gives the optimistic scenario only a 20% chance.
Which way the technology goes, Ms. Webb tells the BBC, ultimately depends in large part on how responsibly companies develop it. Do they do so transparently, revealing and verifying the sources from which the chatbots – which scientists call large language models – draw their information?
The other factor, she said, is whether the government — federal regulators and Congress — can act quickly to create legal guardrails to guide technology advancements and prevent their misuse.
The government’s experience with social media companies – Facebook, Twitter, Google and the like – is illustrative in this respect. And the experience is not encouraging.
“What I heard in a lot of conversations was the concern that there are no guardrails,” Melanie Subin, general manager of the Future Today Institute, says of her time at South by Southwest. “There’s a sense that something needs to be done. And I think social media as a cautionary tale is what people have in mind when they see how fast generative AI is evolving.”
Federal oversight of social media companies is largely based on the Communications Decency Act, which Congress passed in 1996, and a succinct but powerful provision in Section 230 of that act. That language protected internet companies from liability for user-generated content on their websites. It is credited with creating a legal environment in which social media companies could thrive. More recently, however, it has been criticized for allowing those same companies to gain too much power and influence.
Politicians on the right complain that it has allowed the Googles and Facebooks of the world to censor or reduce the visibility of conservative opinions. Those on the left accuse the companies of not doing enough to prevent the spread of hate speech and violent threats.
“We have an opportunity and a responsibility to recognize that hateful rhetoric leads to hateful action,” said Michigan Secretary of State Jocelyn Benson. In December 2020, her home was the target of protests by armed supporters of Donald Trump, organized on Facebook, challenging the results of the 2020 presidential election.
She has supported deceptive-practices legislation in Michigan that would hold social media companies accountable for knowingly spreading harmful information. Similar proposals have been made at the federal level and in other states, along with legislation requiring social media sites to give underage users greater protections, be more open about their content moderation policies, and take more active steps to curb online harassment.
Opinions are mixed, however, on the likelihood of such reforms succeeding. Big tech companies have entire teams of lobbyists in Washington DC and state capitals, as well as deep coffers with which to influence politicians through campaign donations.
“Despite abundant evidence of problems with Facebook and other social media sites, it’s been 25 years,” said Kara Swisher, a technology journalist. “We’ve been waiting for legislation from Congress to protect consumers, and they’ve abdicated their responsibility.”
The danger, Ms. Swisher says, is that many of the companies that were big players in social media — Facebook, Google, Amazon, Apple and Microsoft — are now AI leaders. And if Congress has failed to successfully regulate social media, it will struggle to act swiftly to address concerns about what she calls the “arms race” of artificial intelligence.
The comparisons between artificial intelligence regulation and social media are not just academic, either. New AI technology could turn the already troubled waters of websites like Facebook, YouTube and Twitter into a boiling sea of disinformation, as it becomes increasingly difficult to separate posts by real people from those of fake, but perfectly believable, AI-generated accounts.
Even if the government succeeds in issuing new social media rules, they could be meaningless in the face of a deluge of pernicious AI-generated content.
Among the countless panels at South by Southwest was one titled “How Congress is Building AI Policy from the Ground Up.” After about 15 minutes of waiting, the audience was told that the panel had been cancelled because the panelists had gone to the wrong venue.
For those at the conference hoping for signs that government is up to the task, it was not an encouraging development.