OpenAI CEO urges regulation of technology


Sam Altman, CEO of the San Francisco startup OpenAI, which developed the chatbot ChatGPT, urged lawmakers at a Senate Judiciary subcommittee hearing on Tuesday to regulate artificial intelligence.
"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said.
"My worst fears are that we cause significant — we, the field, the technology industry — cause significant harm to the world," he said.
"I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," Altman added. "We want to work with the government to prevent that from happening."
The 38-year-old Stanford University dropout said the potential for AI to be used to manipulate voters and target disinformation is among "my areas of greatest concern", especially because "we're going to face an election next year and these models are getting better".
Altman said his company's technology may destroy some jobs but also create new ones and that it will be important for "government to figure out how we want to mitigate that".
Altman said the government could regulate the industry by creating an agency that issues licenses for the creation of large-scale AI models, sets safety regulations and administers tests that AI models must pass before being released to the public.
This "combination of licensing and testing requirements," he said, could be applied to the "development and release of AI models above a threshold of capabilities".
Lawmakers brought up the idea of an independent agency to oversee AI, rules that would force companies to disclose how their models work and the data sets they use, and antitrust rules to prevent companies like Microsoft and Google from monopolizing the nascent industry.
ChatGPT's rapid rise, with an estimated 100 million users within two months, has sparked an industry race, with Microsoft, an investor in OpenAI, bringing ChatGPT to its Windows operating system and Google adding its own so-called generative AI systems, including one called Bard, to its apps.
The latest forms of AI also have drawn criticism from some of tech's biggest names for their potential to disrupt millions of jobs, spread misinformation and perpetuate biases.
Altman said OpenAI pre-tests and constantly updates its tools to ensure safety and that making them widely available to the public actually helps the company identify and mitigate risks.
"It's important to understand that GPT-4 is a tool, not a creature," Altman said, referring to the most recent version of the system that powers ChatGPT. "And it's a tool that people have great control over."
It was Altman's first appearance before Congress. Media reports of the three-hour hearing called it a rare bipartisan event, with Democrats and Republicans getting along and even complimenting each other on the collegial atmosphere in the room.
Among the topics covered were how the technology might affect elections, intellectual property theft, news coverage, military operations and even diversity and inclusion initiatives.
Technology industry figures and government officials alike have expressed unease over AI's potential harms. Lawmakers described Tuesday's hearing as a first step in understanding the new AI systems. But even as members of both parties see a need for federal regulation, there is no consensus yet on how Congress should respond.
"Will we strike that balance between technological innovation and our ethical and moral responsibility?" asked Missouri Senator Josh Hawley, the top Republican on the Senate Judiciary Committee.
Also testifying Tuesday were Christina Montgomery, IBM's vice president and chief privacy and trust officer, and Gary Marcus, a former New York University professor and a self-described critic of AI "hype".
"The era of AI cannot be another era of 'move fast and break things,'" Montgomery told the lawmakers. Still, she said, "We don't have to slam the brakes on innovation either."
Connecticut Democratic Senator Richard Blumenthal, chairman of the Senate panel, said the hearing was the first in a series to learn more about the potential benefits and harms of AI to eventually "write the rules" for it. He also acknowledged Congress' failure to keep up with the introduction of new technologies in the past.
"Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past," Blumenthal said. "Congress failed to meet the moment on social media."
Agencies contributed to this story.