The tone of congressional hearings involving tech industry executives in recent years can best be described as antagonistic. Mark Zuckerberg, Jeff Bezos and other tech luminaries have been criticized on Capitol Hill by lawmakers angry with their companies.
But on Tuesday, Sam Altman, chief executive of OpenAI, a San Francisco start-up, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology being created inside his company and others such as Google and Microsoft.
In his first congressional testimony, Mr. Altman implored lawmakers to regulate artificial intelligence even as committee members displayed only a fledgling understanding of the technology. The hearing underscored the deep concern among both technologists and the government about the potential harms of AI. But that concern did not extend to Mr. Altman, who found a friendly audience in the subcommittee members.
The three-hour hearing amounted to a christening for Mr. Altman, a 38-year-old tech entrepreneur and Stanford University dropout, as the leading figure in artificial intelligence.
Mr. Altman also discussed his company’s technology at a dinner with dozens of House members Monday night and met privately with several senators ahead of the hearing. He offered a flexible framework for managing what happens next with rapidly developing systems that some believe could fundamentally change the economy.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.”
Mr. Altman made his public debut on Capitol Hill as interest in AI has skyrocketed. Tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid growing concerns about AI’s role in spreading misinformation, destroying jobs and someday matching human intelligence.
That has put technology in the spotlight in Washington. President Biden said this month in a meeting with a group of CEOs of artificial intelligence companies that “what they are doing has enormous potential and enormous danger.” Top congressional leaders have also promised AI regulations.
It quickly became clear that members of the Senate subcommittee on privacy, technology and the law were not planning a harsh cross-examination of Mr. Altman, as they thanked him for his private meetings with them and for agreeing to appear at the hearing. Cory Booker, a New Jersey Democrat, repeatedly referred to Mr. Altman by his first name.
Mr. Altman was joined at the hearing by Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a well-known professor and frequent critic of AI technology.
Mr. Altman said his company’s technology may destroy some jobs but also create new ones, and that it will be important for “the government to figure out how we want to mitigate that.” He proposed creating an agency that would issue licenses for the creation of large-scale AI models, set safety standards and require tests that AI models must pass before being released to the public.
“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Mr. Altman said.
But it was unclear how lawmakers would respond to the call to regulate AI. Congress’s record on technology regulation is dismal. Dozens of privacy, speech and security bills have failed in the past decade due to partisan bickering and fierce opposition from tech giants.
The United States has lagged behind the world when it comes to regulations on privacy, speech, and the protection of children. It is also behind on AI regulations. Lawmakers in the European Union are set to introduce rules for the technology later this year. And China has created AI laws that comply with its censorship laws.
Sen. Richard Blumenthal, a Connecticut Democrat and chairman of the Senate panel, said the hearing was the first in a series to learn more about the potential benefits and harms of AI to eventually “write the rules.”
He also acknowledged Congress’s failure to keep up with the introduction of new technologies in the past. “Our goal is to demystify and hold these new technologies accountable to avoid some of the mistakes of the past,” Mr. Blumenthal said. “Congress failed to meet the moment on social media.”
Subcommittee members suggested an independent agency to oversee AI; rules that force companies to disclose how their models work and the data sets they use; and antitrust rules to prevent companies like Microsoft and Google from monopolizing the nascent market.
“The devil will be in the details,” said Sarah Myers West, managing director of the AI Now Institute, a policy think tank. She said Mr. Altman’s suggestions for regulations did not go far enough and should include limits on how AI is used in surveillance and in the use of biometric data. She noted that Mr. Altman showed no indication of slowing down the development of OpenAI’s ChatGPT tool.
“It is a great irony to see a posture of concern about harms from people who are rapidly releasing into commercial use the very system responsible for those harms,” Ms. West said.
Some lawmakers at the hearing still pointed to the lingering gap in technological savvy between Washington and Silicon Valley. Lindsey Graham, a South Carolina Republican, repeatedly asked the witnesses whether a speech liability shield for online platforms like Facebook and Google also applies to AI.
Calm and collected, Mr. Altman tried several times to draw a distinction between AI and social media. “We need to work together to find a whole new approach,” he said.
Some subcommittee members were also reluctant to clamp down on an industry that holds great economic promise for the United States and competes directly with adversaries such as China.
The Chinese are creating AI that “reinforce the core values of the Chinese Communist Party and the Chinese system,” said Chris Coons, a Delaware Democrat. “And I am concerned with how we promote AI that reinforces and strengthens open markets, open societies and democracy.”
Some of the most difficult questions and comments for Mr. Altman came from Dr. Marcus, who pointed out that OpenAI has not been transparent about the data it uses to develop its systems. He expressed doubts about Mr. Altman’s prediction that new jobs will replace those eliminated by AI.
“We have unprecedented opportunities here, but we also face a perfect storm of corporate irresponsibility, widespread deployment, lack of proper regulation, and inherent unreliability,” said Dr. Marcus.
Tech companies have argued that Congress should be careful about broad rules that lump different types of AI together. At Tuesday’s hearing, IBM’s Ms. Montgomery called for an AI law similar to Europe’s proposed regulations, which outline various levels of risk. She called for rules that focus on specific uses rather than regulating the technology itself.
“At its core, AI is just a tool, and tools can serve different purposes,” she said, adding that Congress should take a “precision regulatory approach to AI.”