Seven leading AI companies in the United States have agreed to voluntary safeguards on the development of the technology, the White House announced Friday, pledging to manage the risks of the new tools even as they race to exploit the potential of artificial intelligence.
The seven companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—will formally announce their commitment to new standards in the areas of security and trust in a meeting with President Biden at the White House on Friday afternoon.
The announcement comes as companies race to outdo one another with versions of AI that offer powerful new ways to create text, photos, music and video without human intervention. But those technological leaps have raised fears about the spread of misinformation and prompted dire warnings of an “extinction risk” posed by self-aware computers.
The voluntary safeguards are just an initial, tentative step as Washington and governments around the world rush to establish legal and regulatory frameworks for the development of artificial intelligence. They reflect an urgency on the part of the Biden administration and lawmakers to respond to the rapidly evolving technology, even as they have struggled to regulate social media and other technologies.
The White House did not provide details about an upcoming presidential executive order that will address a larger problem: how to control the ability of China and other competitors to get hold of new artificial intelligence programs or the components used to develop them.
That means new restrictions on advanced semiconductors and on the export of large language models. The models are especially hard to control: much of the software can fit, compressed, on a USB stick.
An executive order could provoke more industry opposition than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The promises will not slow the companies’ plans or hinder the development of their technologies. And because the commitments are voluntary, government regulators will not enforce them.
“We are pleased to make these voluntary commitments along with others in the industry,” said Nick Clegg, president of global affairs at Meta, Facebook’s parent company, in a statement. “They are an important first step in ensuring that responsible guardrails are set for AI and create a model for other governments to follow.”
As part of the safeguards, the companies agreed to:
- Conduct security testing of their AI products, in part by independent experts, and share information about those products with governments and others trying to manage the technology’s risks.
- Ensure that consumers can detect AI-generated material by implementing watermarks or other means of identifying generated content.
- Publicly report the capabilities and limitations of their systems on a regular basis, including security risks and evidence of bias.
- Deploy advanced artificial intelligence tools to address society’s biggest challenges, like curing cancer and combating climate change.
- Conduct research on the risks of bias, discrimination and invasion of privacy from the spread of AI tools.
In a statement announcing the deals, the Biden administration said companies must ensure that “innovation does not come at the expense of the rights and security of Americans.”
“The companies that are developing these emerging technologies have a responsibility to ensure that their products are secure,” the administration said in a statement.
Brad Smith, Microsoft chairman and one of the executives who attended the White House meeting, said his company supports voluntary safeguards.
“By acting quickly, the White House commitments create a foundation to help ensure that the promise of AI stays ahead of its risks,” said Mr. Smith.
Anna Makanju, OpenAI’s vice president of global affairs, described the announcement as “part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance.”
For the companies, the standards outlined Friday serve two purposes: as an effort to forestall, or shape, legislative and regulatory moves through self-policing, and as a signal that they are approaching the new technology thoughtfully and proactively.
But the rules they agreed to are very much the lowest common denominator, and each company may interpret them differently. For example, the companies committed to strict cybersecurity around the data and code used to build the “language models” on which generative AI programs rely. But there is no specificity about what that means, and the companies would have an interest in protecting their intellectual property anyway.
And even the most careful companies are vulnerable. Microsoft, one of the companies attending the White House event with Biden, scrambled last week to counter a Chinese government-organized hack of the private emails of American officials who deal with China. It now appears that China stole, or otherwise obtained, a “private key” held by Microsoft that is used to authenticate emails, one of the company’s most closely guarded pieces of code.
As a result, the agreement is unlikely to slow down efforts to pass laws and impose regulations on emerging technology.
Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights, said more needs to be done to guard against the dangers that artificial intelligence poses to society.
“The voluntary commitments announced today cannot be enforced, so it is vital that Congress, together with the White House, immediately craft legislation that requires transparency and privacy protections and that intensifies research into the broad range of risks posed by generative AI,” Mr. Barrett said in a statement.
European regulators are set to adopt AI laws later this year, which has prompted many of the companies to encourage US regulation. Several lawmakers have introduced bills that would require artificial intelligence companies to obtain licenses to release their technologies, create a federal agency to oversee the industry, and impose data privacy requirements. But members of Congress are far from agreement on the rules and are rushing to educate themselves on the technology.
Lawmakers have been grappling with how to address the rise of artificial intelligence technology, some focused on the risks to consumers and others keenly concerned about falling behind adversaries, particularly China, in the race for dominance in the field.
This week, the House select committee on strategic competition with China sent bipartisan letters to US-based venture capital firms, demanding a reckoning over investments they had made in Chinese AI and semiconductor companies. Those letters follow months in which a variety of House and Senate panels have questioned the AI industry’s most influential executives and critics to determine what kinds of legislative protections and incentives Congress should explore.
Many of those witnesses, including Sam Altman of San Francisco start-up OpenAI, have pleaded with lawmakers to regulate the AI industry, pointing to the potential for the new technology to cause undue harm. But that regulation has been slow to get off the ground in Congress, where many lawmakers are still struggling to understand what exactly AI technology is.
In an effort to improve understanding among lawmakers, Senator Chuck Schumer, Democrat of New York and the majority leader, launched a series of listening sessions this summer for lawmakers to hear from government officials and experts about the merits and dangers of artificial intelligence in various fields.
Mr. Schumer also prepared an amendment to the Senate version of this year’s defense authorization bill to incentivize Pentagon employees to report potential problems with AI tools through a “bug bounty” program, commission a Pentagon report on improving AI data sharing, and improve reporting on AI in the financial services industry.
Karoun Demirjian contributed reporting from Washington.