Regulation of artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and press conferences and the White House announcing voluntary AI safety commitments by seven technology companies on Friday.
But a closer look at the activity raises questions about how meaningful the actions are to setting policy around the rapidly evolving technology.
The answer: not much, so far. The United States is only at the beginning of what is likely to be a long and difficult road toward creating AI rules, lawmakers and policy experts say. While there have been hearings, meetings with top tech executives at the White House, and speeches introducing AI bills, it is too soon to predict even the roughest outlines of regulations to protect consumers and contain the risks the technology poses to jobs, the spread of misinformation, and security.
“It’s still early days, and no one knows what a law will look like yet,” said Chris Lewis, president of consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other technology companies.
The United States remains far behind Europe, where lawmakers are preparing to enact an artificial intelligence law this year that would place new restrictions on what are considered the riskiest uses of the technology. In the United States, by contrast, there continues to be much disagreement over the best way to handle a technology that many US lawmakers are still trying to understand.
That suits many tech companies, policy experts said. While some of the companies have said they welcome rules on AI, they have also opposed strict regulations like those being created in Europe.
Here is a summary of the state of AI regulations in the United States.
In the White House
The Biden administration has been on a quick listening tour with artificial intelligence companies, academics, and civil society groups. The effort began in May when Vice President Kamala Harris met with the CEOs of Microsoft, Google, OpenAI, and Anthropic at the White House and urged the tech industry to take safety more seriously.
On Friday, representatives from seven technology companies appeared at the White House to announce a set of principles for making their AI technologies safer, including third-party security checks and the watermarking of AI-generated content to help stop the spread of misinformation.
Many of the practices announced were already in place at OpenAI, Google, and Microsoft, or were on their way to taking effect. They do not represent new regulations, and the promises of self-regulation fell short of what consumer groups had hoped for.
“Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators should establish meaningful and enforceable safeguards to ensure that the use of AI is fair, transparent, and protects the privacy and civil rights of individuals.”
Last fall, the White House presented a Blueprint for an AI Bill of Rights, a set of guidelines on consumer protections related to the technology. The guidelines, too, are not regulations and are not enforceable. This week, White House officials said they were working on an AI executive order but did not disclose details or timing.
In Congress

The loudest drumbeat on AI regulation has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include creating an agency to oversee AI, accountability for AI technologies that spread disinformation, and licensing requirements for new AI tools.
Lawmakers have also held hearings on AI, including one in May with Sam Altman, CEO of OpenAI, which makes the ChatGPT chatbot. At the hearings, some lawmakers floated ideas for other regulations, including nutrition-style labels to notify consumers of AI risks.
The bills are in their early stages and so far lack the support needed to advance. Last month, Senate Majority Leader Chuck Schumer, D-N.Y., announced a monthslong process for creating AI legislation that includes educational sessions for members in the fall.
“In many ways, we are starting from scratch, but I think Congress is up to the challenge,” he said during a speech at the Center for Strategic and International Studies.
In federal agencies
Regulatory agencies have begun to act, policing some of the problems arising from AI.
Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT, requesting information about how the company protects its systems and how the chatbot could harm consumers by creating false information. FTC Chair Lina Khan has said she believes the agency has broad power under consumer protection and competition law to police problematic behavior by AI companies.
“Waiting for Congress to act is not ideal given Congress’s usual timetable for action,” said Andres Sawicki, a law professor at the University of Miami.