On Thursday, the White House will host its first meeting with the CEOs of companies building artificial intelligence since the rise of AI-powered chatbots sparked growing calls to regulate the technology.
Vice President Kamala Harris and other administration officials are scheduled to meet with leaders from Google, Microsoft, OpenAI, the maker of the popular chatbot ChatGPT, and Anthropic, an artificial intelligence startup, to discuss the technology.
The White House planned to press the companies on their responsibility to address the risks of new AI developments. “Our goal is to have a candid discussion about the risks we each see in current and near-term AI development, actions to mitigate those risks, and other ways we can work together to ensure the American people benefit from advances in AI while being protected from harm,” Arati Prabhakar, director of the White House Office of Science and Technology Policy, said in a meeting invitation obtained by The New York Times.
Hours before the meeting, the White House announced that the National Science Foundation plans to spend $140 million on new AI research centers. The administration also pledged to release preliminary guidelines for government agencies to ensure that their use of AI safeguards “the rights and safety of the American people,” and it said several AI companies had agreed to make their products available for scrutiny at a cybersecurity conference in August.
The White House has been under increasing pressure to control AI that is capable of creating sophisticated prose and realistic imagery. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public, and people immediately began using it to find information, do schoolwork, and help them with their jobs. Since then, some of the biggest tech companies have rushed to embed chatbots into their products and accelerated AI research, while venture capitalists have poured money into AI startups.
But the rise of AI has also raised questions about how the technology will transform economies, shake up geopolitics and bolster criminal activity. Critics worry that many AI systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people at their jobs, spread disinformation, and perhaps even break the law themselves.
President Biden said recently that “it remains to be seen” whether AI is dangerous, and some of his top appointees have vowed to intervene if the technology is used in harmful ways.
Spokespersons for Google, Microsoft and OpenAI declined to comment ahead of the White House meeting. An Anthropic spokesperson confirmed that the company would attend.
The announcements build on previous efforts by the administration to put guardrails on AI. Last year, the White House released what it called a blueprint for an AI bill of rights, which said automated systems should protect the privacy of user data, shield users from discriminatory outcomes, and explain why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in AI development, which had been in the works for years.
The introduction of chatbots like ChatGPT and Google’s Bard has put enormous pressure on governments to act. The European Union, which had already been negotiating regulations for AI, has faced new demands to regulate a broader swath of AI, rather than just systems that are considered inherently high-risk.
In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to write or propose legislation to regulate AI.
A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible innovation” while punishing violations of the law committed with the technology.
In a guest essay in The Times on Wednesday, Lina Khan, chairwoman of the Federal Trade Commission, said the nation was at a “key decision point” on AI. She compared recent developments in the technology to the birth of tech giants like Google and Facebook, and she warned that without proper regulation, the technology could entrench the power of the biggest tech companies and give scammers a powerful tool.
“As the use of AI becomes more widespread, public officials have a responsibility to ensure that this hard-learned history is not repeated,” she said.