The biggest companies in the tech industry have spent the year warning that the development of artificial intelligence technology is exceeding their wildest expectations and that they need to limit who has access to it.

Mark Zuckerberg is doubling down in a different direction: He’s giving it away.

Zuckerberg, Meta’s chief executive, said on Tuesday that he planned to provide the code behind the company’s latest and greatest AI technology to developers and software enthusiasts around the world for free.

The move, similar to one Meta made in February, could help the company catch up with competitors like Google and Microsoft, which have moved more quickly to incorporate generative artificial intelligence, the technology behind OpenAI’s popular ChatGPT chatbot, into their products.

“When the software is open, more people can examine it to identify and fix potential problems,” Zuckerberg said in a post on his personal Facebook page.

The latest version of Meta’s AI was built with 40 percent more data than the one the company released just a few months ago and is believed to be considerably more powerful. And Meta provides a detailed roadmap showing how developers can work with the vast amount of data it has collected.

Researchers are concerned that generative AI could sharply increase the amount of misinformation and spam on the internet, and pose dangers that even some of its creators don’t fully understand.

Meta adheres to the long-held belief that allowing all kinds of programmers to play with technology is the best way to improve it. Until recently, most AI researchers were on board with that. But over the past year, companies like Google, Microsoft and OpenAI, a San Francisco startup, have placed limits on who has access to their latest technology and on what can be done with it.

The companies say they are limiting access for security reasons, but critics say they are also trying to stifle competition. Meta argues that it’s best for everyone to share what they’re working on.

“Historically, Meta has been a big proponent of open platforms, and it has really worked well for us as a company,” Ahmad Al-Dahle, Meta’s vice president of generative AI, said in an interview.

The move will make the software “open source,” meaning the computer code can be freely copied, modified, and reused. The technology, called LLaMA 2, provides everything anyone would need to create online chatbots like ChatGPT. LLaMA 2 will be released under a commercial license, which means developers can build their own businesses using the underlying Meta AI to power them, all for free.

By open-sourcing LLaMA 2, Meta can capitalize on improvements made by programmers outside the company while, its executives hope, spurring broader experimentation with AI.

Meta’s open source approach is not new. Companies often use open source technologies in an effort to catch up with their rivals. Fifteen years ago, Google opened up its Android mobile operating system to better compete with Apple’s iPhone. While the iPhone had an early lead, Android eventually became the dominant software used on smartphones.

But researchers argue that someone could deploy Meta’s AI without the safeguards that tech giants like Google and Microsoft often use to suppress toxic content. Newly created open source models could be used, for example, to flood the internet with even more spam, financial scams, and misinformation.

LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or LLM. Chatbots like ChatGPT and Google Bard are built with large language models.

Models are systems that learn skills by analyzing huge volumes of digital text, including Wikipedia articles, books, online forum conversations, and chat logs. By identifying patterns in text, these systems learn to generate their own text, including term papers, poetry, and computer code. They can even hold a conversation.
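To make that concrete, here is a minimal sketch, not Meta’s code, of how a large language model produces text one token at a time. It uses a small, openly available model (“gpt2”) from the Hugging Face transformers library purely for illustration; the predict-the-next-token loop shown here is a generic pattern, not a description of LLaMA 2’s internals.

```python
# Minimal illustration (not Meta's code): a language model generates text by
# repeatedly predicting the most likely next token given everything so far.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small public model, for demo only
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Large language models learn by"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                              # generate 20 tokens greedily
        logits = model(input_ids).logits             # scores for every vocabulary token
        next_id = logits[0, -1].argmax()             # pick the highest-scoring next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```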

Meta is partnering with Microsoft to release LLaMA 2, which will run on Microsoft’s Azure cloud services. LLaMA 2 will also be available through other providers, including Amazon Web Services and the startup Hugging Face.
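For developers, getting started could be as simple as loading the model through the Hugging Face transformers library. The snippet below is a hedged sketch, assuming Meta has granted access to the gated “meta-llama/Llama-2-7b-chat-hf” repository and that a GPU with enough memory is available; it shows one plausible path, not an official procedure.

```python
# Hypothetical sketch: loading a LLaMA 2 chat model via Hugging Face transformers,
# assuming access to the gated repository has been approved by Meta.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"   # assumed gated repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit the 7B model in less memory
    device_map="auto",           # requires the accelerate package; spreads layers across devices
)

prompt = "Explain in one sentence why open source software matters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```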

Dozens of Silicon Valley technologists signed a statement of support for the initiative, including venture capitalist Reid Hoffman and executives from Nvidia, Palo Alto Networks, Zoom and Dropbox.

Meta isn’t the only company pushing open source AI projects. The Technology Innovation Institute produced the Falcon large language model and released its code for free this year. MosaicML also offers open source software for training LLMs.

Meta executives argue that their strategy is not as risky as many believe. They say that people can already generate vast amounts of misinformation and hate speech without using AI, and that Meta’s social networks, like Facebook, can restrict such toxic material. They argue that releasing the technology could eventually strengthen the ability of Meta and other companies to fight abuses of the software.

Meta conducted additional “red team” testing of LLaMA 2 before releasing it, Al-Dahle said. That’s a term for probing software for potential misuse and figuring out ways to protect against such abuse. The company will also release a responsible-use guide containing best practices and guidelines for developers who want to build programs with the code.

But those tests and guidelines apply to only one of the models Meta is releasing, which will be trained and tuned to contain guardrails and inhibit misuse. Developers will also be able to use the code to create chatbots and programs without those guardrails, a move that skeptics see as a risk.

In February, Meta released the first version of LLaMA to academics, government researchers, and others. The company also allowed academics to download LLaMA after it had been trained on vast amounts of digital text. Scientists call this process “releasing the weights.”

It was a remarkable move because analyzing all that digital data requires enormous computing and financial resources. With the weights, anyone can build a chatbot far more cheaply and easily than by starting from scratch.

Many in the tech industry believed that Meta was setting a dangerous precedent, and after the company shared its AI technology with a small group of academics in February, one of them leaked it onto the public internet.

In a recent opinion piece in the Financial Times, Nick Clegg, Meta’s president of global affairs, argued that “it was not sustainable to keep critical technology in the hands of a few large corporations,” and that releasing open source software has historically served companies well strategically.

“I’m really looking forward to seeing what you all build!” Zuckerberg said in his post.
