Senate letter to Meta on LLaMA leak is a threat to open-source AI, say experts

A letter sent by two U.S. senators to Meta CEO Mark Zuckerberg on Tuesday, questioning the March leak of Meta’s popular open-source large language model LLaMA, poses a threat to the open-source AI community, say experts. It comes at a key moment: Congress has made regulating artificial intelligence a priority, just as open-source AI is seeing a wave of new LLMs.

For example, three weeks ago OpenAI CEO Sam Altman testified before the Senate Subcommittee on Privacy, Technology & the Law (chaired by Senator Richard Blumenthal, D-CT, with Senator Josh Hawley, R-MO, as ranking member) and agreed with calls for a new AI regulatory agency.

The letter to Meta (which declined to comment at this time) was sent by Blumenthal and Hawley on behalf of the same subcommittee. The senators said they are concerned about LLaMA’s “potential for its misuse in spam, fraud, malware, privacy violations, harassment, and other wrongdoing and harms.”

The letter pointed to LLaMA’s release in February, saying that Meta released LLaMA for download by approved researchers, “rather than centralizing and restricting access to the underlying data, software, and model.” It added that Meta’s “choice to distribute LLaMA in such an unrestrained and permissive manner raises important and complicated questions about when and how it is appropriate to openly release sophisticated AI models.”

Concerns about attempts to throw open-source AI ‘under the bus’

Several experts said they were “not interested” in conspiracy theories, but had concerns about machinations behind the scenes.

“Look, it’s easy for both government officials and proprietary competitors to throw open source under the bus, because policymakers look at it nervously as something that’s harder to control — and proprietary software providers look at it as a form of competition that they would rather just see go away in some cases,” Adam Thierer, innovation policy analyst at R Street Institute, told VentureBeat in an interview. “So that makes it an easy target.”

William Falcon, CEO of Lightning AI and creator of the open-source PyTorch Lightning, was even clearer, saying that the letter was “super surprising,” and while he didn’t want to “feed conspiracy theories,” it “almost feels like OpenAI and Congress are working together now.”

And Steven Weber, a professor at the School of Information and the department of political science at the University of California, Berkeley, went even further, telling VentureBeat that he thinks Microsoft, operating through OpenAI, is “running scared, in the same way that Microsoft ran scared of Linux in the late 1990s and referred to open-source software as a ‘cancer’ on the intellectual property system.” Steve Ballmer, he recalled, “called on his people … to convince people that open source was evil, when in fact what it was was a competitive threat to Windows.”

Releasing LLaMA was ‘not an unacceptable risk’

Christopher Manning, director of the Stanford AI Lab, told VentureBeat in a message that while there is not currently legislation or “strong community norms about acceptable practice” when it comes to AI, he “strongly encouraged” the government and AI community to work to develop regulations and norms applicable to all companies, communities and individuals developing or using large AI models.

Nevertheless, he said, “In this instance, I am happy to support the open-source release of models like the LLaMA models.” While he does “fully acknowledge” that models like LLaMA can be used for bad purposes, such as disinformation or spam, he said they are smaller and less capable than the largest models built by OpenAI, Anthropic and Google (roughly 175 billion to 512 billion parameters).

Conversely, he said that while LLaMA’s models are larger and of better quality than models released by open-source collectives, they are not dramatically bigger (the largest LLaMA model is 65 billion parameters; the GPT-NeoX model released by the distributed collective of EleutherAI contributors is 20 billion parameters).

“As such, I do not consider their release an unacceptable risk,” he said. “We should be cautious about keeping good technology from innovative companies and students trying to learn about and build the future. Often it is better to regulate uses of technology rather than the availability of the technology.”

A ‘misguided’ attempt to limit access

Vipul Ved Prakash, co-founder and CEO of Together, which runs the RedPajama open-source project that replicated the LLaMA dataset to build open-source, state-of-the-art LLMs, said that the Senate’s letter to Meta is a “misguided attempt at limiting access to a new technology.”

The letter, he pointed out, is “full of typical straw-man concerns.”

For instance, he said, “it makes no sense to use a language model to generate spam. I helped create what is possibly the most widely deployed anti-spam system on the Internet today, and I can say with confidence that spammers won’t be using LLaMA or other LLMs because there are significantly cheaper ways of creating spam messages.”

Many of these concerns, he went on, are “applicable to programming languages that allow you to develop novel programs, and some of these programs are written with malicious intent. But we don’t limit sophisticated programming languages as a society, because we value the capability and functionality they bring into our lives.”

In general, he said the discourse around AI safety is a “panicked response with little to zero supporting evidence of societal harms.” Prakash said he worries about it leading to the “squelching of innovation in America and handing over the keys to the most important technology of our generation to a few companies, who have proactively shaped the debate.”

One question is why Meta’s models are being singled out (beyond the fact that Meta has had its share of run-ins with Congress). After all, both Manning and Falcon pointed out that the UAE government-backed Technology Innovation Institute has made an even higher-quality 40-billion-parameter model, also named Falcon, openly available.

“So it wouldn’t have made much difference to the rate of progress or LLM dissemination whether or not LLaMA was released,” said Manning, while Falcon questioned what the U.S. government could do about its release: “What are they going to do? Tell the UAE they can’t make the model public?”

Thierer claimed that this is where the “politics of intimidation” come in. The Blumenthal/Hawley letter, he explained, is “a threat made to open source through what I’ll call a ‘nasty gram’ — a nasty letter saying ‘you should reconsider your position on this.’ They’re not saying we’re going to regulate you, but there’s certainly an ‘or else’ statement hanging in the room that looms above a letter like that.”

That, he added, is what’s most troubling. “At some point, lawmakers will start to put more and more pressure on other providers or platforms who may do business with or provide a platform for open-source applications or models,” he said. “And that’s how you get to regulating open source without formally regulating open source.”


