Meta says it’s making its Llama models available for US national security applications

To combat the perception that its “open” AI is aiding foreign adversaries, Meta today said that it’s making its Llama series of AI models available to U.S. government agencies and to contractors working in national security.

“We are pleased to confirm that we’re making Llama available to U.S. government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work,” Meta wrote in a blog post. “We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies.”

Oracle, for example, is using Llama to process aircraft maintenance documents, Meta says. Scale AI is fine-tuning Llama to support specific national security team missions. And Lockheed Martin is offering Llama to its defense customers for use cases like generating computer code.

Meta’s policy normally forbids developers from using Llama for projects related to the military, warfare, or espionage. But the company is making an exception in this case, it told Bloomberg, along with similar exceptions for government agencies (and contractors) in the U.K., Canada, Australia, and New Zealand.

Last week, Reuters reported that Chinese research scientists linked to the People’s Liberation Army (PLA), the military wing of China’s ruling party, used an older Llama model, Llama 2, to develop a tool for defense applications. Chinese researchers, including two affiliated with a PLA R&D group, created a military-focused chatbot designed to gather and process intelligence, as well as offer information for operational decision-making.

Meta told Reuters in a statement that the use of the “single, and outdated” Llama model was “unauthorized” and contrary to its acceptable use policy. Still, the report added fuel to the ongoing debate over the merits and risks of open AI.

The use of AI, open or “closed,” for defense is controversial.

According to a recent study from the nonprofit AI Now Institute, the AI deployed today for military intelligence, surveillance, and reconnaissance poses dangers because it relies on personal data that can be exfiltrated and weaponized by adversaries. It also has vulnerabilities, such as biases and a tendency to hallucinate, for which there is currently no remedy, the co-authors write, recommending AI that is developed separately and kept isolated from “commercial” models.

Employees at several Big Tech companies, including Google and Microsoft, have protested their employers’ contracts to build AI tools and infrastructure for the U.S. military.

Meta asserts that open AI can accelerate defense research while promoting America’s “economic and security interests.” But the U.S. military has been slow to adopt the technology — and skeptical of its ROI. So far, the U.S. Army is the only branch of the U.S. armed forces with a generative AI deployment.
