Pentagon says AI will accelerate its ‘kill chain’

Leading AI developers like OpenAI and Anthropic are threading a delicate needle to sell software to the US military: make the Pentagon more efficient, without letting their AI kill people.

Today, their tools are not being used as weapons, but AI gives the Department of Defense a “significant advantage” in identifying, tracking, and evaluating threats, Dr. Radha Plumb, the Pentagon’s Chief Digital and AI Officer, told TechCrunch in a phone interview.

“Obviously we’re increasing the ways in which we can accelerate the execution of the kill chain so that our commanders can respond in the right time to protect our forces,” Plumb said.

The “kill chain” refers to the military’s process of identifying, tracking and eliminating threats, which involves a complex system of sensors, platforms and weapons. Generative AI is proving helpful during the planning and strategy phases of the kill chain, according to Plumb.

The relationship between the Pentagon and AI developers is relatively new. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to allow US intelligence and defense agencies to use their AI systems. However, they still do not allow their AI to harm humans.

“We’ve been really clear about what we will and won’t use their technologies for,” Plumb said, when asked how the Pentagon works with AI model providers.

However, this has kicked off a round of speed dating between AI companies and defense contractors.

Meta partnered with Lockheed Martin and Booz Allen, among others, to bring its Llama AI models to defense agencies in November. That same month, Anthropic teamed up with Palantir. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.

As generative AI proves its usefulness in the Pentagon, it could push Silicon Valley to loosen its policies on the use of AI and allow more military applications.

“Playing through different scenarios is something that generative AI can be helpful with,” Plumb said. “It allows you to take advantage of the full range of tools our commanders have available, but also to think creatively about different response options and potential trade-offs in an environment where there’s a potential threat, or a series of threats, that need to be prosecuted.”

It is not clear whose technology the Pentagon is using for this work; using generative AI in the kill chain (even in the early planning phase) does seem to violate the usage policies of several leading model developers. Anthropic’s policy, for example, prohibits using its models to produce or modify “systems designed to cause harm to or loss of human life.”

In response to our questions, Anthropic pointed TechCrunch to a recent interview its CEO Dario Amodei gave to the Financial Times, in which he defended his company’s military work:

The position that we should never use AI in defense and intelligence settings makes no sense to me. The position that we should go gangbusters and use it to do anything we want – up to doomsday weapons – is obviously crazy. We try to find a middle ground, to do things responsibly.

OpenAI, Meta and Cohere did not respond to TechCrunch’s request for comment.

Life and death, and AI weapons

In recent months, a defense-tech debate has erupted around whether AI weapons should really be allowed to make life-and-death decisions. Some argue the US military already has weapons that do.

Anduril CEO Palmer Luckey recently noted on X that the US military has a long history of purchasing and using autonomous weapons systems, such as the CIWS turret.

“The DoD has been acquiring and using autonomous weapons systems for decades. Their use (and export!) is well understood, strictly defined, and explicitly regulated by rules that are not voluntary,” Luckey said.

But when TechCrunch asked if the Pentagon would buy and operate weapons that are fully autonomous — ones that don’t have humans in the loop — Plumb rejected the idea on principle.

“No, that’s the short answer,” Plumb said. “In terms of reliability and ethics, we always have human beings involved in the decision to employ force, and that includes for our weapons systems.”

The word “autonomy” is a bit ambiguous and it has sparked debates throughout the tech industry about when automated systems—like AI coding agents, self-driving cars, or self-firing weapons—become truly independent.

Plumb said the idea of automated systems independently making life-and-death decisions was “too binary,” and that the reality was less “science fiction-y.” Rather, she suggested the Pentagon’s use of AI systems is really a collaboration between humans and machines, where senior leaders make active decisions throughout the process.

“People tend to think of it as there are robots somewhere, and then the gonculator (a fictional autonomous machine) spits out a sheet of paper, and the human just checks a box,” Plumb said. “That’s not how human-machine teams work, and it’s not an efficient way to use these kinds of AI systems.”

AI safety in the Pentagon

Military partnerships haven’t always gone over well with Silicon Valley employees. Last year, dozens of Amazon and Google employees were fired and arrested after protesting their companies’ military contracts with Israel, cloud deals that fall under the code name “Project Nimbus.”

Comparatively, there has been a fairly muted response from the AI community. Some AI researchers, such as Anthropic’s Evan Hubinger, say the use of AI in the military is inevitable, and that it’s critical to work directly with the military to make sure they get it right.

“If you take the catastrophic risks from AI seriously, the US government is an extremely important actor to engage with, and simply trying to block the US government from using AI is not a viable strategy,” Hubinger said in a November post to the online forum LessWrong. “It’s not enough to focus only on catastrophic risks; you also have to prevent any way that the government could misuse your models.”

