


OpenAI gets closer to the Pentagon thanks to a partnership with a defense startup

OpenAI has concluded its first major defense partnership, a deal that could see the AI giant’s technology enter the Pentagon.

The recently announced joint venture is with Anduril Industries, a billion-dollar defense startup founded by Oculus VR co-founder Palmer Luckey that sells guard towers, communications jammers, military drones, and autonomous submarines. The “strategic partnership” will integrate OpenAI’s AI models into Anduril systems to “rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness.” Anduril already supplies anti-drone technologies to the U.S. government. It was recently selected to develop and test unmanned fighter aircraft, and it won a $100 million contract with the Pentagon’s Chief Digital and Artificial Intelligence Office.


OpenAI clarified to The Washington Post that the partnership will cover only systems that “defend against unmanned aerial threats” (read: detect and shoot down drones), notably avoiding any explicit association of its technology with military applications that cause human casualties. OpenAI and Anduril say the partnership will keep the U.S. on par with China’s AI advancements, a goal repeatedly echoed in the U.S. government’s “Manhattan Project”-style investments in AI and “government efficiency.”


“OpenAI is developing AI to benefit the masses and supports U.S.-led efforts to ensure the technology respects democratic values,” wrote Sam Altman, CEO of OpenAI. “Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel and help the national security community understand and use this technology responsibly to keep our citizens safe and free.”

In January, OpenAI quietly removed policy language that banned applications of its technology posing a high risk of physical harm, including “military and warfare.” An OpenAI spokesperson told Mashable at the time: “Our policy does not allow our tools to be used to harm people, develop weapons, monitor communications, or injure others or destroy property. However, there are national security use cases that align with our mission. For example, we already work with DARPA to spur the creation of new cybersecurity tools to secure the open source software that critical infrastructure and industry depend on. It was unclear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies.”

Over the past year, the company has reportedly offered its services in various capacities to the U.S. military and national security offices, with help from a former security executive at Palantir, the software company and government contractor. And OpenAI isn’t the only AI company looking toward military applications. Claude maker Anthropic and Palantir recently announced a partnership with Amazon Web Services to sell Anthropic’s AI models to defense and intelligence agencies, billed as “decision advantage” tools for “classified environments.”

Recent rumors suggest President-elect Donald Trump is considering Palantir CTO Shyam Sankar for the Pentagon’s top engineering and research position. Sankar has previously criticized the Defense Department’s technology acquisition process, arguing that the government should rely less on large defense contractors and buy more “off-the-shelf” technology.