Microsoft Backs Anthropic in Pentagon Fallout Despite Heated Rivalry

Microsoft is taking Anthropic’s side in its fight to overturn the Pentagon’s “supply-chain risk” designation.

Late last month, the Pentagon dropped all contracts with Anthropic and labeled the AI company a “supply-chain risk” after the latter refused to drop safeguards against mass surveillance and completely autonomous weapons. In response, Anthropic filed two lawsuits against the Department of Defense last week.

In an amicus brief filed on Tuesday in support of those lawsuits, Microsoft asked the federal court to issue a temporary block to the DoD designation until the case is decided.

“A temporary restraining order will permit the parties to pursue a negotiated resolution that will better serve all involved and avoid wide-ranging negative business impacts,” Microsoft wrote in the filing.

The designation requires all companies working with the Pentagon to drop Anthropic’s models from work for the department, effective immediately, even though federal agencies themselves were given a six-month phase-out period. Microsoft, which is both a long-time government contractor and an investor in Anthropic, warned that decoupling on that timeline will be difficult.

“A temporary restraining order will enable a more orderly transition and avoid disrupting the American military’s ongoing use of advanced AI,” Microsoft wrote in the filing. “Otherwise, Microsoft and other technology companies must act immediately to alter existing product and contract configurations used by DoW. This could potentially hamper U.S. warfighters at a critical point in time.” (DoW is in reference to the Department of War, the Trump administration’s preferred renaming of the Department of Defense.)

AI has played a central role in increasing the speed and scale of the U.S. attack on Iran. The scale of the attacks has been massive, “double” that of the initial “shock and awe” phase of the 2003 invasion of Iraq, according to military experts. In just the first 24 hours, the U.S. hit roughly a thousand targets, per Bloomberg. One of these targets was allegedly an elementary school in southern Iran.

Anthropic is leaning into its new pro-humanity stance: the company announced on Wednesday that it is launching an internal think tank to research the large-scale implications and dangers AI may pose to areas like the economy and public safety. According to The Verge, as part of the changes, cofounder Jack Clark will lead the think tank under his new title, “head of public benefit.”

But Anthropic did willingly lend Claude’s services to the U.S. military at one point. The technology was reportedly used in the capture of Venezuelan President Nicolas Maduro, and is allegedly still in use by the military in Iran. The Pentagon’s Central Command still uses Claude in some capacity, according to the Wall Street Journal.

The DoD has nothing to worry about with its Anthropic breakup, though, because OpenAI was quick to fill the space left behind by Anthropic, much to the dismay of ChatGPT users and some company employees.

The government-wide transition to OpenAI is already underway: the State Department has reportedly shifted its internal chatbot from Anthropic’s Claude Sonnet 4.5 to OpenAI’s GPT-4.1.

Even though OpenAI once banned AI from being used for military purposes, WIRED reported last week that the Pentagon had been testing OpenAI’s models through a Microsoft Azure workaround as far back as 2023.

Microsoft’s amicus brief comes on the heels of yet another brief in support of Anthropic, this time signed by 37 employees at both Google and OpenAI.
