Pentagon inks deal with Google for AI services

The Pentagon and Google have reached an agreement for the Defense Department to use the tech company's powerful Gemini AI systems on classified networks, according to a U.S. official familiar with the deal.

The official spoke on the condition of anonymity because they were not authorized to disclose details of the deal. The exact terms of the new contract remain unclear.

The deal follows similar agreements with other leading AI companies, including OpenAI and xAI. Defense Secretary Pete Hegseth has made adopting AI a top priority for the armed forces, vowing to transform the military into "an AI-first warfighting force."

A Google spokesperson did not answer specific questions about the deal, which was first reported by technology news outlet The Information.

"We are proud to be part of a broad consortium of leading AI labs and technology and cloud companies providing AI services and infrastructure in support of national security," Google spokesperson Kate Dreyer said in an email to Jattvibe News. "We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight."

The Defense Department has embraced AI over the past decade, using automated systems for everything from analyzing drone footage in the fight against the Islamic State group to streamlining logistics and eliminating pay discrepancies for soldiers. It is currently using AI to analyze intelligence and provide targeting support in the war with Iran.

Michael Horowitz, a former senior defense official and current professor at the University of Pennsylvania, said the deal "to use Google's AI models for classified purposes illustrates the growing importance of AI for U.S. national security." However, Horowitz noted that Google's AI systems were already being used on unclassified systems, so "it's not surprising that they came to an agreement on classified uses."

Over the past few months, the Pentagon has sought to negotiate new contracts with America's four largest AI companies to include language allowing "any lawful use" of their AI systems. The Pentagon announced the initial, exploratory contracts with Google, OpenAI, Anthropic and xAI in July.

Those moves have sparked some controversy, most notably with Anthropic. The company, led by CEO Dario Amodei, sought stronger guarantees from the Pentagon that it would not use the company's AI models for domestic mass surveillance or direct control of lethal autonomous weapons. It's not clear whether Google sought any such guarantees. The U.S. official who spoke with Jattvibe News said the Google deal covered lawful use by the Defense Department.

While Google avoided a public spat with the Pentagon, it has faced some pushback from its own employees. On Monday, Bloomberg News reported that around 600 Google workers sent a letter to CEO Sundar Pichai urging him to refuse new AI partnerships with the Pentagon.

It's not Google's first time dealing with employee unrest over its work with the military. In 2018, thousands of Google employees protested the company's role in a secretive Pentagon program called Project Maven. Operated in partnership with data analytics company Palantir, Maven remains one of the Defense Department's leading AI programs. Google decided not to renew the Project Maven contract in the wake of the employee opposition. Pichai said at the time that the company would not pursue any AI application "for surveillance violating internationally accepted norms" or for weapons whose main goal "is to cause or directly facilitate injury to people."

The government's use of AI for domestic surveillance and for direct control of lethal autonomous weapons has been an area of particular focus both within the AI industry and among civil society groups, though that scrutiny has done little to slow the government's adoption or tech giants' moves to sign deals.

Those concerns became a public controversy earlier this year. In a late February blog post outlining these two red lines, Amodei, the Anthropic CEO, wrote that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do."

After setting an ultimatum for Anthropic to comply with the Pentagon's demand that it enable AI use for any lawful purpose, which could exceed Anthropic's accepted scope of use, Hegseth declared Anthropic a "supply-chain risk to national security," a designation usually reserved for foreign adversaries. The Defense Department has said it will look to scale back its use of Anthropic's models in the coming months.

President Donald Trump also announced in late February that he would ban all federal agencies from using Anthropic's products, calling Anthropic a group of "Leftwing nut jobs." Anthropic is suing the Defense Department and the relevant federal agencies to undo the orders. The case is split between California, where a judge ordered a preliminary halt to the offloading of Anthropic systems, and Washington, D.C., where the court declined to issue a similar injunction.

Shortly after Anthropic was labeled a threat to national security, OpenAI announced that it had struck a similar deal with the Pentagon to bring its AI models to the Defense Department's classified networks. However, the announcement was met with public outcry over a perceived lack of guardrails on the Pentagon's potential use of OpenAI's systems, particularly regarding surveillance of Americans.

As a result, OpenAI and CEO Sam Altman reworked the agreement's language just days later, with the updated deal specifying that any service from OpenAI "shall not be intentionally used for domestic surveillance of U.S. persons and nationals."

Brian McGrail, senior counsel at the Center for AI Safety, said in March that intelligence and national security agencies often take very liberal interpretations of contract provisions about surveillance. Because these contracts remain private, McGrail said, it is often difficult to judge the robustness of prohibitions on domestic surveillance.

©2025 – All Rights Reserved. Designed and Developed by JATTVIBE.