Google staff push back on “black box” military AI deals

Hundreds of Google employees are again forcing the company to confront where it draws the line between commercial AI and the business of war. In a new open letter to CEO Sundar Pichai, more than 580 workers from Google Cloud and DeepMind – including over 20 directors and VPs – urge the company to reject proposed Pentagon contracts that would deploy its Gemini models on air‑gapped classified networks. Once Google's systems run in secret, they argue, the company has no way to see whether they are being used for autonomous weapons or mass surveillance.

According to the organisers, the letter has garnered more than 580 signatures (other sources put the figure at "around 600"); roughly two-thirds of the signatories agreed to release their names, while the remaining third wished to stay anonymous.

What specifically are the employees demanding?

In the letter, delivered to Pichai on Monday, the employees write that they are troubled by ongoing negotiations between $GOOG and the U.S. Department of Defense over deploying Gemini models in classified projects. As AI experts, they warn that these systems "centralize power and make mistakes" and that their proximity to the technology gives them a responsibility to call out its most dangerous and unethical uses.

A key passage in the letter, according to published quotes, says that "the only way to ensure that Google is not associated with such harms is to reject any classified workloads" for the military. The employees warn that AI could otherwise be used in autonomous weapons or mass surveillance without their knowledge or ability to intervene.

The authors also invoke reputational risk: a bad decision at this point, they say, could "irreversibly damage Google's reputation, its business and its role in the world". The company has not yet commented publicly on the letter, but according to media reports it has been weighing a Pentagon deal for several weeks – one that would capitalize on the department's dispute with Anthropic.

Context: the Pentagon's dispute with Anthropic and the "open space" for other AI firms

Tensions over the use of AI in the military have escalated after the US Department of Defense branded Anthropic and its Claude model a "supply chain risk" and began pushing the firm out of military projects. The designation meant that the Pentagon and its contractors were not allowed to use Claude in military contracts, and Anthropic is fighting back in the courts, arguing that it is retaliation for refusing to allow the use of AI "for all lawful purposes", including fully autonomous weapons.

So far, a court has temporarily blocked the department from taking some of these steps, but Anthropic is for now effectively shut out of defence contracts – which creates space for other big players, notably Microsoft $MSFT, OpenAI and Google. According to The Information, the Pentagon is in talks with Google to deploy its Gemini models in a classified environment, which would partially fill the hole left by Anthropic.

Now Google's staff are responding: in the letter, they explicitly say they "do not want to fill the void left by Anthropic" if that means providing AI for questionable military purposes. According to Gizmodo, the signatories include people from DeepMind, Google Cloud and other teams – i.e., those working directly on large-scale models and infrastructure.

Flashback to Project Maven in 2018

The current petition echoes an earlier, highly sensitive chapter in Google's history: the 2018 Project Maven protest, when thousands of employees opposed a collaboration with the Pentagon to use machine learning to analyze drone footage and improve strike targeting – and some left the company over it.

The 2018 letter asked Google to "not be in the business of war" and to adopt a clear policy against developing technology for combat operations. After strong internal criticism, the company did not renew the Maven contract and promised stricter AI ethics principles – a tradition today's signatories invoke directly.

The current letter thus tests how serious Google is about these principles in the era of generative AI, where models can be deployed much more broadly - from intelligence processing and operations planning to potentially autonomous weapons systems.

What's in play for Google

From a business perspective, it's a delicate balance between the lucrative but reputationally risky "defense AI" segment and the image of a company that wants to operate as a "responsible" player in AI. Military and government contracts in AI can bring long-term contracts and stable revenues, but they also open the company to harsh criticism from employees, the public and some foreign partners.

Repeated internal revolts can:

  • complicate the recruitment of top researchers, who often choose employers partly on the ethics of their projects;

  • push management to turn down some lucrative contracts, thereby "ceding" part of the market to competitors;

  • set a precedent: once management relents, employees will reach for similar tools (open letters, petitions) more often.

For Alphabet shareholders, two parallel storylines therefore matter: how the Pentagon–Anthropic dispute develops (i.e., how big the overall "window of opportunity" in defense AI really is) and what conclusions Pichai and his management draw from the current internal pressure. The combination of regulation, reputation and internal culture may matter as much to the company's value as the technology itself.

