Headline: More than 580 Google staff urge CEO to block Pentagon use of the company’s AI, citing risks of lethal autonomous weapons

More than 580 Google employees — including over 18 senior staffers, from principal engineers to VPs — have signed an open letter asking CEO Sundar Pichai to stop the Pentagon from using Google’s artificial intelligence technologies for military purposes. The group is calling for an immediate moratorium on deploying Google’s AI in defense applications, arguing the company’s cloud and machine-learning tools could be used to power lethal autonomous weapons systems. “We want to see AI benefit humanity; not to see it being used in inhumane or extremely harmful ways,” the letter reads. Signatories also demand greater transparency about existing Pentagon contracts and want Google to create a permanent ethics board with employee representation to vet future military partnerships. The employees point to Google’s 2018 exit from Project Maven — the Pentagon program that used AI to analyze drone footage — as precedent for rejecting defense work on ethical grounds.

This internal revolt comes amid a broader industry showdown over the role of advanced AI in national security. Two months ago, the Department of Defense severed ties with Anthropic after the startup refused to lift contractual limits on domestic surveillance and prohibitions on fully autonomous weapons. U.S. officials then labeled Anthropic a supply-chain risk, and the White House directed federal agencies to phase out its tools; a federal judge temporarily blocked that ban in March. Despite the government’s tensions with Anthropic, reports indicate the NSA was granted access to Anthropic’s restricted Mythos Preview model for select researchers and cybersecurity teams.
President Donald Trump has since suggested relations may be improving, saying Anthropic is “shaping up.” Pentagon leaders have been explicit that AI will play a central role in future combat operations, with senior military officials calling autonomous systems a core element of modern warfare strategy.

The debate over whether and how commercial AI can safely be integrated into military systems is also playing out across the tech sector: Anthropic has faced scrutiny after alleged links between its tools and strikes in Iran, while OpenAI has defended its own partnerships with the Pentagon despite user concerns that safety guardrails could be compromised.

Why this matters to the tech and crypto communities: the episode highlights growing tensions between engineering ethics, commercial opportunity, and national-security priorities. For cloud providers and AI startups, decisions about military contracts could shape talent retention, public trust, and regulatory scrutiny. For communities focused on decentralization, governance, and transparency, the demands for employee representation and clearer contract disclosures underscore a wider push for accountable tech policy as AI systems become more powerful and more entwined with state power.

