Human Rights Watch has voiced strong concerns after Google’s parent company, Alphabet, lifted its longstanding ban on the use of artificial intelligence (AI) for developing weapons and surveillance technologies. The company has revised its internal guidelines on AI, removing a key section that had previously prohibited applications likely to cause harm.
Human Rights Watch criticized the move, warning that AI's use in military and surveillance contexts could undermine accountability in life-or-death decisions. The organization cautioned that AI could blur responsibility for battlefield actions, making it difficult to trace and assign blame for autonomous decisions in high-stakes situations.
In a blog post, Alphabet defended the decision, emphasizing that businesses and democratic governments should collaborate on AI technologies that support national security. The company stated that the rapid advancement of AI required an update to its original principles, published in 2018, to reflect the evolving nature of the technology.
Anna Bacciarelli, senior AI researcher at Human Rights Watch, called the shift “incredibly concerning,” highlighting the risk of AI being used in autonomous weapons systems. Bacciarelli warned that the decision signaled a troubling departure from the company’s previous stance and underscored the need for enforceable regulation and binding laws, rather than relying on voluntary corporate principles.
Alphabet’s senior vice president, James Manyika, and the head of Google DeepMind, Sir Demis Hassabis, co-authored the blog post, explaining that democracies must lead AI development based on core values such as freedom, equality, and respect for human rights. The company stated that it believes in collaboration between like-minded governments, companies, and organizations to create AI that enhances national security while protecting individuals.
As AI technology becomes more advanced, there is growing concern about its military applications. AI has the potential to revolutionize defense strategies, with experts warning that autonomous weapons systems could be capable of making life-or-death decisions without human intervention. Countries such as Russia and the United States are reportedly incorporating AI into their military operations, raising alarms about the ethical implications of such systems.
The debate over AI’s role in warfare was highlighted recently when MPs in the UK noted that AI is increasingly seen as a strategic advantage on the battlefield, particularly in conflicts such as the war in Ukraine. However, concerns persist about the unregulated use of AI-powered weaponry that could result in indiscriminate killings and violations of international law.
The policy shift comes amid Alphabet’s end-of-year financial report, which showed weaker-than-expected results. Despite a roughly 10% increase in revenue from digital advertising, the company’s share price took a hit. Alphabet also announced a significant increase in spending on AI projects, with plans to invest $75 billion in the technology this year, around 29% more than analysts had expected.
As global tensions rise over AI’s military potential, critics argue that Alphabet’s decision to abandon its ethical guidelines could set a dangerous precedent for the future of AI technology.