Is AI a Nuclear-Weapon-Level Concern in the Google and DoD Partnership?

Undoubtedly, the ethical implications of adopting AI in the military are significant, with autonomous weapons a prime example of potential human rights abuses. That explains why Google's involvement with Project Maven triggered such a huge public backlash.
Google's CEO and the controversial project with the DoD
Courtesy: envzone

Without a shadow of a doubt, Google is renowned as one of the world's leading technology behemoths, offering a diverse range of products and services that span multiple industries. In addition to its core products like the search engine, advertising platform, and productivity tools, Google has also made significant investments in artificial intelligence (AI) and machine learning.

The behemoth has been at the forefront of developing new applications for AI, ranging from voice recognition software to self-driving cars. Nevertheless, Google’s involvement with the U.S. Department of Defense (DoD) has been a source of controversy, leading to a public debate about the ethical implications of using AI for military purposes. 

In particular, the company’s participation in Project Maven, a military initiative focused on deploying machine learning algorithms to analyze drone surveillance footage, sparked a conflict between Google and its employees, many of whom opposed its involvement in the project. 

As the AI-empowered landscape continues to evolve, it's essential to engage in ongoing conversations about the ethical implications of this technology. Whether you're a tech industry insider or simply interested in staying informed about the latest advancements in AI, keep reading to explore different perspectives on this controversial topic.

A.I. in the Military – The Villain Shaped by Google?

Before diving into the topic, let's take a quick glance at our G-star!

The giant’s dominance in the tech scene is reflected in its consistently growing revenue, with the company reporting over $279.8 billion in revenue in 2022. This impressive revenue growth can be attributed to a range of factors, including the widespread popularity of Google’s core products and services, such as its search engine and advertising platform. 

In fact, over 90% of internet users worldwide use Google’s search engine, giving the company an enormous user base and a significant influence over the way that people access and consume information online. 

Google’s AI and machine learning initiatives have also played a significant role in the company’s revenue growth. The development of AI-powered products and services has helped to diversify Google’s offerings and opened up new revenue streams in areas such as cloud computing and data analytics. 

For example, Google Cloud Platform offers a range of AI-powered services, including machine learning application programming interfaces (APIs), data analytics tools, and virtual agents, which are widely adopted by businesses and developers to build and deploy AI-driven applications. 
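For a sense of what adopting one of these hosted services looks like in practice, here is a minimal sketch of calling a Cloud machine-learning API from Python. It assumes the google-cloud-vision client library is installed and that credentials are already configured; the image path is purely illustrative and not taken from the article.

```python
# Minimal sketch: label detection with the Google Cloud Vision API.
# Assumes the google-cloud-vision package is installed and the
# GOOGLE_APPLICATION_CREDENTIALS variable points at a service-account key.
from google.cloud import vision


def label_image(path: str) -> None:
    """Send one local image to the hosted API and print the returned labels."""
    client = vision.ImageAnnotatorClient()

    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")


if __name__ == "__main__":
    label_image("example.jpg")  # illustrative file name, not from the article
```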

Additionally, its AI-powered products such as Google Assistant, Google Translate, and Google Photos have gained widespread popularity among users, helping to establish Google as a leader in the development and application of AI technology.

Given Google’s impressive “scores” in the tough AI “game”, the company’s 2017 involvement in Project Maven, a military initiative focused on using machine learning algorithms to analyze drone surveillance footage, drew significant criticism and raised concerns about the ethical implications of leveraging AI for military purposes.

In July 2017, the US Department of Defense’s Project Maven began deploying Google’s TensorFlow AI systems to analyze drone footage using machine learning and AI algorithms. The primary aim of the project was to have AI analyze the video footage, identify objects of interest, and flag them for human review.

US Air Force drone in action
Courtesy: DoD

Drew Cukor, chief of the DoD’s Algorithmic Warfare Cross-Function Team, said in July: “People and computers will work symbiotically to increase the ability of weapon systems to detect objects. Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now. That’s our goal.”
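As a rough illustration of that pattern — run a detector over each video frame and queue confident detections for a human analyst — the sketch below uses a publicly available TensorFlow Hub detection model. The model handle, confidence threshold, and frame-reading helper are illustrative assumptions, not details of the actual Maven system.

```python
# Illustrative sketch only: frame-by-frame object detection with a public
# TensorFlow Hub model, flagging confident detections for human review.
# This is NOT Project Maven's code; model, threshold, and input are assumptions.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load a public SSD MobileNet v2 detector from TensorFlow Hub.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")


def flag_objects(frame: np.ndarray, min_score: float = 0.5):
    """Return (box, score, class_id) tuples confident enough for human review."""
    # The model expects a uint8 batch of shape [1, height, width, 3].
    batch = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    result = detector(batch)
    boxes = result["detection_boxes"][0].numpy()
    scores = result["detection_scores"][0].numpy()
    classes = result["detection_classes"][0].numpy().astype(int)
    return [
        (box, score, cls)
        for box, score, cls in zip(boxes, scores, classes)
        if score >= min_score  # only confident hits reach the review queue
    ]


# Hypothetical usage: loop over decoded frames and queue flagged detections.
# for frame in read_video_frames("footage.mp4"):   # read_video_frames is assumed
#     review_queue.extend(flag_objects(frame))
```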

Project Maven is part of the DoD’s $7.4 billion investment in AI and data processing, and the Pentagon has partnered with academics and AI experts to support the project. The initiative has already been put into use against the Islamic State, reportedly producing significant results in the detection of objects and potential targets in drone footage.

However, as previously mentioned, Google's decision to assist Project Maven sparked significant internal debate within the company, with many employees expressing concerns about the ethical implications of using AI in military operations. These concerns, raised by both employees and external groups, were multifaceted in nature.

One major concern was that the project could lead to the development of autonomous weapons, which would be able to identify and engage targets without human input. This was seen as a significant risk because autonomous weapons could potentially violate international humanitarian law and lead to the loss of civilian life.

A letter signed by a group of AI researchers, which was presented at the International Joint Conference on Artificial Intelligence in Melbourne, Australia, stated that “autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.”

Some argued that Google’s participation in the project was at odds with the company’s own principles of “Do the right thing” and “Don’t be evil”, which had been established as guiding values for the company.

A study by the International Committee for Robot Arms Control found that the development of autonomous weapons could potentially violate international humanitarian law and lead to the loss of civilian life. The study argued that “autonomous weapons have the capacity to make choices about the use of lethal force with minimal human intervention, raising ethical and legal questions about their development and use.”

In addition to the risk of developing autonomous weapons, there were concerns that the technology developed for Project Maven could be used to target individuals and carry out military operations without human oversight. 

In a United Nations report, the Special Rapporteur stated that “the use of armed drones to carry out targeted killings presents a grave challenge to the right to life…[and] also raises fundamental questions about the protection of human rights in the context of new technology.”

This was seen as particularly problematic because it could lead to a lack of accountability for military actions and could potentially violate individuals’ human rights. 

Google engineers work on projects
Courtesy: Google

The potential for the technology to be used in this way raised questions about the role of technology companies in shaping the development and use of AI, and sparked a broader conversation about the need for ethical guidelines and regulations around the use of these technologies.

The Wave Against the Cloak Is Getting Stronger

Fei-Fei Li was a prominent voice in the debate over Google’s involvement in Project Maven. As the former chief scientist for AI at Google Cloud, she played a key role in shaping the company’s approach to AI and had significant influence within the organization.

The rock star scientist expressed concerns that Google’s involvement in Project Maven could lead to a negative public perception of the company and could harm its reputation as a leader in the AI industry. 

In an internal email obtained by The New York Times, Li wrote that “the technology we create can be weaponized and used to harm innocent people, or for unethical purposes,” and that “we absolutely need to be careful about what we are building and how it is being used.”

Li’s concerns were shared by many other Google employees. The controversy ultimately led to thousands of employees signing a letter calling on Google to withdraw from the project. 

In April 2018, Fei-Fei Li and more than 4,000 Google employees signed a letter protesting the company’s involvement in Project Maven, arguing that it violated Google’s own ethical standards for AI development.

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter stated. “We join with numerous academic researchers who oppose this deal, including AI experts, relevant to this technology, who question Google’s moral and ethical responsibility in this regard.”

The letter also called for Google to establish a clear policy stating that the company will not develop technologies for warfare or surveillance that violate internationally accepted norms. 

“We cannot outsource the moral responsibility of our technologies to third parties. Google’s stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever. This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US government in military surveillance–and potentially lethal outcomes–is not acceptable.”

DoD staff collaborate in a project
Courtesy: DoD

In response to the internal and external backlash, Google initially defended its involvement in Project Maven, stating that the company’s work on the project was focused on “non-offensive uses of AI.” 

“We believe that Google should be a helpful and responsible partner in the use of technology and that the work we are doing on Project Maven is important and positive,” Google CEO Sundar Pichai stated.

Despite these reassurances, the company faced continued pressure from employees and outside groups, who argued that the development of AI technology for military purposes carried significant ethical implications and risks.

The controversy ultimately led to a number of high-profile resignations, with several Google employees citing the company’s involvement in Project Maven as a key factor in their decision to leave.

One of the most high-profile resignations was that of Google employee Meredith Whittaker, who had been involved in the company’s AI ethics research efforts.

In a statement on Twitter announcing her departure, Whittaker stated that “the company has been pursuing partnerships with the military, and directly profiting from proliferation of global military and border policing technology…I can no longer in good conscience work at a company that is profiting off of human rights abuses.”

Other Google employees who resigned in protest of the company’s involvement in Project Maven included Laura Nolan, a software engineer, and Liz Fong-Jones, a site reliability engineer.

The On-Time Take Eases the Righteous Clan

Amid its identity crisis and mounting pressure from both employees and outside groups, Google announced in June 2018 that it would not renew its contract with the Department of Defense for Project Maven. The decision came after a series of internal discussions and debates over the ethical implications of developing AI technology for military applications.

According to a New York Times article published on June 1, 2018, “the decision [to end involvement in Project Maven] followed a rebellion by thousands of employees who signed a letter asking Google to cancel the Maven contract and institute a policy against working with military and intelligence services in the development of artificial intelligence.”

Google Cloud boss speaks at a conference
Courtesy: Press associate

Diane Greene, then-CEO of Google Cloud, confirmed that the company would not renew the contract after it expired in 2019, stating that “we are not going to be working on any AI that has offensive military uses.”

In response to the controversy, Google announced that it would develop a set of ethical principles to guide its work on AI. 

Google’s CEO Sundar Pichai stated: “We recognize that such powerful technology raises equally powerful questions about its use. As a leader in AI, we feel a deep responsibility to get this right.” 

He also emphasized the importance of transparency and accountability in the development and use of AI. He stated that the company would “enhance our AI principles, develop new tools and resources, and engage with experts in academia, civil society, and the technology industry to continue improving Google’s work in this space.”

Google’s ethical principles for AI, which were published in June 2018, covered seven core tenets, including a commitment to “be socially beneficial”, “avoid creating or reinforcing unfair bias”, and “be accountable to people.” The principles also stated that Google would not develop AI for use in weapons or other technologies designed to cause harm.

However, Google's follow-up effort at outside oversight quickly faltered. The Advanced Technology External Advisory Council (ATEAC), an external board created in 2019 to help guide work under those principles, was dissolved just over a week after it was announced, following criticism from employees and outside groups over the inclusion of Kay Coles James, president of the conservative think tank The Heritage Foundation, who had expressed controversial views on race and gender.

According to The Verge, Google employees expressed concerns about James’ appointment, citing her views on climate change, immigration, and LGBTQ rights, as well as her leadership of The Heritage Foundation. Some employees argued that the appointment was inconsistent with Google’s stated commitment to diversity and inclusion.

Google engineers collaborate on projects
Courtesy: Google

The Verge also reported that more than 1,000 Google employees signed a petition calling for James’ removal from the board, citing her “vocally anti-trans, anti-LGBTQ, and anti-immigrant” views.

“The appointment of James was not adequately vetted, and we, as Google employees, need to know why. We oppose this appointment and call on Google to remove James from the ATEAC [Advanced Technology External Advisory Council] immediately,” read an internal letter circulated by a group of Google employees calling for James’ removal from the board.

The Cohabitation of AI and Human Rights in the Military House

The controversy over Google’s involvement in Project Maven also attracted attention from prominent figures in the technology industry and beyond. 

Elon Musk, the eminent CEO of SpaceX and Tesla, was among those who expressed concerns about the project, tweeting in May 2018 that “AI for death is a budget priority for the Pentagon.” Musk has long been an advocate for careful consideration of the risks and benefits of AI technology, and his comments on Project Maven highlighted the potential implications of developing AI for military applications.

Other prominent voices, such as the philosopher Nick Bostrom, also weighed in on the controversy. Bostrom wrote that “the most obvious hazard of military AI is that it may not work as intended.” He pointed out that even the best-designed systems can fail, and that the use of AI in military contexts could increase the risk of unintended consequences or accidents.

Bostrom also argued that the development of AI for military purposes should be subject to careful scrutiny and ethical guidelines, stating that “AI should be developed within an ethical framework that takes into account its potential effects on human lives and society.”

The controversy also attracted the attention of politicians and government officials. In June 2018, a group of US lawmakers sent a letter to Google CEO Sundar Pichai expressing concern about the company’s involvement in Project Maven and calling on the company to explain its plans for developing AI technology for military purposes. The letter also called for greater transparency and accountability in the development of AI technologies.

Lawmaker in a debate with Google
Courtesy: Press associate

One of the lawmakers who signed the letter, Rep. Jackie Speier (D-CA), stated that “Google’s work on Project Maven…is deeply troubling and risks implicating [the company] in the questionable practice of targeted killings.” Speier argued that the development of AI technology for military purposes should be subject to greater oversight and regulation.

The letter from lawmakers followed a similar letter from a group of international AI and robotics experts, who called on Google to withdraw from Project Maven and urged governments to establish a global ban on the development of autonomous weapons.

While Google’s identity crisis over Project Maven appears to have ended with the company’s decision not to renew its contract with the DoD, the debate over the ethics of developing AI technology for military applications continues within the technology industry and among policymakers.

One example of ongoing discussions around the ethics of AI technology in military applications is the establishment of the Global Partnership on Artificial Intelligence (GPAI), a group of governments and organizations that have come together to address the responsible development and use of AI. The GPAI includes members such as the United States, Canada, France, Japan, and the European Union, and has identified topics such as military uses of AI as an area of focus for its work.

Within the technology industry, companies such as Microsoft and Amazon have also faced criticism and pressure from employees and outside groups over their involvement in military AI projects.

In particular, in early 2019, a group of Microsoft employees wrote an open letter to the company’s CEO, Satya Nadella, calling on the company to cancel its contract to supply the US Army with augmented-reality headsets intended for use in combat:

“We are a global coalition of Microsoft workers, and we refuse to create technology for warfare and oppression. We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools we built. We did not sign up to develop weapons, and we demand a say in how our work is used.”

Bottom Lines

The controversy over Project Maven highlights the complex ethical questions raised by the development of AI technology and the need for ongoing discussion and debate among stakeholders to ensure that its benefits are realized in a way that is both responsible and accountable.

It serves as a reminder of the need for careful consideration of the potential risks and benefits of AI technology, as well as the importance of engaging in open and transparent dialogue about the ethical implications of its development and use.

By prioritizing ethical considerations and engaging in open discussions about the responsible use of AI, we can work to ensure that the benefits of this transformative technology are realized in a way that is socially beneficial and respectful of human rights.

  • About: Katie Le
    Joining EnvZone as a Section Editor and Analyst, Katie Le manages her section’s content production, from identifying and assigning content ideas up to the publication stage. Katie Le has been…