3,100 Google employees, including dozens of senior engineers, have slammed the search giant's involvement in U.S. military projects such as Project Maven. In their statement, workers expressed their desire not to participate in building warfare technology, arguing that Google should not be in the business of war.
The New York Times reports that Google staff have raised concerns with senior management in offices worldwide, and that the petition to stop military involvement has circulated among the company's roughly 88,000 employees around the globe. Diane Greene, head of Google's cloud business, responded by assuring employees that Google's involvement would not extend to weaponry or drone operation.
The response, however, did not calm the petitioners, as previous projects involving Google had helped improve drone targeting. Workers also voiced their concerns in a letter laying out a variety of reasons to believe the search giant is putting users' trust at risk by “ignoring its moral and ethical responsibility”.
At a minimum, any company or any AI researcher considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking these questions. https://t.co/03LM59LNls
— EFF (@EFF) April 6, 2018
Googlers ask the company not to enter the business of war
The company's controversial involvement in Project Maven drew scrutiny as workers grew concerned about Defense Secretary Jim Mattis, who often cites the goal of increasing the lethality of the U.S. military.
Employees have petitioned the company to step back, based on the following concerns.
“We cannot outsource the moral responsibility of our technologies to third parties,” the letter says. “Google’s stated values make this clear: every one of our users is trusting us. Never jeopardize that. Ever. Building this technology to assist the US government in military surveillance – and potentially lethal outcomes – is not acceptable.”
Staff are concerned about Google using users’ data to recognize individuals, and about how freely the Pentagon could use the machine learning models the company develops.
— WIRED UK (@WiredUK) April 5, 2018
Google’s official response
Addressing the many worries raised by workers who suspect the company of putting users’ data at risk, a spokesperson said the AI is intended only to flag images for further review by professionals, and that Google’s effort is meant to help save lives in conflicts through highly advanced technology.
Greene also assured employees that the AI models are trained on unclassified data only and that Google has no offensive aims in its work on AI recognition software. She added, however, that Google is still developing policies for the proper use of AI technologies.
Source: The New York Times