Google Makes a Vague Pledge to Limit Work on Artificial Intelligence in Weapons, Surveillance
Following months of controversy over a joint artificial intelligence project with the Pentagon, Google said on Thursday that it would refuse to pursue any initiatives that are "likely to cause overall harm," including many kinds of weapons and surveillance.
The new principles follow months of debate inside Google over AI technology it had developed for the U.S. military for analyzing drone footage as part of what was known as Project Maven.
Thousands of Google employees signed a petition in April calling on CEO Sundar Pichai to cancel the partnership. The following month, dozens of workers resigned in protest from the company.
Under pressure, Google decided against renewing the contract, and Pichai vowed to clarify Google's policies.
"We recognize that such powerful technology raises equally powerful questions about its use," Pichai wrote in introducing seven principles "to guide" the company's future work.
The principles include aims such as safety, accountability, privacy, avoiding unfair bias, and being "socially beneficial." In addition, Pichai outlined four areas where Google will not develop or deploy AI.
Pichai said Google may work with the military in other areas, including cybersecurity, training, and veterans' healthcare. Beyond that, the memo's wording is vague enough to raise questions about how and when it will apply.
Only weapons whose "principal purpose" is causing injury will be avoided, but it's unclear which weapons that covers. Similarly, the "internationally accepted norms" Google says it will follow aren't specified, at a moment when the U.S. itself is rewriting many of those norms.
CNBC also noted that Pichai's vow to "work to limit potentially harmful or abusive applications" is less explicit than previous Google guidelines on AI. Google reportedly said the wording changed because the company can't control all uses of its AI technology.