Google to continue working with military, but will avoid weapons and surveillance
- Google has published a blog post outlining a new set of AI principles.
- Google released the guidelines soon after it said it would stop working with the military on the controversial Project Maven.
- The company says that while it won’t design AI for use in technology that causes harm, it will still work with the military on other projects.
Google has unveiled a set of principles that will guide its work on AI. In the guidelines, the company promised that it will not design AI for use in weapons, surveillance, or technology that causes harm. It did say, however, that it will continue its government and military work in other areas.
The guidelines, titled “AI at Google: our principles,” are made up of seven AI objectives and a section on AI applications the company will not pursue. In the post, Google also acknowledged the “significant impact” AI will have on society and said it feels a “deep responsibility” to get AI right.
The guidelines come soon after the controversy surrounding Google’s work on Project Maven, a military contract to provide AI for analyzing drone footage. That work led over 3,000 Google employees to sign a letter asking the company to end the project, and at least 12 Googlers later quit over it.
While Google always said this work was not for use in weapons, the project may have run afoul of the new restrictions, as Google said it will not continue with Project Maven after its current contract ends. Military applications Google will still pursue include those in “cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”
While the principles were released soon after the protests over Project Maven, they also address a broader range of concerns people have about AI. Google’s seven AI objectives are to:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Google CEO Sundar Pichai wrote the post, and he comes across as both cautious and optimistic about the uses of AI. Earlier in the year, he compared the technology to fire: something we can harness for the benefit of humanity.
You can see the full set of AI principles released by Google over at its official blog.