Automation and Control

Google Reveals Company AI Principles in Wake of Project Maven Exit

14 June 2018

Source: Drew Tarvin/CC BY 2.0

Google CEO Sundar Pichai announced the release of seven guiding principles for AI development at the company on June 7.

Pichai first announced plans to develop and release the principles earlier in June, when the company said it would not renew its controversial U.S. Department of Defense contract to develop AI for analyzing drone imagery. In late May, some 4,000 Google employees signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

According to Pichai’s blog post, AI at Google should:

1) Be socially beneficial

2) Avoid creating or reinforcing unfair bias

3) Be built and tested for safety

4) Be accountable to people

5) Incorporate privacy design principles

6) Uphold high standards of scientific excellence

7) Be made available for uses in accordance with the above six principles

The company also pledged not to develop AI applications that are likely to cause overall harm, that are intended for use in weapons or unlawful surveillance, or that contravene international law and human rights.

The principles address a wide range of recent controversies involving artificial intelligence. The most visible was Google’s planned exit from the U.S. Department of Defense’s Project Maven following widespread employee protests. Dozens of Google employees quit earlier in 2018 to protest the company’s involvement in military technology.

The second principle addresses research into search bias like that conducted by University of Southern California professor Safiya Noble. Noble’s book “Algorithms of Oppression,” released this year, argues that algorithms may exhibit biases inherent in their creators, and that these biases may affect marginalized groups.

While Pichai made clear that Google will not be involved in designing or deploying weaponized AI, the company will continue other military and government contracting.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Pichai said. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”

“These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”

To contact the author of this article, email jonathan.fuller@ieeeglobalspec.com

