Google has pledged not to use artificial intelligence in applications related to weapons or surveillance, part of a new set of principles designed to govern how it uses AI.
The principles, released by Google CEO Sundar Pichai, commit the company to building AI applications that are "socially beneficial," that avoid creating or reinforcing bias, and that are accountable to people.
The announcement follows Google's reported decision not to renew a Pentagon contract in which its AI technology helped analyze drone footage.
Google also recently announced Duplex, a human-sounding digital concierge that booked appointments with human receptionists in a May demonstration.
Some ethicists raised concerns that call recipients could be duped into thinking the robot was human. Google has said Duplex will identify itself on calls so that recipients know they are not speaking to a person.