Google blocks the use of its artificial intelligence technology in weapons.
The restriction could help Google management quell months of protests from thousands of employees against the company's work with the U.S. military to identify objects in videos produced by drones.
(Reuters) - Google will not allow its artificial intelligence software to be used in weapons or in unjustifiable surveillance efforts, the company said Thursday.
Google will pursue other government contracts, including cybersecurity, military recruitment, and search and rescue, said CEO Sundar Pichai.
"We want to make it clear that although we are not developing artificial intelligence for use in weaponry, we will continue our work with governments and the military in many other areas," the executive said.
Falling costs and improving performance of powerful computers have begun to move artificial intelligence out of research labs and into sectors such as the military and healthcare. Google and its major rivals have become leaders in sales of artificial intelligence tools.
But the potential of this technology to pinpoint drone strikes better than military experts can, or to identify dissidents from large datasets of online communications, has raised concerns among academics who study ethics and among Google's own employees.
“Taking a clear stance against the use of these technologies in weapons” will help Google demonstrate “its commitment to preserving the trust of its international customer and user base,” Lucy Suchman, a sociology professor at Lancaster University in England, told Reuters.
Google stated that it will not pursue the development of artificial intelligence applications that could cause physical harm, that are linked to surveillance "violating internationally accepted norms that protect human rights," or that present greater "material risks" than benefits.
The principles also stipulate that the company's employees and customers "avoid unfair impacts on people," particularly impacts related to race, gender, religion, sexual orientation, or politics.
Pichai stated that Google reserves the right to block applications that violate these principles.
By Paresh Dave