Google bans development of artificial intelligence that could be used for weapons, CEO says

The technology company has responded to criticism and employee resignations over a contract with the US Defence Department, which critics argued pushed Google closer to the 'business of war'

Drew Harwell
Friday 08 June 2018 19:05 BST
Google CEO Sundar Pichai announced new ethical guidelines at the company's I/O conference in California (AP)

Google is banning the development of artificial intelligence (AI) software that can be used in weapons, chief executive Sundar Pichai said, setting strict new ethical guidelines for how the tech giant should conduct business in an age of increasingly powerful AI.

The new rules could set the tone for the deployment of AI far beyond Google, as rivals in Silicon Valley and around the world compete for supremacy in self-driving cars, automated assistants, robotics, military AI and other industries.

"We recognise that such powerful technology raises equally powerful questions about its use," Mr Mr Pichai wrote in a blog post. "As a leader in AI, we feel a special responsibility to get this right."

The ethical principles are a response to a firestorm of employee resignations and public criticism over a Google contract with the US Defence Department for software that could help analyse drone video, which critics argued had nudged the company one step closer to the "business of war." Google executives said last week that they would not renew the deal for the military's AI endeavour, known as Project Maven, when it expires next year.

Google, Mr Pichai said, will not pursue the development of AI when it could be used to break international law, cause overall harm or surveil people in violation of "internationally accepted norms of human rights."

The company will, however, continue to work with governments and the military in cybersecurity, training, veterans' health care, search and rescue, and military recruitment, Mr Pichai said. The Web giant - famous for its past "Don't be evil" mantra - is in the running for two multibillion-dollar US Defence Department contracts for office and cloud services.

Google's $800bn parent company, Alphabet, is considered one of the world's leading authorities on AI and employs some of the field's top talent, including at its London-based subsidiary DeepMind.

But the company is locked in a fierce competition for researchers, engineers and technologies with Chinese AI firms and domestic competitors, such as Facebook and Amazon, which could contend for the kinds of lucrative contracts Google says it will give up.


The principles offer limited detail on how the company would seek to follow its rules. But Mr Pichai outlined seven core tenets for its AI applications, including that they be socially beneficial, be built and tested for safety, and avoid creating or reinforcing unfair bias. The company, Mr Pichai said, would also evaluate its work in AI by examining how closely its technology could be "adaptable to a harmful use."

AI is a critical piece of Google's namesake Web tools, including image search and recognition and automatic language translation. But it is also key to the company's future ambitions, many of which involve ethical minefields of their own, including its self-driving Waymo division and Google Duplex, a system that can make dinner reservations by mimicking a human's voice over the phone.

But Google's new limits appear to have done little to slow the Pentagon's technology researchers and engineers, who say other contractors will still compete to help develop technologies for the military and national defence. Peter Highnam, the deputy director of the Defence Advanced Research Projects Agency - the Pentagon agency that did not handle Project Maven but is credited with helping invent the Internet - said there are "hundreds if not thousands of schools and companies that bid aggressively" on DARPA's research programmes in technologies such as AI.

"Our goal, our objective, is to create and prevent technological surprise. So we're looking at what's possible," John Everett, a deputy director of DARPA's Information Innovation Office, said in an interview on Wednesday. "Any organisation is free to participate in this ongoing exploration or not."

The Washington Post
