GOOGLE TACKLES AI PRINCIPLES: IS IT ENOUGH?
June 13, 2018

Google has released its manifesto of principles guiding its efforts in the artificial intelligence realm – though some say the document isn’t as complete as it could be.

AI is the new golden ring for developers, thanks to its potential not just to automate functions at scale but also to make contextual decisions based on what it learns over time. That experiential aspect can do immense good: weeding out cyber-threats before they happen, offering smarter recommendations to consumers and improving algorithms, even tracking wildfire risk and monitoring the habitats of endangered species. On the back end, it can speed along manufacturing processes or evaluate open-source code for potential flaws.

What we don’t want, of course, is a Matrix-y, Skynet-y, self-aware network interested in, say, enslaving humans.

Google is looking to thread this needle with its latest weigh-in on the AI front: a set of principles for guiding AI development. The company uses AI to filter spam from email and to power its digital assistant for Google Home, and it incorporates the technology into cool things such as overlaying augmented-reality information onto photos to point out items of interest. As CEO Sundar Pichai said in a post Thursday, the company “invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code.”

Given this, the company has released seven values to guide AI work, which Pichai said the company is approaching “with humility.”

These are the equivalent of Isaac Asimov’s Three Laws of Robotics, but Google is doing the sci-fi legend four better. Asimov famously postulated that a robot may not injure a human being or, through inaction, allow a human being to come to harm; that a robot must obey orders given it by human beings except where such orders would conflict with the First Law; and that a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The internet giant seems to have an eye toward these, along with risk-reward, in its seven tenets: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles.

Google’s AI Principles: Addressing Unintended Consequences

That AI should be socially beneficial, designed with safety checks and accountable to people speaks, in many ways, to the dark robotic-overlord trope.

Google’s AI technologies will be “subject to appropriate human direction and control,” Pichai said, adding that “We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. … In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.”

He also noted on the first point that development will “take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.”

All of this is important, given that the potential for things to go wrong exists, according to some of the great minds out there. Tesla CEO Elon Musk, for instance, told the National Governors Association last summer that AI poses a “fundamental risk to the existence of human civilization.”

Bill Gates is on record too: “First the machines will do a lot of jobs for us and not be super-intelligent. That should be positive if we manage it well,” he said during a Reddit Ask Me Anything Q&A a few years ago. “A few decades after that though, the intelligence is strong enough to be a concern.”

Stephen Hawking also weighed in before his passing: “The development of full artificial intelligence could spell the end of the human race,” he told the BBC.

Unintended consequences do happen: Last year, the research team at Facebook Artificial Intelligence Research (FAIR) pulled the plug on an AI project after the AI developed its own language. FAIR built two AIs, named Alice and Bob, which were tasked with learning to negotiate between themselves to trade hats, balls and books by observing and imitating human trading and bartering practices. Over the course of the testing, the chatbots decided that English was inefficient and began stripping out what they saw as superfluous language until they had their own babble-talk.

“There was no reward to sticking to English language,” said Dhruv Batra, a visiting research scientist from Georgia Tech at FAIR, speaking to Fast Company. “Agents [the AIs] will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create short-hands.”

Batra added, “It’s important to remember, there aren’t bilingual speakers of AI and human languages.”

So Facebook went back to the drawing board and programmed a new requirement that the AIs stick to English-language conversations.
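To make Batra’s point concrete, here is a minimal, hypothetical sketch in Python of one way a reward could be shaped to keep negotiating agents in readable English. It is not FAIR’s actual code or Facebook’s reported fix; the function names, weights and toy “English-likeness” score are all assumptions for illustration. The idea is simply that if an agent’s reward depends only on the deal it strikes, drifting into codewords costs nothing, whereas adding a term that scores how English-like each utterance is makes human-readable language pay off.

```python
# Illustrative sketch only -- NOT Facebook/FAIR's actual implementation.
# It shows, in toy form, the point Batra describes: if an agent's reward
# depends only on the deal it closes, nothing stops it from drifting into
# private codewords. Adding a term that scores how "English-like" each
# utterance is anchors it to human-readable language.
# All names and weights below are hypothetical.

from typing import List

def task_reward(items_won: List[int], values: List[int]) -> float:
    """Utility of the negotiated deal: total value of the items the agent secured."""
    return float(sum(v for won, v in zip(items_won, values) if won))

def english_likelihood(utterance: str) -> float:
    """Stand-in for a fixed language model scoring how natural the text is.
    A real system would use a pretrained LM; this crude proxy just
    penalizes heavy token repetition like 'the the the the the'."""
    tokens = utterance.split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)  # 1.0 = no repetition, lower = more codeword-like

def shaped_reward(items_won: List[int], values: List[int],
                  dialogue: List[str], lm_weight: float = 2.0) -> float:
    """Deal utility plus a bonus for staying in readable English."""
    lm_score = sum(english_likelihood(u) for u in dialogue) / max(len(dialogue), 1)
    return task_reward(items_won, values) + lm_weight * lm_score

# Without the language term, 'the the the the the' and a readable sentence
# earn the same reward if they close the same deal -- so drift costs nothing.
drifted = shaped_reward([1, 1, 0], [3, 2, 1], ["the the the the the"])
readable = shaped_reward([1, 1, 0], [3, 2, 1], ["i want the books and the hats"])
print(drifted, readable)  # the readable dialogue now scores higher
```

In the toy run above, both dialogues close the same deal, but only the shaped reward distinguishes the repeated-“the” codeword from the readable sentence, which is roughly the gap that Facebook’s requirement to converse in English was meant to close.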

Pichai noted that many technologies have multiple uses, and said that Google will work to limit potentially harmful or abusive applications by evaluating the “primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use.”

AI applications that Google won’t pursue include those where there is a material risk of harm, including weapons or other technologies whose “principal purpose or implementation is to cause or directly facilitate injury to people.”

Privacy Concerns

When it comes to incorporating privacy by design, Pichai noted: “We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.”

In the harmful applications arena, he also pledged that Google wouldn’t participate in developing “technologies that gather or use information for surveillance violating internationally accepted norms,” nor “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

Privacy watchdog the Electronic Frontier Foundation unsurprisingly weighed in on the principles, citing a few concerns.

In a post, EFF acknowledged that “on many fronts, the principles are well thought-out and promising. With some caveats, and recognizing that the proof will be in their application by Google, we recommend that other tech companies consider adopting similar guidelines for their AI work.”

Concerns, however, include a fear that Google hasn’t committed to a third-party, independent review to ensure the AI principles are actually implemented.

“Without that, the public will have to rely on the company’s internal, secret processes to ensure that these guidelines are followed,” EFF said.

Another concern is with the phrase “widely accepted principles of international law and human rights,” which EFF said is too vague, given that human-rights principles remain unsettled in this arena.

“It is not at all settled — at least in terms of international agreements and similar law — how many key international law and human rights principles should be applied to various AI technologies and applications,” the group said. “This lack of clarity is one of the key reasons that we and others have called on companies like Google to think so hard about their role in developing and deploying AI technologies, especially in military contexts.”

And indeed, Google has landed in controversy thanks to its participation in Project Maven, which is a Department of Defense effort to use object recognition and machine learning for military purposes. Google had become a contractor for that project, but the level of its involvement is murky, according to reports.

Thus, EFF also offered specific guidance for Google and others to follow when it comes to surveillance: “We want to hear clearly that those include the Necessary and Proportionate Principles, and not merely the prevailing practice of many countries spying on the citizens of almost every other country. In fact, in the light of this practice, it would be better if Google tried to avoid building AI-assisted surveillance systems altogether.”

Others are feeling cynical about the declaration of principles. Mike Banic, vice president at Vectra, told Threatpost that actions will speak louder than words.

“The coincidence of this blog publishing roughly two weeks after the news about Alexa sending a secretly recorded conversation to someone via email may bring out the cynic in some readers,” he told us. “This outcome was likely unintentional, but the public’s reaction affects their view of the maker’s brand. AI does have powerful implications like reducing cybersecurity workload and increasing human efficacy. In addition, the way vendors implement and deliver AI-based solutions is key to building trust in their brand. My recommendation is judge vendor by their actions rather than their words.”

Threatpost reached out to Google for comment, and will update this story with any statements.