
CAMBRIDGE, MA — For the past few days, Elon Musk and Mark Zuckerberg have publicly bickered over how artificial intelligence (AI) will shape the future of humanity. For Musk – who leads Tesla and SpaceX and co-founded PayPal – AI poses a “fundamental threat to the very existence of human civilization,” and governments must “start regulating the issue now,” before problems arise.
Zuckerberg took to Facebook from his backyard in Palo Alto last Sunday when asked about the position Musk has been pushing for months. “Negative people creating doomsday scenarios – I just don’t get it,” Zuckerberg said over the barbecue. “It’s very negative and, quite frankly, completely irresponsible.”
Upon seeing the comments, Musk responded on Twitter: “I’ve talked to Mark about this. His understanding of the subject is limited.” A put-down, Silicon Valley style. Depending on whom you ask, AI is a promise or a threat, a blessing or a curse.
To find out what works and what doesn’t in the brave new world of AI, Google researchers – including Brazilian scientist Fernanda Viégas – have just launched PAIR (short for “People + AI Research”), an initiative to improve how people interact with advanced AI systems.
Google CEO Sundar Pichai often says that the world is moving from “mobile first” to “AI first,” and the company has already set up a venture capital arm to invest in AI startups.
AI is already being used throughout the Google ecosystem – from YouTube video recommendations to Google Translate to the main work of the house: search. Even so, as WIRED has pointed out, smart assistants such as Apple’s Siri and Google Assistant are all capable of producing results so frustrating they come across as artificial stupidity.
While traditional programs must anticipate every possible interaction with the user and the devices they run on (returning an error whenever input falls outside the programmed range), artificial intelligence programs use an initial set of data to “learn” the relationships among the pieces of information they receive, and a second set of data to test what they have learned, thereby distinguishing failure from success.
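To make that distinction concrete, here is a minimal sketch of the learn-then-test workflow in Python, using scikit-learn’s bundled digits dataset; the dataset and model choice are purely illustrative, not anything specific to Google’s systems.

```python
# Minimal sketch of the "learn, then test" split described above.
# Uses scikit-learn's bundled digits dataset; the model is illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()

# First data set: used to "learn" relationships in the data.
# Second data set: held out to test what was learned.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)             # learn from the first set

accuracy = model.score(X_test, y_test)  # judge success on the second set
print(f"Held-out accuracy: {accuracy:.2%}")
```

The score on the held-out set is what tells the programmer whether the “learning” actually generalized, rather than merely memorizing the first data set.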
According to Fernanda Viégas, who holds a Ph.D. from the MIT Media Lab and is the Google researcher at the helm of PAIR, “a lot of resources have already been devoted to developing AI systems, but not much has been done about how humans interact with those systems. When we talk to a child, we have a ‘mental model’ of what it is doing – what it understands, what problems it can and cannot address.”
Likewise, we need to discover the mental models of intelligent machines and find the best way to communicate them to the people who use those machines. “[AI] systems are very efficient at telling you about things like the weather or the traffic on your way to work, but they still can’t hold a simple, informal conversation with the user.”
Another purpose of PAIR is to help AI programmers understand what went wrong. When it comes to “debugging” an AI, there is little point in looking only at its code: you need to examine the data feeding it.
PAIR has released open-source tools – such as the data-visualization tool Facets – to help programmers open the hood and understand why an AI behaves in unexpected ways.
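The released tools are visual, but the underlying idea can be sketched in a few lines of Python with pandas: profile the data before blaming the code. The toy DataFrame below is hypothetical; its suspicious values stand in for the kinds of problems such a profile surfaces.

```python
# A rough sketch (not PAIR's actual tooling) of the idea: before blaming
# the code, profile the data that feeds the model and look for anomalies.
import pandas as pd

# Hypothetical training data; in practice, load the real dataset.
df = pd.DataFrame({
    "age":    [34, 29, 41, -1, 38, 127],   # -1 and 127 look suspicious
    "income": [52_000, 48_000, None, 61_000, 55_000, 58_000],
})

# Per-column summaries surface the kinds of problems that break models:
print(df.describe())    # odd min/max values hint at bad encodings
print(df.isna().sum())  # missing values the model may mishandle
print(df.nunique())     # near-constant columns carry no signal
```

A model trained on the data above would quietly absorb the impossible ages and the missing income; the profile makes those defects visible before they turn into mysterious behavior.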
These mistakes happen in the best of families. IBM’s “supercomputer” Watson once won the popular American TV show Jeopardy!, in which participants must answer general-knowledge questions.
When asked for an answer in the “U.S. cities” category, the computer, to the surprise (and embarrassment) of IBM executives, answered “Toronto” – the largest city in Canada. The programmers never fully explained the error, but it may have lurked in the data the program ingested before going on the show.
Another case happened at Microsoft last year. The company’s artificial intelligence system learned to read and respond to tweets by simulating human reactions. But shortly after its launch, the system – the chatbot Tay – took on such a biased, violent, and sexist character that it had to be shut down 16 hours later, a public relations disaster for Microsoft.
According to the company, the problem was caused by the poor quality of the first tweets Tay received, which came from internet “trolls” who flooded it with insults and profanity.
PAIR not only helps humans understand machines; it also bridges the gap in the opposite direction, helping machines express themselves to humans. Google has created artificial intelligence tools that explain to the user how they arrived at their conclusions – in the researchers’ language, increasing the “interpretability” of their answers.
For example, software that diagnoses cancer in a patient should be able to explain, step by step, how it reached that conclusion. By teaching artificial intelligence to explain the reasons for its actions, the PAIR researchers are betting that humans will accept its results more readily.
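As a rough illustration of what “interpretability” can mean in practice (and not a description of Google’s actual method), a linear model can report which features pushed a given prediction, since each feature’s contribution is simply its coefficient times its value. The sketch below uses scikit-learn’s bundled breast-cancer dataset purely as a stand-in.

```python
# A minimal interpretability sketch (not Google's method): for a linear
# model, each feature's contribution to a prediction is coefficient * value,
# which can be reported back to the user as a simple explanation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

# Explain one diagnosis: which features pushed the prediction most?
patient = X[0]
contributions = model.coef_[0] * patient
top = np.argsort(np.abs(contributions))[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: contribution {contributions[i]:+.2f}")
```

Listing the few features that most influenced a diagnosis is a crude explanation, but it is the kind of answer a user can interrogate, which is the point PAIR is making.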
In contrast to Elon Musk, Fernanda Viégas sees the future of artificial intelligence as harmony between humans and machines. In her view, AI systems cannot decide which problems are worth solving; they need humans for that.
Likewise, humans will be able to learn from machines as long as the communication channel remains clear and open. “One of the most universal narratives around AI is the idea that from now on everything will be automated.
On the contrary, PAIR believes that the interaction between humans and artificial intelligence is fundamental and must be considered from the moment we design these systems.
We need to keep the user in mind all the way from the lab where these technologies are developed to the places where they will be widely deployed. That is why it is important to make these systems interpretable, to create tools that help developers understand the data they feed into them, and to create clear and consistent interactions for consumers.”