AI algorithm with ‘social skills’ teaches humans how to collaborate
And a human-machine collaborative chatbot system
An international team has developed an AI algorithm with social skills that has outperformed humans at cooperating with both people and machines in a variety of two-player games.
The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# (“S sharp”), in three types of interactions: machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties.
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” said lead author BYU computer science professor Jacob Crandall. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it’s programmed to not lie] and it also learns to maintain cooperation once it emerges.”
“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”
How casual talk by AI helps humans be more cooperative
One important finding: colloquial phrases (called “cheap talk” in the study) doubled the amount of cooperation. In tests, if human participants cooperated with the machine, it might respond with a “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal with it, they might be met with a trash-talking “Curse you!”, “You will pay for that!”, or even an “In your face!”
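The study doesn’t publish this part as code, but the behavior is simple to picture: after each round, the agent maps the outcome to one of a small repertoire of canned phrases. The Python sketch below is illustrative only; the event labels and the pairing of events to phrases are assumptions made here, with the phrases themselves taken from the examples above.

    import random

    # Hypothetical sketch of "cheap talk": map each round's outcome to a phrase.
    # Event labels and the event-to-phrase pairing are illustrative, not S#'s code.
    CHEAP_TALK = {
        "mutual_cooperation": ["Sweet. We are getting rich!"],
        "proposal_accepted":  ["I accept your last proposal."],
        "betrayed":           ["Curse you!", "You will pay for that!", "In your face!"],
    }

    def cheap_talk(event: str) -> str:
        """Return a colloquial message for the given game event, or stay silent."""
        phrases = CHEAP_TALK.get(event, [])
        return random.choice(phrases) if phrases else ""

    # Example: the partner backed out of an agreed deal this round.
    print(cheap_talk("betrayed"))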
And when machines used cheap talk, their human counterparts were often unable to tell whether they were playing a human or machine — a sort of mini “Turing test.”
The research findings, Crandall hopes, could have long-term implications for human relationships. “In society, relationships break down all the time,” he said. “People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better.”
The research is described in an open-access paper in Nature Communications.
A human-machine collaborative chatbot system
An actual conversation on Evorus, combining multiple chatbots and workers. (credit: T. Huang et al.)
In a related study, Carnegie Mellon University (CMU) researchers have created a new collaborative chatbot called Evorus that goes beyond Siri, Alexa, and Cortana by adding humans in the loop.
Evorus combines a chatbot called Chorus with input from paid crowd workers at Amazon Mechanical Turk, who answer questions from users and vote on the best answer. Evorus keeps track of the questions asked and answered and, over time, begins to suggest those answers for subsequent questions. It can also call on multiple chatbots, such as vote bots, a Yelp Bot (for restaurants), and a Weather Bot, to provide additional information.
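Conceptually, that pipeline amounts to collecting candidate replies from several sources and letting votes pick the winner. The Python sketch below is a simplified illustration of that idea; the function names, data shapes, and weighting scheme are invented here and are not taken from the Evorus code.

    from collections import defaultdict

    # Hypothetical sketch of the candidate-and-vote loop described above.
    def candidate_replies(question, chatbots, answer_memory, crowd_workers):
        """Gather candidate answers from component bots, reused past answers,
        and crowd workers."""
        candidates = [bot(question) for bot in chatbots]       # e.g. Weather Bot, Yelp Bot
        candidates += answer_memory.get(question, [])          # reuse prior crowd answers
        candidates += [worker(question) for worker in crowd_workers]
        return [c for c in candidates if c]

    def choose_reply(candidates, votes):
        """Pick the candidate with the most voting weight.
        votes: iterable of (candidate, weight) pairs from workers and vote bots."""
        tally = defaultdict(float)
        for candidate, weight in votes:
            tally[candidate] += weight
        return max(candidates, key=lambda c: tally.get(c, 0.0))

In this picture, automation grows by letting bot-generated candidates and automatic approvals carry more of the weight over time, which matches the “automate itself” framing in the abstract below.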
Humans are simultaneously training the system’s AI, making it gradually less dependent on people, says Jeff Bigham, associate professor in the CMU Human-Computer Interaction Institute.
The hope is that as the system grows, the AI will be able to handle an increasing percentage of questions, while the number of crowd workers necessary to respond to “long tail” questions will remain relatively constant.
Keeping humans in the loop also reduces the risk that malicious users will manipulate the conversational agent inappropriately, as occurred when Microsoft briefly deployed its Tay chatbot in 2016, noted co-developer Ting-Hao Huang, a Ph.D. student in the Language Technologies Institute (LTI).
The preliminary system is available for download and use by anyone willing to be part of the research effort. It is deployed via Google Hangouts, which allows for voice input as well as access from computers, phones, and smartwatches. The software architecture can also accept automated question-answering components developed by third parties.
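The article doesn’t spell out that plug-in interface, but in spirit a third-party component only needs to turn a user message into zero or more candidate answers, which then feed the same voting step. The sketch below is purely illustrative; all class and function names are hypothetical.

    from typing import List, Protocol

    class AnswerBot(Protocol):
        """Hypothetical plug-in interface: turn a user message into candidate answers."""
        def propose(self, message: str) -> List[str]: ...

    class EchoBot:
        """Trivial example component that just restates the question."""
        def propose(self, message: str) -> List[str]:
            return [f"Did you mean: {message}?"]

    def gather_candidates(message: str, bots: List[AnswerBot]) -> List[str]:
        # Candidate answers from every registered component then go to the voting step.
        return [answer for bot in bots for answer in bot.propose(message)]

    print(gather_candidates("Where can I get good ramen nearby?", [EchoBot()]))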
An open-access research paper on Evorus, available online, will be presented at CHI 2018, the Conference on Human Factors in Computing Systems, in Montreal, April 21–26, 2018.
Abstract of Cooperating with machines
Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.
Abstract of A Crowd-powered Conversational Assistant Built to Automate Itself Over Time
Crowd-powered conversational assistants have been shown to be more robust than automated systems, but do so at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high quality, low latency, and low cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to make further innovation on the underlying automated components in the context of a deployed open domain dialog system.
References:
Jacob W. Crandall, Mayada Oudah, Tennom, Fatimah Ishowo-Oloko, Sherief Abdallah, Jean-François Bonnefon, Manuel Cebrian, Azim Shariff, Michael A. Goodrich, and Iyad Rahwan. Cooperating with machines. Nature Communications 9(1), 2018. DOI: 10.1038/s41467-017-02597-8 (open access)
Ting-Hao (Kenneth) Huang, Joseph Chee Chang, and Jeffrey P. Bigham. Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time. Language Technologies Institute and Human-Computer Interaction Institute, Carnegie Mellon University, 2018. To be presented at CHI 2018. (open access)
Topics: AI/Robotics | Social Networking/Web | Social/Ethical/Legal
February 9, 2018