Machines That Could Predict Human Behavior
John Simpson | September 16, 2016
According to researchers at the University of Sheffield, it is now possible for machines to learn how natural or artificial systems work simply by observing them, which could lead to technology advances that allow machines to predict human behavior.
Researchers led by Dr. Roderich Gross, from the university's Department of Automatic Control and Systems Engineering, took inspiration from the work of pioneering computer scientist Alan Turing, who proposed a test that a machine could pass if it behaved indistinguishably from a human. In his test, an interrogator exchanged messages with two players in a different room—one human, the other a machine—and tried to determine which of the two was the human. If the interrogator could not reliably tell the machine from the human, the machine was said to have passed the test and was considered to have human-level intelligence.
“Our study uses the Turing test to reveal how a given system—not necessarily a human—works," says Gross. "In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements. To do so, we put a second swarm—made of learning robots—under surveillance too. The movements of all the robots were recorded and the motion data shown to interrogators.”
Researchers applied "Turing Learning" to automatically infer the aggregation behavior of an observed swarm of e-puck robots. Image credit: University of Sheffield.
Unlike in the original Turing test, however, the Sheffield researchers' interrogators were not human but computer programs that learn by themselves. Their task was to distinguish between robots from either swarm, and they were "rewarded" for correctly categorizing the motion data from the original swarm as genuine and the data from the other swarm as counterfeit. The learning robots that succeeded in fooling an interrogator—making it believe their motion data were genuine—also received a reward.
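To make that reward structure concrete, here is a minimal toy sketch in Python of the kind of two-population, co-evolutionary loop the article describes. Everything specific in it is an illustrative assumption rather than the researchers' actual setup: the "observed" system is a one-dimensional random walk with a hidden step size, the candidate models are guesses of that step size, and the interrogators are simple classifiers that accept a track as genuine if its spread falls within an evolved band. In the Sheffield experiments the models were instead robot controllers and the interrogators judged recorded swarm motion data, but the scoring rule is the same: interrogators earn points for labeling genuine data as genuine and model-generated data as counterfeit, while models earn points for fooling them.

```python
import random

# Toy sketch (an assumption, not the authors' code) of a Turing Learning-style
# co-evolution: candidate models of an observed behavior evolve against
# classifier "interrogators" that try to tell genuine data from counterfeit.

TRUE_STEP = 0.5  # hidden rule of the "observed" system (a 1-D random walk)

def generate_track(step, length=20):
    """Motion data produced by an agent that moves up to `step` per time step."""
    x, track = 0.0, []
    for _ in range(length):
        x += random.uniform(-step, step)
        track.append(x)
    return track

def classify(clf, track):
    """Toy interrogator: accept a track as genuine if its spread lies in a band."""
    low, high = clf
    spread = max(track) - min(track)
    return low <= spread <= high

def evolve(population, fitness, mutate):
    """Keep the fitter half of the population and add mutated copies of it."""
    ranked = [p for _, p in sorted(zip(fitness, population),
                                   key=lambda t: t[0], reverse=True)]
    survivors = ranked[: len(ranked) // 2]
    return survivors + [mutate(s) for s in survivors]

random.seed(0)
models = [random.uniform(0.0, 2.0) for _ in range(20)]                       # candidate step sizes
classifiers = [tuple(sorted((random.uniform(0, 5), random.uniform(0, 5))))   # (low, high) bands
               for _ in range(20)]

for generation in range(100):
    model_fit = [0] * len(models)
    clf_fit = [0] * len(classifiers)
    for ci, clf in enumerate(classifiers):
        # Interrogators are rewarded for recognizing genuine motion data...
        if classify(clf, generate_track(TRUE_STEP)):
            clf_fit[ci] += 1
        for mi, step in enumerate(models):
            if classify(clf, generate_track(step)):
                model_fit[mi] += 1   # ...models are rewarded for fooling them...
            else:
                clf_fit[ci] += 1     # ...and interrogators for spotting counterfeits.
    models = evolve(models, model_fit,
                    lambda s: max(0.0, s + random.gauss(0, 0.1)))
    classifiers = evolve(classifiers, clf_fit,
                         lambda c: tuple(sorted(max(0.0, b + random.gauss(0, 0.1)) for b in c)))

# If the co-evolution works, the surviving candidates should cluster near the
# genuine step size, even though no similarity metric was ever specified.
print("average candidate step size ~", round(sum(models) / len(models), 2),
      "(true value:", TRUE_STEP, ")")
```

The point of the sketch is the division of labor: neither population is told what "similar" means; the notion of similarity emerges from the interrogators as they learn to discriminate, which is the advantage Gross highlights below.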
The advantage of the approach, dubbed "Turing Learning," is that humans no longer need to tell machines what to look for, Gross says.
Gross believes Turing Learning could lead to advances in science and technology. “Scientists could use it to discover the rules governing natural or artificial systems, especially where behavior cannot be easily characterized using similarity metrics,” he says.
Turing Learning could be used to create algorithms that detect abnormalities in behavior, Gross says, which could prove useful for the health monitoring of livestock and for the preventive maintenance of machines, cars and airplanes. It could also be used in security applications, such as for lie detection or online identity verification.
Having tested Turing Learning on robot swarms, the researchers' next step is to reveal the workings of animal collectives, such as schools of fish or colonies of bees. This could lead to a better understanding of the factors that influence the behavior of these animals and ultimately help inform policies for their protection.