A Rising Call for Responsible Artificial Intelligence
Winn Hardin | February 04, 2016

Elon Musk raised eyebrows in 2014 when he described artificial intelligence as “our biggest existential threat.” Bill Gates voiced similar apprehension about superintelligence, saying he didn’t understand “why some people are not concerned.” Then there was Stephen Hawking’s blunt assessment that success in creating human-level AI would be “the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”
Indeed, all signs point to an artificial intelligence revolution. Automakers like Ford, Toyota, Daimler, Volkswagen, BMW and Musk’s Tesla have entered the race for a fully autonomous vehicle. Google, meanwhile, says its driverless cars will be ready for the public by 2020. Robots that provide care to the sick and the elderly are within reach. And the U.S. military is exploring autonomous “ambulance” drones that lift injured soldiers from the battlefield and transport them to the hospital.
Skeptics such as Rodney Brooks, founder of the collaborative robot maker Rethink Robotics, have questioned the notion of AI as a doomsday conduit. People are “grossly overestimating the real capabilities of machines today and in the next few decades,” Brooks wrote in a 2015 blog post for Edge.org. “The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded.”
Although the rise of machines over their human overlords may not be imminent, or even realistic, “that’s not to say that we do not have to be concerned about the technology,” says Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology. As AI becomes more sophisticated and embedded in society, researchers, tech leaders and governmental agencies are beginning to lay the groundwork for its responsible design and use.
Morality of Machines and Their Makers
With the rapid development of AI come questions about the ethical obligations of both the designer and the technology itself. For instance, what happens when a fully autonomous vehicle swerves to avoid an accident but kills a pedestrian in the process? How would an artificially intelligent weapon ascertain the difference between a combatant and a civilian armed for self-defense? How do you design a semi-autonomous robot to help an elderly woman at home, one that respects her privacy yet does not put her at risk by doing so?
Arkin sees several approaches to the morality quandary of AI. “Some argue that the designers themselves must be ethical and moral to be able to produce ethical autonomous agents,” he says. Arkin’s own research focuses on ensuring that AI systems comply with existing human moral codes.
Still another camp is investigating ways that the machines themselves can develop morality. “That kind of work is still very early, but it is important to do it now before we have problems later,” Arkin says.
Researchers are being bolstered by several new nonprofit groups that are exploring AI’s opportunities and risks. Counting Skype founding engineer Jaan Tallinn and MIT physics professor Max Tegmark among its founders, the Future of Life Institute (FLI) focuses on how to make AI systems “robust and beneficial” while avoiding potential pitfalls.
With the help of a $10 million donation from Elon Musk, FLI awarded grants to 37 research teams in 2015. Funded projects include aligning superintelligence with human interests, developing meaningful human control of lethal autonomous weapons, aligning values and moral meta-reasoning, and teaching AI systems human values through human-like concept learning.
Musk has put his financial muscle to work for another nonprofit called OpenAI, whose goal “is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” according to an open letter on the organization’s website. Musk and other Silicon Valley investors, including fellow PayPal alumni Reid Hoffman and Peter Thiel, have committed $1 billion over the long term.
Leading the organization are Chief Technology Officer Greg Brockman, the former CTO of payment processor Stripe, and Research Director Ilya Sutskever, a deep learning expert from the Google Brain team. The group says that it will share any patents it obtains and will encourage its researchers to publish their work.
The Meaning of Responsible Design
Seeing the accelerating pace of development in service robotics, Noel Sharkey and Aimee van Wynsberghe co-founded the Foundation for Responsible Robotics. “We do not want to rush headlong into a robotics revolution without considerable forethought about how such a potentially disruptive technology might impact our society and our values,” says Sharkey, AI and robotics professor at the University of Sheffield.
Sharkey and van Wynsberghe set up the foundation to bridge the gap between the typical “discursive and philosophical” discussions about robot ethics and concrete action. The organization aims to develop codes of conduct for responsible and accountable research, design and manufacturing practices, and to assist in the formation of national and international policy, laws and regulations.
The study of responsible robotics also calls for a multidisciplinary approach involving not only robotics researchers and designers but also lawyers, social scientists, philosophers, policymakers, lawmakers and the public. Sharkey says collaboration is essential “if we are to strive for responsible and accountable developments and practice in robotics without stifling innovation or trampling on people’s research or commerce.”
Robert Sparrow, a philosophy professor and ethics researcher at Monash University in Melbourne, Australia, says that in the conversation about ethical AI, the public should have the biggest say. When engineers speculate about how people are likely to use the technology, they often get it wrong because their expertise lies outside the social sciences. “Fundamentally, these decisions are about our collective future, and they should be made collectively and democratically,” Sparrow says.
Sharkey says that prioritizing people is critical, because maintaining progress and innovation in AI and robotics research is only possible with society’s trust. “The public needs to be assured that new developments will be created responsibly and with due consideration of their rights and freedoms,” Sharkey says.
Sparrow also calls on the AI research community to make truthful, realistic assessments of the technology and its capacities, since ethical systems require ethical researchers and designers at the helm. That includes researchers being honest about their own financial interests in developing and promoting AI. “The danger here is that we will end up listening to people who have a lot of money at stake when it comes to our believing them,” Sparrow says.
A designer’s ethical responsibilities should drive all decisions in the development of AI and robotics, says Tim Austin, P.E., president of the National Society of Professional Engineers. Researchers and engineers “must be aware of blind spots, to discover and understand what they don't know and to expect the unexpected,” Austin says. He suggests they also ask themselves one essential question: Just because we can do something, should we?
Communicating the Consequences
While it’s difficult to pinpoint specific consequences of a lack of careful ethical consideration, many applications require attention in the near term. Sharkey points to the care of children, the sick and the elderly as areas where service robots must not violate people’s rights and dignity.
One of Sparrow's research projects evaluates claims surrounding the usefulness of robots in the aged-care setting. “One of the worst threats facing us in our old age is loneliness,” he says. “Replacing human care with robots is going to be a very bad thing because there’s less opportunity for human contact.”
Issues surrounding the civilian and military use of drones also continue to mount. “We are now seeing drones being armed with so-called less-than-lethal weapons such as pepper spray and Tasers for police use in public protests,” Sharkey says.
The push to prohibit autonomous robot weapons remains strong, even though the U.S. Marine Corps recently shelved a four-legged robotic “pack mule” designed to lighten soldiers’ loads and the rate of drone crashes continues to climb.
As AI’s potential impact on society unfolds, it is up to the researchers and designers behind these systems to continue the development of and discourse on responsible AI. “We do not need to be in fear, but we need to consider artificial intelligence and robotics on a rational basis and find a way forward,” Georgia Tech’s Arkin says. “That is the bottom line, really.”