More robot responsibility means more ethical concerns
Eric Olson | March 20, 2024
The capabilities of robotics and autonomous systems are advancing. Developments in artificial intelligence (AI) and machine learning are expanding robot competency, extending the breadth and depth of automation to additional sectors of the economy, from manufacturing and agriculture to healthcare and transportation.
As autonomous systems permeate work and society, they are increasingly placed in positions with decision-making duties that were previously performed by humans. These can include life-and-death scenarios: military drones with automated authority over when to pull the trigger; self-driving cars that must choose between saving their occupants or pedestrians who dash into the road; and robot surgeons autonomously choosing incision locations in precision neurosurgery.
As robots increasingly assume responsibility in situations previously governed by human judgment, it is imperative to consider the associated ethical implications. These considerations should include scrutiny of ethical issues from multiple perspectives, including those of robot developers (how to develop ethically aligned robots), operators (how to control robots ethically) and the machines themselves (how to ensure robots act with moral correctness).
Ethical concerns
Fortunately, experts are engaged in numerous ongoing discussions and research around the world probing the ethical, legal and societal issues associated with robotics and autonomous systems.
The U.K. Robotics and Autonomous Systems Network (UK-RAS) compiled a list of some of the major works in this area in a recently released white paper. The report reviews several ethical concerns pertinent to the development and operation of robotics and autonomous systems that deserve careful contemplation. These include issues of opacity, oversight, deception, bias, employment, safety and privacy.
Opacity refers to the need for transparency in the decision-making processes of autonomous systems so that decisions can be investigated for fairness. This is related to the issue of oversight, which concerns operator supervision of robots. To effectively (and ethically) manage robot behavior, operators must be able to comprehend the reasoning behind robot decisions.
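One way to make robot decisions reviewable is to record each decision alongside the inputs and a human-readable rationale. The sketch below is a minimal, hypothetical illustration of such an audit trail (the `AuditedController` class, its 1.5 m safety margin and its method names are invented for this example, not drawn from any particular system):

```python
from dataclasses import dataclass, field
import time

@dataclass
class DecisionRecord:
    """One audit-trail entry: what the robot decided, on what inputs, and why."""
    action: str
    inputs: dict
    rationale: str
    timestamp: float = field(default_factory=time.time)

class AuditedController:
    """Toy controller that logs a rationale for every decision it makes."""

    def __init__(self):
        self.log = []  # list of DecisionRecord, inspectable by an operator

    def decide(self, obstacle_distance_m: float) -> str:
        # Hypothetical rule: stop whenever an obstacle is within 1.5 m.
        if obstacle_distance_m < 1.5:
            action = "stop"
            why = f"obstacle at {obstacle_distance_m} m is inside 1.5 m safety margin"
        else:
            action = "proceed"
            why = f"obstacle at {obstacle_distance_m} m is outside 1.5 m safety margin"
        self.log.append(
            DecisionRecord(action, {"obstacle_distance_m": obstacle_distance_m}, why)
        )
        return action

ctrl = AuditedController()
ctrl.decide(1.0)   # "stop"
ctrl.decide(3.0)   # "proceed"
for record in ctrl.log:
    print(record.action, "-", record.rationale)
```

After the fact, an operator (or investigator) can read the log to see not just what the robot did but the reasoning it applied, which is the kind of transparency the opacity and oversight concerns call for.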
Deception is another issue related to opacity. The nature of robots and autonomous systems should be open and honest so as not to mislead or exploit vulnerable users. For example, there is a risk that a person might become emotionally attached to a robot without knowing that it does not actually have feelings but is only programmed to behave as if it does.
Bias can arise out of flawed algorithms or incomplete machine learning sets. For instance, a self-driving car trained with a database containing only images of people with a certain skin color might fail to recognize people with other skin colors, leading to dangerous situations.
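The mechanism behind this kind of bias can be shown with a deliberately tiny toy model. In the hypothetical sketch below, a one-feature "pedestrian detector" learns a threshold from training data that covers only one group; it then performs perfectly on that group and fails completely on an under-represented group whose feature values were never seen in training. The feature values and groups are invented purely for illustration:

```python
# Training data: (feature value, label), where label 1 = pedestrian, 0 = background.
# All pedestrian examples come from a single group (feature values near 0.8).
train = [(0.82, 1), (0.78, 1), (0.85, 1), (0.20, 0), (0.15, 0), (0.25, 0)]

def fit_threshold(samples):
    """Learn a decision threshold: the midpoint between the two class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, x):
    """Classify as pedestrian (1) if the feature exceeds the learned threshold."""
    return 1 if x > threshold else 0

threshold = fit_threshold(train)  # roughly 0.51

# Test pedestrians from the group represented in training...
group_a = [0.80, 0.79, 0.83]
# ...and from an under-represented group with lower feature values.
group_b = [0.42, 0.38, 0.45]

acc_a = sum(predict(threshold, x) for x in group_a) / len(group_a)
acc_b = sum(predict(threshold, x) for x in group_b) / len(group_b)
print(acc_a, acc_b)  # group A detected perfectly, group B missed entirely
```

The model is not malicious; it simply generalizes from what it was shown. Real perception systems are far more complex, but the failure mode scales: gaps in the training data become gaps in safety.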
Employment is another concern associated with autonomous systems. In the manufacturing sector, robots have been gradually replacing manual labor for decades. As AI continues to improve, robots will be able to perform more complex tasks that are currently the exclusive domain of humans. This may put more jobs at risk while hopefully opening new opportunities.
Safety is also a two-sided issue for autonomous systems. On one hand, robots can eliminate human error, making systems safer. On the other, robots can make mistakes of their own due to flawed algorithms, limited training sets or inadequate programming that leaves them unable to handle novel situations. In such cases, safety suffers.
Many people consider privacy a right. Robotic and autonomous systems with access to private data – such as that provided by location-tracking features in autonomous vehicles – should behave according to ethical guidelines to protect that information.
The ethical concerns noted above are not exhaustive, and they apply only to basic robotic systems enabled by the limited AI technology that exists today.
More advanced autonomous systems enabled by true artificial general intelligence (AGI) will bring with them a host of new ethical concerns, not the least of which involve moral questions about the nature of machine consciousness.
For better human-made robots, maybe we first need better humans!