The EU says the measure will make it easier to figure out who’s liable when robots screw up or go rogue, but critics say it’s too early to consider robots as people and the law will let manufacturers off the liability hook.
The EU parliament said that the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, with electronic personality possibly applying to cases where robots make independent decisions or otherwise interact with third parties independently.
The parliament said the law would apply to “smart robots”, defined as robots having the capacity to learn through experience and interaction, the ability to acquire autonomy through their sensors, and the capacity to adapt their behavior and actions to the environment, among other criteria. The worry is that technology such as autonomous vehicles and drones could accidentally smash into people, and it is an open question who would be liable when that happens.
Electronic personhood, the EU Parliament believes, is the solution to this problem. It does not give AI human rights, such as the right to vote, the right to life, or the right to own property, and it does not mean robots are seen as self-conscious entities. Instead, it would be something like corporate personhood, the legal move that gives corporations rights typically afforded to people. Electronic personhood would turn each smart robot into a singular legal entity, each bearing specific social responsibilities and obligations.
Liability would reside with the robot itself, and while we could not throw a machine in jail, it would be insured as an independent entity. Funds for a compulsory insurance scheme could be provided by the wealth the robot accumulates over the course of its lifetime. The EU says electronic personhood is not about granting human-equivalent rights to smart robots and AI, but rather about introducing a special legal designation that recognises them as a particular class of machines—but one requiring human backing.
A group of 156 AI and robotics experts felt the need to sign an open letter to the European Commission, the body responsible for the proposal. The signatories of the letter, including legal expert Nathalie Nevejans from the CNRS Ethics Committee, AI and robotics professor Noel Sharkey from the Foundation for Responsible Robotics, and Raja Chatila, the former president of the IEEE Robotics and Automation Society, agree that laws are required to keep humans safe in an era of sophisticated machines. But they take exception to the claim that it will be impossible to establish liability when self-learning, autonomous machines do something terrible.
“From a technical perspective, this statement offers many bias [sic] based on an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and, a robot perception distorted by Science-Fiction and a few recent sensational press announcements”, write the signatories in the letter.
The authors also say it’s inappropriate to base electronic personhood on either preexisting legal or ethical precedents.
“A legal status for a robot can’t derive from the Natural Person model, since the robot would then hold human rights, such as the right to dignity, the right to its integrity, the freedom to remuneration or the right to citizenship, thus directly confronting the human rights. This would be in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms,” the authors claim.
“The legal status for a robot can’t derive from the Legal Entity model [either], since it implies the existence of human persons behind the legal person to represent and direct it. And this is not the case for a robot. Your cat makes autonomous decisions, too, but we do not hold the cat legally responsible for its actions.”