In March 2025, the University of Caen Legal Clinic organized a mock trial confronting students with an unprecedented legal case: a police cyborg accused of murder. The scenario raises a novel question, but one that risks becoming fundamental in a society where AI and robotics play an ever-greater role: should cyborgs be judged as autonomous beings and therefore held criminally liable in the event of accidents, or should their designers answer for their actions in place of their machines?
Legal status: a fundamental distinction
In French law, a robot is not considered a person in the legal sense, whether natural or legal. Although the idea of an "electronic personality" has been raised, it has never been enshrined in positive law. More concretely, a robot, even one equipped with advanced artificial intelligence, as in Detroit: Become Human, does not possess consciousness in the true sense of the term; it functions solely through algorithms and databases that process information according to pre-programmed parameters.
In this fictional trial, the situation is slightly different. Since cyborgs are partially robotic human beings, they retain their status as natural persons, which raises the question of their criminal liability. This fundamental distinction between robot and cyborg constitutes the starting point for any legal reflection on the criminal liability of robotic entities, while highlighting the legal and ethical issues inherent in the laws of robotics.
Consciousness, Free Will, and Criminal Liability
Criminal liability, particularly in a case of murder, implies intentionality and an ability to distinguish right from wrong. For cyborgs, the crucial question is whether the brain implant alters consciousness by impeding free will.
Unlike humans, robots operate by executing algorithms and do not make truly conscious choices. So when the cyborg in the mock trial shot his victim because of screams perceived as a threat, was this a case of robotic bias rather than a deliberate choice? This blurred line between human decision and algorithmic execution raises new ethical questions, particularly when an individual becomes dependent on implants or integrated AI.
A legal mishmash
In the absence of direct liability on the part of the police cyborg, several actors could be implicated: the state that authorizes the deployment of these augmented law enforcement agents, the design company in the event of programming flaws, or the center that "trains" and stores the robots. This chain of responsibility illustrates the complexity of the legal issues surrounding autonomous systems, as well as the importance of establishing a clear legal framework.
Three principles emerge from this fictional exercise, led by Maria Castillo, Lecturer in Public Law at the University of Caen Normandy, and Amandine Cayol, HDR Lecturer in Private Law at the University of Caen Normandy: clearly distinguish the legal status of robots from that of cyborgs, develop tools to assess the impact of technologies on human free will, and define a network of alternative responsibilities to guarantee the protection of citizens.