Can a robot commit a crime?

Technology is developing at a breathtaking pace. Artificial intelligence (hereinafter "AI") refers to machines that mimic intelligent behavior: a "smart brain" that simulates human behavior and cognition inside a computer and learns knowledge on its own. AI is increasingly taking over human activities, but it also poses dangers to human beings. The question of modernity that AI raises is therefore this: how are these ever more intelligent AI entities to be brought into the law and other systems of social control?

Besides taking our jobs, can a robot also commit a crime? And if so, can it be sentenced?

In the 1950s, the American science fiction writer Isaac Asimov formulated his famous Three Laws of Robotics to guard against the potential threat of AI:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


However, this has clearly not dispelled human fear of AI. To guard against future AI infringing on legally protected interests, many scholars at home and abroad have proposed that artificial intelligence should independently bear criminal responsibility (hereinafter the "independent responsibility theory"). On this view, an AI's algorithms may give it many capacities, some far exceeding those of an ordinary person, but such capacities are not what criminal responsibility requires. A person or a corporation can be held criminally liable when it satisfies both the external element (harmful conduct) and the internal element (free will). If AI can likewise satisfy these elements, and in fact does satisfy them, then there is no reason not to hold AI criminally responsible. The core basis of the independent responsibility theory is that AI meets the internal element of criminal responsibility, namely an independent "capacity for control" and a cultivable "capacity for recognition." Nor, on this view, is the external element an obstacle: its proponents argue that as technology develops, it is neither surprising nor unreasonable to move beyond the traditional doctrine of "harmful conduct," and that any movement can count as the AI's own act so long as the AI can mechanically control its limbs.

On this basis, the independent responsibility theory proposes penalties tailored to AI: fines for AI; deprivation of liberty for AI; and a "death penalty" for AI, consisting of permanent destruction of its body and deletion of its data.

However, the author believes that the idea of artificial intelligence independently bearing criminal responsibility is not theoretically self-consistent, for the following reasons:

First, the conduct of AI is inseparable from free will. In criminal law, conduct is understood as behavior controlled by the will, and AI may appear to act of its own accord: it moves through its limbs, speaks through its systems, and influences its surroundings through its central control system. Yet it cannot be said with certainty that these acts rest on its own free control. The "will" at work here is more plausibly that of the person who programmed the AI or the person who uses it. If so, the AI's acts are attributed to the person behind the machine, not to the AI itself. To hold that an AI's "acts" satisfy the criminal-law concept of conduct, one must find in the AI itself a will fully equivalent to the human will.

Second, AI does not possess free will equivalent to that of human beings for the purpose of evaluation. The concept of free will is itself a construct of blame, devised to achieve certain social purposes. But blame cannot be assigned arbitrarily and without limit. Even an entity determined by itself and by its own past self-determinations cannot be blamed if it cannot be assessed against a system of ethical benchmarks, that is, if it has no judgment of good and evil and cannot communicate ethically because it cannot respond to ethical reproach. Such an entity lacks the capacity for self-reflection that is a necessary condition of responsibility. It is therefore too early to evaluate AI in the same way as the ethical self-control of human beings.

Moreover, even if AI had an ethical control system equivalent to that of humans, it would not necessarily possess free will. The German Federal Court of Justice has given a classic statement of criminal responsibility: the inner basis of responsibility lies in human moral maturity; so long as the capacity for free, ethical self-determination is neither temporarily paralyzed by pathology nor impaired over the long term, a person possesses the capacity for free, responsible, moral self-determination, and can therefore decide in favor of lawful conduct and against wrongdoing, conform his attitude to the norms of the law, and avoid what the law prohibits.

So what is moral maturity? Moral maturity requires social recognition. When criminal responsibility is assigned, the expectations and reproaches that arise within real social relations carry great weight. As AI technology evolves, it may one day give people the impression that AI makes its decisions entirely on its own. But whether the AI in fact acts out of free will is not decisive. Even in the case of human free will, we do not actually know whether we truly act freely; we judge whether a person is free through the evaluations of the third parties around them. For a robot, then, the decisive question is how it is evaluated from the perspective of third parties. Until AI reaches a state in which human society accepts it, communicates with it as an equal, and evaluates it as indistinguishable from a human being, AI cannot be evaluated by humans as free, however much "capacity for control and recognition" it may possess.

Third, it is not feasible to impose penalties on AI. First, as to fines: some scholars have suggested that fines imposed on AI could ultimately be realized by compelling AI manufacturers and users to fulfill legal obligations such as purchasing insurance. But this merely passes the AI's punishment on to the manufacturer and the user, which violates the criminal-law principle that responsibility is personal. Second, as to deprivation of liberty: imprisoning an AI would not have the effect it has on human beings, because humans can understand the meaning of freedom while AI cannot understand the meaning of such a punishment. Finally, as to the "death penalty" for AI: if AI is regarded as a subject on a par with human beings, then imposing a "death penalty" on it is contrary to humanitarianism. We respect the right to life and advocate the abolition of the death penalty; we cannot impose a death penalty on an AI that holds the same status as a human being.

"The future has come, but it is not coming." The legal review of the AI ​​era should be based on the present, and the criminal law countermeasures against the possible threats of AI should be rooted in the basic theory of criminal law. Of course, while the AI ​​era brings us benefits, we need to pay attention to the potential dangers that may arise. To prevent this kind of technical risk, we should take a far-sighted view from an ethical point of view, establish strict AI R&D, production technical ethics rules and legal standards as early as possible, and guarantee the human controllability of AI technology and products, perhaps the more urgently needed era proposition.
