Artificial intelligence (AI) is often cited as one of the foremost changes humanity faces in the 21st century, and there has recently been much discussion of the legal responsibility associated with it. Although AI is ultimately software, a tool created by humans, the question of responsibility has resurfaced because deep-learning-based AI involves a 'black box' that humans cannot fully control. In other words, since we cannot know or control why an AI makes a particular decision or behaves in a particular way, the core controversy is whether it is unfair to hold those who create or deploy AI responsible for its conduct. While contemplating desirable solutions to this problem, it occurred to me that clues might be found in the long-running debate over free will in philosophy and psychology.
The free will debate asks whether humans possess anything that can properly be called free will, and it has been contested for over 2,500 years. From a common-sense perspective, it seems natural that humans make choices by their own will and bear ethical and legal responsibility for those choices. However, many philosophers, such as Spinoza, have thought otherwise, and scientists like Einstein and Stephen Hawking have also denied free will. Furthermore, modern psychology and neuroscience have produced experimental evidence against the existence of free will, to the point that its denial has become the mainstream view in academia.
In 1983, Benjamin Libet conducted the so-called 'Libet experiment,' which showed that about 0.5 seconds before a person consciously decides to act, a readiness potential has already built up in the brain, with the decision and action following that signal. He argued that what appears to be freely willed action is in fact prepared by the unconscious, with conscious decision and action arriving belatedly under its guidance, a claim that caused a great stir. As controversy over this experiment intensified, a 2008 experiment by John-Dylan Haynes's group detected predictive brain activity as much as 10 seconds before the conscious decision, lending further support to Libet's denial of free will. If this claim is carried to its conclusion, it leads to the dangerous idea that humans are merely deterministic instruments governed by the unconscious and thus need not bear ethical or legal responsibility. Indeed, some, like the atheist philosopher Sam Harris, advocate this view.
However, I do not agree with these claims. First, the free will debate has proceeded without unified definitions of its key concepts, such as free will, determination, and influence, so that many arguments talk past one another as each debater works from their own framework, ranging from 'cosmic determinism' to 'Buridan's ass.' As for the psychological and neuroscientific claims, it is difficult to accept the view that the essence of the human being as a responsible agent lies only in the prefrontal cortex, which handles judgment and decision-making, while the limbic system, which governs emotions and the unconscious, is not part of the human essence or self. Our emotions and unconscious are indeed shaped by uncontrollable genetic and environmental factors, but they are largely formed by our own choices and habits, and we must bear full personal responsibility for the actions that flow from them. Viktor Frankl observed that there is a small space of choice between stimulus and response, and the renowned neuroscientist Lisa Feldman Barrett has likewise said that since we can choose what to expose ourselves to, it is reasonable to call that free will and to assign responsibility accordingly.
I believe the issue of AI responsibility should be resolved by the same logic. In the case of 'weak AI' without self-awareness, even if there is a black-box component beyond human control (analogous to the limbic system governing the unconscious in the brain), the creator inevitably influences the decision-making mechanism inside that black box in some way. The program should therefore be regarded as one with its creator, as a single brain, making it reasonable for the creator to bear responsibility. If a business operator uses that AI in its business and causes harm, the creator and the operator should bear joint responsibility and settle the matter through mutual indemnification. Given the black-box nature of AI, it is desirable to structure this responsibility as strict liability, on the model of the Automobile Damage Compensation Guarantee Act. If that would hinder AI development, the risk should be dispersed or transferred through insurance or derivative instruments. Finally, if asked what to do in the case of 'strong AI' with self-awareness, I would answer with a question: would humanity still be alive at that point?
Seong Hee-hwal, Professor, Inha University School of Law
© The Asia Business Daily(www.asiae.co.kr). All rights reserved.
![[Opinion] The Debate on Free Will and the Legal Responsibility of Artificial Intelligence](https://cphoto.asiae.co.kr/listimglink/1/2021020115515356195_1612162313.jpg)

