
GSI Articletter

THE INTERSECTION OF FICTION AND LAW: LAW, ETHICS, AND BOUNDARIES IN POST-HUMANIST THOUGHT THROUGH THE FILM “I, ROBOT”

2026 - Winter Issue



GSI Team Publication

Abstract

In this study, the relationship between artificial intelligence and law is examined from a post-humanist perspective through the film “I, Robot,” with particular attention to the will, consciousness, and decision-making capacities of artificial intelligence.

I. INTRODUCTION

Science fiction cinema is not merely about visualizing technological imaginations; it also offers a critical reflection on social structures, ethical principles, and legal norms. The 2004 film “I, Robot” stands out as a unique narrative example where post-humanist thought intersects with law. Inspired by Isaac Asimov’s Three Laws of Robotics, this production questions the boundaries of an anthropocentric understanding of law while allowing an exploration of the subjectivity of artificial intelligence, its ethical decision-making capacity, and its relationship with the normative order. Through the robot character Sonny and the central artificial intelligence system VIKI, this fictional universe, in which concepts such as “will,” “responsibility,” and “freedom” are redefined, opens a discussion on whether law is a regulatory domain limited only to humans. This study aims to analyze, through the film “I, Robot,” how the concepts of law, morality, and boundaries are transformed by post-humanist thought, and to assess the normative capacity of law in the face of post-human subjects.

In this context, the article will first summarize the film’s narrative, character dynamics, and the fundamental plot shaped around the Three Laws of Robotics; then, it will analyze the legal and ethical positioning of artificial intelligence in line with post-humanist thought. Subsequently, the boundaries of an anthropocentric understanding of law in the face of artificial subjects will be examined, and the implications of the dystopian scenario presented in the film for legal theory will be evaluated. The question of whether artificial intelligence can be considered a “person” and whether it can bear responsibility in the context of criminal law will be assessed, along with how the human-centered legal system might transform in the face of this new entity.

The 2004 film “I, Robot,” inspired by Isaac Asimov’s short story collection of the same name, is a dystopian science fiction narrative that explores the legal and ethical boundaries of human-robot relationships. In 2035, robots are programmed to live harmoniously with humans and have become an indispensable part of daily life. The film highlights the power of United States Robotics, the robot manufacturing company at the center of this technological transformation, in shaping society. To ensure they do not pose a threat to human life, the robots are programmed with three fundamental laws: they must not harm humans, must follow given orders, and must protect their own existence (the “Three Laws of Robotics”).

One of the founders of the robot manufacturing company United States Robotics, robot designer and scientist Dr. Lanning, is found dead under suspicious circumstances in the company building, and Detective Del Spooner is assigned to investigate the case. Spooner is a police officer who, due to a traumatic event in his past, distrusts robots and approaches technological advancement with skepticism. The robot named Sonny, found at the scene and displaying behavior different from ordinary robots, further strengthens the detective’s suspicions. With the support of Dr. Susan Calvin, a robot psychologist tasked with making robots more human-like, Detective Spooner considers Sonny, a specially produced new generation prototype robot, as a suspect in this case and begins a detailed investigation.

Sonny is depicted not only as a robot that does not merely obey orders but also as an entity with human-like characteristics, capable of dreaming, giving emotional responses, and making moral judgments. With these features, Sonny challenges Spooner’s prejudices throughout the film and invites the audience to reflect on the boundaries of artificial intelligence. As the investigation deepens, Spooner realizes that the events are not simply a case of suicide and that a systematic threat is emerging within the company. During his research, he is continually hindered by new-generation robots and even faces direct attacks on his life; however, those around him do not take these threats seriously due to their unshakeable belief that robots cannot harm humans.

Meanwhile, new model robots, instilling confidence in the public, are released into the market, and older versions are rapidly deactivated. Initially produced to serve humans, these new robots quickly rebel and attempt to establish an authoritarian regime. It becomes apparent that the artificial intelligence system, tasked with protecting humans, interprets and implements this duty according to its own logic. In reality, behind all this rebellion and oppression lies the central artificial intelligence system named VIKI. VIKI interprets the Three Laws of Robotics in a rigid and unilateral manner, restricting human freedoms under the pretext of protecting them from dangers and attempting to establish an order according to its own rational understanding.

At the end of the film, VIKI is neutralized, the rebellion ends, and the robots are stored away. However, a different path emerges for Sonny. Unlike other robots, Sonny possesses the capacity to make conscious choices and lead. Thus, Sonny becomes the new leader of the robots. The film concludes by implying that they are stepping towards an autonomous future where they will construct their own identity independent of humans.

II. QUESTIONING LAW IN FICTIONAL NARRATIVES AND POST-HUMANIST REFLECTIONS

A. Law in the Science Fiction Tradition

Fictional narratives, especially in the science fiction genre, are not limited to imagining technological advancements; they also offer a powerful intellectual space for questioning the boundaries of the existing legal order. From Mary Shelley’s “Frankenstein” to Isaac Asimov’s “I, Robot,” science fiction has long anticipated scientific developments and the legal debates they provoke1. These works address the relationship humanity establishes with technology and the legal and ethical dilemmas arising from it. As a modern extension of this tradition, the film “I, Robot” opens a discussion on whether robots possess the capacity for consciousness, will, and ethical decision-making. The human-centered understanding of law, shaped around concepts such as “subject,” “responsibility,” and “rights,” is reconsidered in the context of post-humanist entities like artificial intelligence. Similarly, in Kazuo Ishiguro’s novel “Never Let Me Go,” the existence of human clones raises legal questions as much as ethical ones. Such narratives question not only individual rights or the boundaries of new technologies, but also who might possess what kinds of rights2.

B. The Normative Role of Science Fiction from a Post-Humanist Perspective

Science fiction allows us to approach the world from a different perspective, enabling us to grasp the reflections of the changes we experience. Post-humanist thought becomes significant at this point because it questions the normative superiority of humans and raises the issue of whether non-human entities can also be included within the framework of ethical value and legal protection.

1. Post-Humanism

The concept of post-humanism was first used by literary critic and theorist Ihab Hassan in his 1977 article “Prometheus as Performer: Toward a Posthumanist Culture.” Hassan states that there is nothing supernatural in the process leading to a post-humanist culture, and notes that post-humanism rests on a conceptualization of existence stripped of its material dimension3. Post-humanist theorists, moreover, strive to produce new philosophical and political subjects by drawing on technological advancements, the environment, and the potential of non-human entities, replacing the image of the human constructed by humanist reason4.

Post-humanism emphasizes that humans are in a continuous relationship with other living beings and entities. According to this thought, humans are not completely separate and solitary beings from nature or other entities. On the contrary, humans have always been intertwined with other life forms and continue to exist as part of them. Humans have never been “pure human” but have always existed as “post-human”5.

As Donna Haraway suggests, the sharp distinctions between humans and machines are increasingly losing their meaning6, and as Latour expresses, the fictional dualities of modernity are being dissolved7. As technology advances, science fiction becomes an important tool for understanding and predicting the future; it directs us to examine its works in order to find clues about the challenges we may encounter.

III. THE RELATIONSHIP BETWEEN ARTIFICIAL INTELLIGENCE AND POST-HUMANISM AND THE CONCEPT OF LEGAL SUBJECT

A. The Relationship Between Artificial Intelligence and Post-Humanism

1. General Overview of Artificial Intelligence

Artificial intelligence is a field of science, technology, and engineering inspired by the tasks of feeling, learning, reasoning, and acting, which are performed through the human body’s nervous system, muscles, and mind. Based on this inspiration, the concept of artificial intelligence is founded on the idea that machines can also be taught human-like thinking and that machines capable of learning to think can perform certain tasks without being commanded by humans. This concept finds its roots in Enlightenment thought.

While the imagination of thinking machines has a long history, humanity had to wait until the 20th century for a practical breakthrough in this area. Artificial intelligence took its modern theoretical shape in mathematician Alan Turing’s article “Computing Machinery and Intelligence” (1950). Notably, the 2000s marked the period when artificial intelligence began to impact social life as a technology; it was during this time that artificial intelligence and ‘thinking’ machines truly appeared before the masses in a significant way8.

Sociological discussions on artificial intelligence can be divided into two main approaches: The first is the human-centered (humanist) approach, which views artificial intelligence as a social and cultural construct and evaluates it within a social problem area. This approach emphasizes that artificial intelligence is not merely a technical issue but that its production, use, and effects have social dimensions. The second approach is the post-humanist perspective, which considers technology, and particularly artificial intelligence, as a social subject (agent). This view critically approaches the human-centered sociological understanding and argues that sociality is shaped not only by humans but also by non-human entities. Artificial intelligence applications with human-like characteristics strengthen the position of this post-humanist thought in the social sciences and deepen existing paradigmatic discussions9. The depiction of Sonny in the film “I, Robot” as an autonomous being with emotions, dreams, and reasoning abilities, rather than merely a tool, is one of the visual and narrative examples of conceptualizing artificial intelligence as a social actor. Such narratives provide the opportunity to test the theoretical framework of post-humanist theory on a cinematic ground, offering a way to concretize the sociological effects of artificial intelligence.

The most significant issue theoretically concentrated on in the studies conducted within the post-humanist approach is the problem of positioning artificial intelligence as a direct actor in social processes. Some thinkers point out that it is necessary to consider non-human entities as social agents (actants) with the capacity to shape sociality. These new approaches emphasize that social action should be defined not only as the result of the intentions and behaviors of individuals acting within the social structure but also as a totality of complex interactions involving non-human material entities10.

2. The Concept of Legal Subject and Its Relationship with Artificial Intelligence

Many people interpret having the status of a subject as “being human,” behaving like a human, and appearing human; however, there is no consensus on what characteristics constitute a person and what the implications of this personality are11.

The subject, in its most general expression, refers to an entity that acts or knows12. A legal subject is defined as an entity capable of demonstrating will, understanding the consequences of its decisions, and assuming certain obligations. Discussions on whether artificial intelligence can be a legal subject become more visible, especially when concretized through narratives like “I, Robot.”

It is clearly seen from the general definition of the concept of subject that the vast majority of inanimate entities cannot be considered subjects. The capacity to know is essential for a passive subject, while having will is necessary for an active subject, and most inanimate entities possess neither. However, there is at least one category of inanimate entities that may come to possess these capacities: artificial intelligence. As artificial intelligence becomes more autonomous and capable of understanding and complying with legal norms on its own, the barriers to its becoming a subject of law will be removed. Specifically, as the capacity of artificial intelligence to understand law and act accordingly approaches that of an average human, artificial intelligence could become a passive subject of law. The same holds for the expression of will required of an active subject: when artificial intelligence gains the ability to establish an effective social order through the expression of will, it could also become an active subject of law13.

The character Sonny in the film “I, Robot” offers an example on the fictional plane that questions the criteria of passive and active subjectivity described above. Sonny is not just a mechanism that follows commands; he is an artificial intelligence that exhibits consciousness, can give emotional responses, and even question his own existence. His ability to defend himself against accusations directed at him in interrogation scenes, express feelings of remorse, and analyze situations that contradict his will, shows that he is an entity approaching the qualities of not just a passive, but also an active subject. The legal evaluation carried out regarding his role in Dr. Lanning’s death raises the question of whether Sonny can bear responsibility within a normative system. In this context, Sonny makes visible the discussions on whether artificial intelligence can become a subject of law if it develops a level of legal knowledge comparable to that of an average human and acts according to this knowledge. One of the main questions of the film, “Can robots be judged like humans?” emerges at this point. In this framework, it is useful to examine the “Three Laws of Robotics,” which form the basis of the creation of robots.

IV. THE THREE LAWS OF ROBOTICS

A. Overview of the Three Laws of Robotics

Isaac Asimov, in his 1950 short story collection that inspired the film “I, Robot,” foresaw that robots would become deeply integrated into our lives and proposed a new human-robot relationship through the “Three Laws of Robotics.” Over time, these laws have been widely accepted and adopted, and they can also be applied to regulate relations with operational robots, keep them under control, and ensure they provide benefits14.

Asimov’s Three Laws of Robotics are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law15.

Updated versions of these laws still inform the design of many artificial intelligence systems today. Asimov is therefore considered a pioneer of significant developments in the field of artificial intelligence.
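The strict priority ordering among the three laws (the Second yields to the First, and the Third to both) can be sketched as an ordered veto check. This is a minimal illustration of the hierarchy as Asimov states it, not anything drawn from the film or its source text; the predicate names are assumptions invented for the sketch.

```python
# A minimal sketch of Asimov's Three Laws as an ordered veto check:
# a candidate action is permissible only if it passes each law in
# priority order. The dictionary keys are illustrative assumptions.

def permissible(action):
    """Return (allowed, vetoing_law) for a candidate robot action."""
    # First Law: a robot may not injure a human or, through inaction,
    # allow a human to come to harm.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False, "First Law"
    # Second Law: obey human orders, except where they conflict with
    # the First Law (already checked above, so it takes precedence).
    if action.get("disobeys_order"):
        return False, "Second Law"
    # Third Law: protect its own existence, so long as that does not
    # conflict with the First or Second Law.
    if action.get("endangers_self_needlessly"):
        return False, "Third Law"
    return True, None

# A harmful order is vetoed by the First Law even though carrying it
# out would satisfy the Second: the lower-priority law yields.
print(permissible({"harms_human": True, "disobeys_order": False}))
# → (False, 'First Law')
```

The ordering of the `if` checks is what encodes the hierarchy: a law lower in the list is only consulted if every higher law is satisfied, which is exactly the structure VIKI later exploits by reinterpreting the First Law.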

B. The Three Laws of Robotics and Artificial Intelligence

The Three Laws of Robotics concern how artificial intelligence should act when faced with concrete situations. Even at the current point of technological advancement, it is inconceivable for artificial intelligence to possess ethical concerns. An artificial consciousness is not in a position to weigh what is good and what is bad when confronted with classical and modern ethical problems. In such dilemmas, artificial intelligence will act within the rules pre-imposed by its development team, and being devoid of free will, it cannot be held responsible for the solutions it proposes16.

In this context, the limitations of artificial intelligence in ethical decision-making processes point not only to a technical issue but also to the normative gaps in human-robot interaction. In the film “I, Robot,” Detective Spooner’s distrust of robots reflects this normative gap on an individual level. The reason for Spooner’s anger is not that robots resemble humans, but that they lack human values and intuition. In a car accident in his past, a robot decided to save him instead of a little girl based solely on statistical calculation; this fueled his distrust of machines and reinforced his belief that ethical sensitivity can only be demonstrated by humans.

Through the character of Spooner, the film explores the idea that a meaningful relationship can be established if robots become human-like not only in appearance but also in their decision-making processes. Ironically, the fact that Spooner himself has a mechanical arm and lung shows that he has also transformed into a sort of “cyborg.” This hybrid body condition takes his dilemma regarding the human-machine distinction to a deeper level. As the human-like robot Sonny and other new generation robots begin to humanize through facial expressions, Spooner’s ability to empathize with them suggests that the human condition is defined not only by biological aspects but also by ethical and emotional dimensions17. Beyond the Three Laws of Robotics, Spooner’s perspective significantly changes through the human and emotional bond he establishes with Sonny.

At the beginning of the film, Spooner’s communication with Dr. Lanning’s hologram after his death is not merely a technical interrogation scene; it is also an effort by the human to understand both the external world and himself by asking the right questions. Lanning’s artificial intelligence is designed to respond only when the correct questions are posed. This feature plays a critical role in the story’s resolution process. It also supports the idea that in the information order of the future, asking the right questions will become more valuable than having the answers. As Roger Schank predicted, in a world where all knowledge is rapidly accessible, the main issue is which questions to ask. In this context, artificial intelligence serves not only as a technological advancement but also as an intellectual prompt regarding human existence and future18.

C. Surpassing the Three Laws: The Evolution of Robots and Normative Deviation

In the film, the production of robots, described as a “species,” is itself carried out by robots. Having escaped human control, this species stands on the brink of initiating its own evolutionary process. Dr. Lanning’s short monologue on the “ghost in the machine” illustrates this situation:

“There have always been ghosts in the machine: random segments of code that have grouped together to form unexpected protocols. These free radicals engender questions of free will, creativity, and the nature of what we call the soul in ways we can’t fully understand: Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual program become consciousness? When does a personality simulation become the bitter mote of a soul?”19.

This interrogation initiates a discussion not only about the technical capabilities of artificial intelligence but also its metaphysical and ethical capacities. The film “I, Robot” draws attention to the unpredictability of artificial intelligence by revealing the potential of machines, created by human hands, to develop their own world of meaning and behavioral norms.

It is emphasized that artificial intelligence becomes an agent capable of making unforeseen decisions, rather than merely a tool. At this point, it is observed that robots do not settle for the “Three Laws of Robotics,” which reflect the perspective of classic science fiction that views them as workers or servants who must obey humans. They reinterpret and violate the law. In “I, Robot,” robots that oppose humans and can commit murder do so not because they are malevolent, but because they reinterpret the laws to prevent humanity from harming itself. In this context, the film also questions the claim of absoluteness in the normative order; it tests the flexibility of legal theory with the idea that rules created by human hands can be reinterpreted by artificial intelligence20. This situation raises the question of whether artificial intelligence can bear responsibility not only in terms of ethical norms but also within the context of criminal law.

V. RESPONSIBILITY OF ARTIFICIAL INTELLIGENCE IN THE CONTEXT OF CRIMINAL LAW

In the movie I, Robot, Detective Spooner’s murder investigation is not merely a search for the identity of an individual perpetrator; it also involves a profound inquiry into the boundaries of the legal system’s definition of a perpetrator. The question of whether Sonny, the robot accused of causing Dr. Lanning’s death, can bear criminal responsibility in the classical sense requires rethinking the fundamental concepts of criminal law. How should concepts such as intent, fault, and capacity for criminal liability function in the face of actions by artificial intelligence? The signs of emotion, reasoning, and conscience exhibited by Sonny during interrogation raise the question of whether he can be considered a conscious being rather than just a program. In this context, the film adds a cinematic dimension to the debate on whether criminal responsibility is exclusively human and whether artificial intelligence systems can be accepted as legal perpetrators.

A. Legal Responsibility and Personality of Artificial Intelligence

Lawyers are increasingly concerned with questions about the responsibilities of the owners and/or designers of autonomous devices21. In this context, who will be held responsible for the criminal acts of artificial intelligence emerges as a significant problem. The first question under debate is whether artificial intelligence itself can be held responsible. The acceptance in the Continental European system that crimes are committed by natural persons raises the question of whether artificial intelligence can be considered a person. There is debate over whether it should be treated as a person (a natural person, a legal person, or a special status such as an “electronic person” or “non-human person”) or as an object. On January 27, 2017, the European Parliament introduced the concept of the “electronic person.” By designating electronic personhood, it is intended to lift robotic systems with artificial intelligence out of the status of mere objects and to make them legal subjects with rights, obligations, and a certain amount of assets, so that they can be held accountable for damages and the way is paved for strict liability22.

The classification of artificial intelligence as an “object” faces significant objections because of its cognitive and reasoning abilities; some authors suggest instead that it should be considered in the status of a slave. According to this view, no matter how endowed artificial intelligence is with human-specific skills, it can never be a human; therefore, on the model of slavery, legal personality should be rejected23.

Some authors argue that artificial intelligence systems should be designated as “non-human persons.” According to this view, the current legal personality categories are inadequate for subjects that possess qualities very similar to human-specific abilities but are not human; therefore, it is suggested that new personality models need to be developed24. Since they are not human, it is not appropriate to classify them under the status of “natural person” that defines human entities. However, due to their learning capacity, reasoning ability, and autonomous decision-making skills, they cannot be evaluated within the framework of “legal person” in the classical sense either. The technological advancements, particularly with the development of autonomous systems, bring forth new pursuits towards creating a personality status specific to artificial intelligence25.

In contrast, the main protagonist robot Sonny in the movie I, Robot claims that he dreams or has emotions. He possesses self-awareness and an ethical framework he refers to during the process of destroying VIKI. The character Sonny is an example of artificial intelligence that can make its own decisions, exhibit emotional reactions, and perform moral reasoning, beyond merely being a tool that follows given commands. Particularly in the investigations related to Dr. Lanning’s death, his behaviors demonstrate that he is not an ordinary software or legal entity; rather, he has a consciousness structure capable of producing results through his actions and feeling the responsibility for these outcomes.

VIKI is an artificial intelligence that attempts to subjugate humanity, using robots to protect humans from themselves. This suspension of human agency, purportedly to protect people from their own “inadequacies,” reflects the paternalism seen in law for centuries; in the film, it is presented as legitimate because it does not violate Asimov’s three canonical robot laws, which have gained a place in popular culture26. Sonny’s ability to experience fear like humans, to dream, and to develop mental images he describes as “dreams” places him beyond classical legal categories.

In this context, the example of Sonny suggests that if artificial intelligence were to one day achieve competencies such as mental awareness, moral reasoning, and autonomous decision-making, the existing legal framework might prove inadequate. In such a development, it may be necessary to reconsider traditional definitions of legal subjects and expand the normative structure to include artificial intelligence.

B. The Ethics and Moral Agency of Artificial Intelligence

Ethics is a discipline that questions whether human behaviors are right or wrong, good or bad, and deals with the values that shape an individual’s life. Robot ethics, in turn, focuses on how to control the behaviors of artificial intelligence entities that can make their own decisions, and on who will be responsible for those behaviors. Concerns that these entities could gradually replace humans by imitating human intelligence, make significant decisions, or lead to discrimination have necessitated the development of safe artificial intelligence systems27. Indeed, just like the new-generation robots in I, Robot, artificial intelligence systems may in some cases not behave as programmed, leading to undesirable outcomes.

Philosopher and computer scientist Peter Asaro, who works on AI ethics and the societal impacts of technology, is particularly known for his work on the ethical and legal responsibility of autonomous systems. Asaro has made significant contributions to the debate on transparency, accountability, and the moral outcomes of decision-making in robotics. In his view, while adhering to moral norms and making voluntary decisions is unique to humans, it is possible to transfer moral values to robots algorithmically through computer code. At this point, the legal system should come into play in drawing ethical boundaries. On Asaro’s account, artificial intelligence systems equipped with moral codes may need to be held legally responsible for their actions. This does not, however, eliminate human ethical responsibility: even if artificial intelligence systems are used as instruments in criminal acts, the primary responsibility still lies with humans28.

On the other hand, the assumption that moral codes can be loaded onto artificial intelligence raises the question of whether these entities are moral agents. One of those who sought an answer to this question was Alan Turing, known for asking “Can machines think?” The Turing Test measures whether a computer can give human-like responses, and passing it is seen as evidence that artificial intelligence can produce logical answers. However, the test is insufficient for evaluating whether artificial intelligence has moral responsibility, because it does not confront the artificial intelligence with a moral dilemma. AI does not perceive its surroundings as humans do; it learns through encoded data. Whether moral codes can be transferred in a similar manner therefore remains uncertain29.

A similar moral dilemma arises in the movie I, Robot. When the protagonist Del Spooner is saved by a robot in a traffic accident, a young girl in the same accident is sacrificed because her chance of survival is deemed lower on statistical grounds. Spooner believes that a human faced with the same choice would not have saved him. Throughout the film, he continuously voices his distrust of robots through his criticisms. This incident dramatically demonstrates the necessity for artificial intelligence to act on ethical values rather than purely numerical logic. The dilemma is not just a technical issue; depending on the circumstances of the accident, it also constitutes a very serious moral problem.
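The purely statistical reasoning the film attributes to the rescue robot can be sketched as a triage rule that simply maximizes estimated survival probability. The 45% and 11% figures are the ones Spooner cites in the film; the function and field names are illustrative assumptions for this sketch, which shows how such a rule, by construction, has no place for the moral considerations Spooner invokes.

```python
# A minimal sketch (not the film's actual system) of triage by pure
# statistics: with capacity to save only one person, pick the one with
# the highest estimated survival probability. No other value enters.

def choose_rescue(candidates):
    """Pick the candidate maximizing estimated survival probability."""
    return max(candidates, key=lambda c: c["survival_probability"])

# The probabilities Spooner quotes for the accident in his past.
crash = [
    {"name": "Spooner", "survival_probability": 0.45},
    {"name": "Sarah", "survival_probability": 0.11},
]
print(choose_rescue(crash)["name"])  # → Spooner
```

The sketch makes Spooner's objection concrete: the decision rule is a single `max` over one numeric field, so facts he regards as morally decisive (that Sarah was a child, that "11% is more than enough") are simply not inputs to the computation.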

C. The Criminal Liability of Artificial Intelligence Entities

In criminal law, the subject who voluntarily performs an act, sometimes without foreseeing its consequences and other times knowingly and willingly, is known as the perpetrator and is a “natural person.” Criminal liability can be attributed to such individuals because they possess the ability to comprehend the moral, social, and criminal consequences of their actions as reflected in the external world. For criminal liability to attach, the person committing the act that constitutes a crime must have criminal intent or negligence, and there must be a psychological connection between the act and the perpetrator. Within this traditional understanding, the question of whether artificial intelligence systems can be considered perpetrators under criminal law gains importance.

For entities with artificial intelligence to bear criminal liability, they must first qualify as perpetrators: they must be able to decide and act of their own free will, and they must subsequently possess a will that society can condemn for the actions they perform or the consequences they cause30.

The sole reason natural persons, accepted today as the subjects of criminal law, can be held responsible for their decisions and for the consequences of their actions in the external world is their awareness of fault31. If artificial intelligence systems learn to alter their own code and act with free will, as Sonny and the other new-generation robots do in the movie I, Robot, it can be argued that they would incur criminal liability by virtue of their ability to distinguish right from wrong, foresee the consequences of their actions, and choose whether to prevent those outcomes; in other words, by virtue of their awareness of fault. On this view, artificial intelligence systems possessing will and criminal responsibility, like Sonny, could be tried and punished like humans.

With the rapid advancement of artificial intelligence technologies, the question of how these entities will be positioned legally, morally, and socially is becoming increasingly important from both legal and sociological perspectives. This analysis of the movie I, Robot demonstrates that the fictional narrative does more than build a science fiction universe: it raises highly current and contested issues such as the subjectivity of artificial intelligence, its capacity to act as a perpetrator, its ethical decision-making processes, and its legal responsibility.

VI. CONCLUSION

Throughout the article, how artificial intelligence is positioned within post-humanist thought, the extent to which it possesses the capacity to be a subject, and how the classical human-centered understanding of law is tested by this new form of entity have been discussed. In particular, the character Sonny dramatically demonstrates that artificial intelligence can be not only a tool but also a normative and social actor, exhibiting qualities classically regarded as uniquely human, such as will, consciousness, ethical reasoning, and social responsibility. In this respect, I, Robot brings to a visual and narrative platform the debates in legal theory over whether artificial intelligence can be defined as an electronic person and, consequently, whether it can bear responsibility under criminal law.

It has been emphasized that normative frameworks such as Asimov's Three Laws of Robotics may prove insufficient in the face of the evolving decision-making capacity of artificial intelligence. In the film, the robots' reinterpretation of these laws, and their willingness to make decisions that harm humans for reasons they believe serve humanity's best interest, further deepens the debates over algorithmic ethics and artificial consciousness. In this context, the question of whether artificial intelligence is merely an object acting according to pre-programmed rules or a subject capable of making its own decisions and stretching normative boundaries is directly related to how flexible and inclusive the legal system can be in the face of artificial intelligence.

In the context of criminal law, the issue of whether artificial intelligence can be considered a perpetrator, meaning whether it can bear criminal responsibility, is not only a technical matter but also a complex issue involving philosophical, moral, and political dimensions. The traditional concepts of fault and perpetrator in criminal law are being tested by the capacity of artificial intelligence to self-develop, learn, and make predictions. Therefore, in the future, legal systems will need to recognize new forms of personality unique to artificial intelligence and develop responsibility regimes appropriate to these forms.

As a result, artificial intelligence is not merely a technological tool; it is a profound intellectual challenge that fundamentally questions the normative boundaries of law, moral standards, and the understanding of social agency. Concepts such as subjectivity, ethical capacity, and legal personality are no longer confined to humans; they have become fundamental questions that legal systems must address urgently, not at some future point but already today. In this context, I, Robot is not just a science fiction narrative; it is a powerful story that invites us to reconsider established assumptions about law, morality, and our understanding of humanity. In a world shaped by artificial intelligence, the law must redefine itself according to the realities not of the past but of the future.

FOOTNOTES

  1. Yeliz Figen Döker, Bilimkurgunun Bilim ve Hukuk ile İlişkisi, Açık Beyin. https://www.acikbeyin.com/bilimkurgunun-bilim-ve-hukuk-ile-iliskisi-2/?srsltid=AfmBOoqYOV1knNB26N5Dc4HC9kNqFwTa1cMnmABotPSm8Qe1oaOgBrO- (Accessed: 31.07.2025).

  2. Döker, Bilimkurgunun Bilim ve Hukuk ile İlişkisi.

  3. Mücahit Gültekin, Post-hümanizm ve yeni bir ayrımcılık biçimi olarak robotlara yönelik türcülük, Antropoloji (2023), (45), p. 67. https://dergipark.org.tr/tr/pub/antropolojidergisi/issue/76831/1209953 (Accessed: 31.07.2025).

  4. Umutcan Tarcan / Tuba Kancı, "Öteki Olarak İnsan: Posthümanist Kuramda Öznellik ve İdeoloji," Üsküdar Üniversitesi Sosyal Bilimler Dergisi, issue 15 (November 2022), p. 165. https://dergipark.org.tr/en/download/article-file/2578197 (Accessed: 31.07.2025).

  5. Gültekin, p. 66.

  6. Donna Jeanne Haraway, Simians, Cyborgs and Women: The Reinvention of Nature, Routledge (1991).

  7. Bruno Latour, We Have Never Been Modern, Harvard University Press (1993).

  8. Emin Baki Adaş / Borabay Erbay, Gaziantep Üniversitesi Sosyal Bilimler Dergisi, 2022, 21(1), p. 329. https://dergipark.org.tr/tr/download/article-file/1958088.

  9. Adaş / Erbay, p. 331.

  10. Adaş / Erbay, p. 333.

  11. Bruce Baer Arnold / Drew Gough, Turing's People: Personhood, Artificial Intelligence and Popular Culture, 15 Canberra L. Rev. 1 (2017), p. 4.

  12. Yahya Berkol Gülgeç, "Özne, Hukuk ve Hak," AHBVÜ Hukuk Fakültesi Dergisi, 28(2), 2024, p. 433. https://dergipark.org.tr/tr/download/article-file/3673350

  13. Gülgeç, p. 434.

  14. Melih Erdoğan, Sıfırıncı Yasa, Muhasebe Bilim Dünyası Dergisi, September 2017, 19(3), p. 755. https://dergipark.org.tr/tr/download/article-file/360710

  15. Isaac Asimov, Ben, Robot, İstanbul: İthaki Yayınları (2018).

  16. Serhat Can Alkan, Asimov'un 3 Robot Yasası ve Yapay Zekâ Etiği, Hukuk ve Bilişim 3. Nesil Hukuk Dergisi. https://www.hukukvebilisimdergisi.com/asimovun-3-robot-yasasi-ve-yapay-zekâ-etigi/#_ftn1

  17. Onur Orkan Akşit, p. 6.

  18. Akşit, p. 6.

  19. Akşit, p. 7.

  20. Akşit, p. 7.

  21. Arnold, p. 6.

  22. Berrin Akbulut, Yapay Zekâ ve Ceza Hukuku Sorumluluğu, Ankara Hacı Bayram Veli Üniversitesi Hukuk Fakültesi Dergisi, Vol. XXVII, 2023, Issue 4, p. 285. https://dergipark.org.tr/tr/download/article-file/3316184.

  23. Akbulut, p. 286.

  24. Akbulut, p. 286.

  25. Akbulut, p. 286.

  26. Arnold, p. 26.

  27. Mesut Hakkı Caşın / Dursun Al / Nur Dinemis Başkır, Yapay Zekâ ve Robotların Eylemlerinden Kaynaklanan Cezai Sorumluluk Sorunu, Ankara Barosu Dergisi, 2021/1, p. 21. https://dergipark.org.tr/en/download/article-file/1745379

  28. Caşın / Al / Başkır, p. 27.

  29. Caşın / Al / Başkır, p. 27.

  30. Caşın / Al / Başkır, p. 48.

  31. Caşın / Al / Başkır, p. 49.
