ABSTRACT
This article examines the challenges that have emerged with the advancement of artificial intelligence technologies, as well as the question of who bears responsibility for the conduct of these entities and on what legal basis.
I. INTRODUCTION
With the evolving needs of society and the advancement of technology, the user base of artificial intelligence and autonomous applications has grown steadily. Consequently, damages arising from these technologies have become an inevitable reality. Since damages may be material or immaterial, they can take many forms. At this point, legal principles come into play for the compensation and indemnification of the damages incurred. In an age when artificial intelligence applications are easily accessible and transactions can be carried out through the smartphones held by virtually everyone, it has become imperative to enact regulations in Turkish law that address these matters and delineate the boundaries of the associated responsibilities.
II. DEFINITION OF ARTIFICIAL INTELLIGENCE
“The concept of artificial intelligence, first introduced by American computer scientist John McCarthy, can be generally defined as systems capable of performing functions that typically require human intelligence, such as perception, learning, development, creativity, communication, decision-making, and result inference”1. The fundamental objective is to enable machines to mimic human actions and exhibit behaviors specific to human intelligence.
“It is thought that attempting to define artificial intelligence would restrict artificial intelligences even as technology continues to advance”2. There is a view that a balance between evolving technologies and the legal domain could be maintained by keeping the definition of artificial intelligence broad. However, it is acknowledged that the expansive nature of such definitions could significantly increase the responsibilities of artificial intelligence users, potentially hindering the development of artificial intelligence technologies3.
III. LEGAL STATUS OF ARTIFICIAL INTELLIGENCE
There is no consensus in legal doctrine regarding the legal status of artificial intelligence, nor is there any explicit provision in Turkish law regulating this status. Nevertheless, the goods perspective, the slavery perspective, and the legal entity perspective are among the main views proposed in legal doctrine on the matter. In addition, the Legal Affairs Committee’s Robotics Recommendation Report4, published by the European Parliament in 2017, addresses the attribution of electronic personality to artificial intelligence.
A. Goods Perspective
The first perspective to emerge regarding artificial intelligence technologies is the goods perspective, under which these technologies cannot be subjects of rights but only objects of them. Goods, after all, are generally owned by natural or legal persons. For this reason, the view that artificial intelligence technologies are goods is criticized in the doctrine. One important objection is the problem of who will be held liable for damages arising from the behavior of artificial intelligence technologies if they are treated as goods5. “According to the goods perspective, artificial intelligence is considered an object created to meet human needs. Since it lacks self-awareness, it cannot be legally recognized as a person”6. It is therefore debatable whether legal responsibility for artificial intelligence technologies that exhibit potentially criminal behavior, shaped by user-fed data, lies with the user or the possessor.
For these reasons, it can clearly be stated that there is no consensus in legal doctrine on classifying artificial intelligence technologies as goods. In our view, defining artificial intelligence technologies as goods would not be a pragmatic solution. Goods are legally defined as “substances with economic value and material existence over which people can exercise dominion”. When this definition is compared with the definitions of artificial intelligence, it is fair to say that the existing definition of goods in Turkish law is quite inadequate and was drafted without these technologies in mind. Disputes are therefore highly likely to produce ambiguous situations and problematic outcomes.
B. Slavery Perspective
“The increasing resemblance of artificial intelligence to humans has led to the reevaluation of the long-abandoned status of slavery in world history. Due to its ability to serve humans, facilitate various tasks, meet their needs, and possess intelligence, it has been suggested that artificial intelligence could be granted the legal status that slaves were subjected to in Roman law”7.
Drawing on the philosopher John Locke’s labor theory, it has been argued that since artificial intelligence technologies are created by natural persons, the resulting product is the fruit of a natural person’s labor, and therefore only the producers hold ownership rights over what they create. “Therefore, if artificial intelligence were to be considered a person, it is debated that it could only be a slave to its creator”8. The essence of the slavery perspective is to grant slavery status to artificial intelligence technologies, thereby avoiding infringement of personality while supporting the object perspective. However, because the development of artificial intelligence technologies is unpredictable, it has been suggested that slavery status might be rejected in the future, and the slavery perspective is therefore viewed with skepticism. A further reason for this skepticism is the fear that the perspective offers no solution to the legal issues involved, and the concern that the centuries-long struggle against slavery might be revived.
C. Legal Entity Perspective
The question of granting legal entity status to artificial intelligence technologies has been raised because attributing natural personality to them is impossible. “While it is argued that recognizing legal entity status for artificial intelligence will protect third parties from adversities, it is also emphasized that artificial intelligence, by having independent assets and being registered in a special registry, will be subject to control”9. Although the analogy with the mechanism of companies and their boards of directors supports this perspective, the idea of granting legal entity status to artificial intelligence is heavily criticized, considering how long and difficult the process was even for companies, foundations, and associations to acquire legal personality.
Currently, Turkish legislation contains no regulation granting legal entity status to artificial intelligence. Under Article 47 of the Turkish Civil Code, legal entities are defined as “communities of persons organized to have an existence of their own, and independent communities of property dedicated to a specific purpose, which acquire legal personality in accordance with the special provisions applicable to them”10. On examining the article, it is apparent that artificial intelligence technologies do not fit this definition. Although granting legal entity status to artificial intelligence technologies may seem the most ideal solution on the surface, it should not be overlooked that legal entities managed by natural persons have human will behind them.
D. Electronic Personality Perspective
The idea of granting artificial intelligence electronic personality, raised in the Robotics Recommendation Report of the Legal Affairs Committee published by the European Parliament in 2017, has brought a new perspective to the existing views on the subject. According to the report, since artificial intelligences act with a consciousness they develop independently of their creator as data flows in, this is a perspective worth considering as the most intelligent and advanced artificial intelligences emerge.
Through this new personality granted to artificial intelligence technologies, the report proposes recording them in a register and establishing specific insurance funds for these entities with respect to financial liability11. Compensation for damages caused by the actions of an artificial intelligence technology would then be less problematic, as establishing the causal link between the actual harm and the technology’s actions would suffice. The report also notes that, for the best artificial intelligence technologies to emerge, regulations should be incentivizing rather than restrictive. Thus, by adopting legal responsibility and attributing electronic personality status to artificial intelligence technologies, the way will be opened for technological development.
Considering that new regulations in the field of artificial intelligence will probably be added to the legislation of many countries, including Turkey, placing artificial intelligence products, which cannot be precisely classified under the existing personality theories, into a new personality class, and adapting that class to current conditions and requirements through forthcoming legislation, could also resolve many issues in liability law.
IV. PRODUCER’S LIABILITY IN ARTIFICIAL INTELLIGENCE
As technology advances, the question of who is responsible for the potential harms caused by these entities, which lack a consciousness of fault, has become a significant issue. “If legal violations caused by artificial intelligence are taken to court, according to Article 36 of the Constitution, which states, ‘Everyone has the right to litigate, defend and be fairly judged before the judicial authorities by legitimate means and methods. No court can refrain from hearing the case within its duty and authority,’ the lawsuit must be concluded”12.
In such lawsuits, the party usually held responsible for the damage is the producer, although the producer may not always be the one who caused the harm. In some cases, the liability or contributory negligence of users, programmers, importers, or distributors may also be at issue.
A. Strict Liability of the Producer
The primary factor leading to the imposition of strict liability is that, instead of intent or actual negligence, a violation of the duty of care or a breach of supervision is sufficient. “Strict liability emerges in three forms: ‘fair liability’, based on the violation of the objective duty of care; ‘ordinary cause liability’, arising from the adequate causal connection between the violation of the duty of care and the damage; and ‘danger liability’, born from the adequate causation between dangerous operations and harm”13. In Turkish law, since there is no specific legal regulation for damages arising from the actions of artificial intelligence, it will be possible and appropriate to rely on existing legal principles when legal issues arise. Those existing provisions will be applied only when a specific person or persons involved in the creation, use, distribution, etc. can be identified as causing the damage resulting from the actions of artificial intelligence.
While the first person that comes to mind for compensating damages arising from the actions of artificial intelligence is often the producer, this is not always the case. One view in Turkish legal doctrine suggests that applying the existing strict liability provisions to artificial intelligence technologies would be sufficient for tort law purposes.
B. Producer’s Non-Contractual Strict Liability
“Alternatives based on the strict liability principle are assessed in the doctrine under ‘non-contractual liability’ in the context of damages caused to third parties by the autonomous actions of artificial intelligence”14.
“In the law of liability, the main principle is that everyone is responsible for their own actions. The employer’s responsibility arises not from the employee’s actions but from the failure to fulfill the passive duty of care”15. In other words, a causal link is sought between the harm caused by artificial intelligence and the violation of the duty of care. Moreover, under Article 66 (Employer’s Liability) of the Turkish Code of Obligations16, it is evident that the article applies only to natural persons. In this context, it will not be possible to invoke the employer’s liability under Article 66 of the TCO for damages caused by artificial intelligence technologies, which currently lack legal personality.
However, it should be noted that some scholars have suggested that artificial intelligences can be evaluated within the scope of the animal keeper’s liability rather than the producer’s, given that they can act autonomously and in accordance with the data fed to them by the user, even though they lack a consciousness of fault. The problem with this theory is that if the producer can prove grounds for exoneration, the victim cannot turn to the liability of the artificial intelligence technology itself. Establishing the producer’s responsibility becomes quite challenging when the artificial intelligence system has learned on its own, become relatively autonomous, and consequently made a harmful decision. “For example, Microsoft’s artificial intelligence named Tay, after being created by Microsoft, started chatting randomly with people worldwide through its Twitter account and learned to be racist and sexist through these conversations. Microsoft had to shut down Tay after losing control of it”17.
For these reasons, and especially considering the unpredictable nature of the actions of artificial intelligence technologies, which develop based on user-fed data and have an autonomous structure, the producer’s strict liability should be limited.
V. RESPONSIBILITY OF THE MANUFACTURER UNDER THE LAW ON PRODUCT SAFETY AND TECHNICAL REGULATIONS
As previously mentioned, since damages caused by artificial intelligence technologies have no specific legal status, there is also no corresponding liability regime. In this scenario, although the principles of the producer’s strict liability are referred to, the Product Safety and Technical Regulations Law18 is one of the legal bases on which a person harmed by artificial intelligence can rely. While the law holds the producer responsible for damages, it also contains provisions on situations in which the producer’s liability is eliminated.
Liability for compensation arising from a defective product is regulated in Article 6 of the PSTRL. The occurrence of harm does not automatically mean compensation will be granted. According to Article 6/2, “To hold the producer or importer liable, the injured party must prove the damage suffered and the causal link between the defect and the damage”19. In other words, the harm must result from the defect. Even where the victim alleges a defect and proves the harm and the causal link, the producer may still be exempt from liability in certain situations: where the product was not placed on the market by the producer’s will, where the product was not defective when placed on the market, where the defect arose later due to the actions of a third party, or where the defect results from the product’s compliance with technical regulations or other mandatory technical rules.
However, even where the producer cannot escape liability, certain situations may lead to a reduction in compensation. The first is the victim’s own fault: the victim’s consent to the damage, or the contribution of the victim’s own actions alongside those of the producer or the person causing the harm, is a ground for reducing compensation. Another ground rests on comparing the victim’s conduct to that of a reasonable person20. “The victim should use the product reasonably”21. “For example, if an AI-powered oven is explicitly stated to only cook food, reheating a mask for reuse, despite being labeled as a COVID-19 protective mask, would not be considered reasonable use, and the producer would not be liable”22. A further situation is where the damage arises from the actions of a person for whom the victim is responsible.
With the enactment of the Product Safety and Technical Regulations Law, a specific regulation on the producer’s liability has been introduced, replacing the rules previously applied within the consumer-protection framework23. It should be noted, however, that this law is still insufficient to address the damages caused by artificial intelligence technologies.
VI. CONCLUSION
The ability to make decisions through thought, once unique to humans, has been adapted to artificial intelligence entities with the advancement of technology, enabling them to act autonomously. It has become necessary to determine who is responsible for the potential harms arising from the actions of artificial intelligence: the entity itself, its producer, its user, or a third party. Various solutions have been proposed, giving rise to different perspectives in legal doctrine.
Ultimately, establishing a legal status for artificial intelligence technologies and settling on a personality view will pave the way for comprehensive legal regulation. Regulations developed in line with a consensus on these fundamental questions will provide a clear answer to the question of responsibility for harms caused by artificial intelligence technologies.
BIBLIOGRAPHY
MERVE SALI EROĞLU, Sorumluluk Hukukunda Yapay Zeka, Yüksek Lisans Tezi, 2022.
İLYAS SAĞLAM/ EMRE GİRGİN, “Yapay Zeka ve Sözleşme Dışı Kusursuz Sorumluluk”, Antalya Bilim Üniversitesi Hukuk Fakültesi Dergisi, C. 10, S. 19, 2022.
ERMAN BENLİ/ GAYENUR ŞENEL, “Yapay Zekâ ve Haksız Fiil Hukuku”, Ankara Sosyal Bilimler Üniversitesi Hukuk Fakültesi Dergisi, C. 2, S. 2, 2020.
ÇAĞLAR ERSOY, Robotlar, Yapay Zeka ve Hukuk, 2. Baskı, İstanbul 2017.
ALP ÖZTEKİN, Teoride ve Uygulamada Türk İnternet Hukuku, 2. Baskı, Ankara 2023.
ECEM AKSOY, Yapay Zekânın Sorumluluk Hukukundaki Konumu ve Büyük Veri İle İlişkisi, 1. Baskı, Ankara 2022.
REMZİ DEMİR, 7223 Sayılı Ürün Güvenliği ve Teknik Düzenlemeler Kanunu (ÜGTDK) Açısından Yapay Zeka İmalatçısının Ürün Sorumluluğu, 1. Baskı, Ankara, 2023.
ÇİĞDEM KIRCA, Ürün Sorumluluğu, 1. Baskı, Ankara 2007.
MUSTAFA ZORLUEL, “Yapay Zeka ve Telif Hakkı”, Türkiye Barolar Birliği Dergisi, S. 142, 2019.
SALİH KARADENİZ, “Yapay Zekanın Anonim Şirket Yönetim Kuruluna Üyeliği ve Üyelikten Doğan Sorumluluğun Değerlendirilmesi”, Türkiye Adalet Akademi Dergisi, S. 54, 2023.
SİNAN S. AKKURT, “Yapay Zekanın Otonom Davranışlarından Kaynaklanan Hukuki Sorumluluk”, Uyuşmazlık Mahkemesi Dergisi, Yıl 7, S. 13, 2019.
GÜLER GÖKSU, Adam Çalıştıranın Sorumluluğu, Yüksek Lisans Tezi, 2021.
FOOTNOTES
1 p. 308 cited from Mustafa Zorluel, “Yapay Zeka ve Telif Hakkı”, Türkiye Barolar Birliği Journal, Vol. 142, 2019: Yanisky-Ravid Shlomit, “Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era: The Human-like Authors Are Already Here: A New Model”, Michigan State Law Review.
2 Ecem Aksoy, Yapay Zekânın Sorumluluk Hukukundaki Konumu ve Büyük Veri ile İlişkisi, 1st Edition, Ankara 2022, p. 38-39.
3 p. 262 cited from Salih Karadeniz, “Yapay Zekanın Anonim Şirket Yönetim Kuruluna Üyeliği ve Üyelikten Doğan Sorumluluğun Değerlendirilmesi”, Türkiye Adalet Akademi Journal, Vol. 54, 2023: Aksoy, Büyük Veri, p. 15-16.
4 European Civil Law Rules in Robotics (Online Access: 26.02.2024).
5 p. 51 cited from İlyas Sağlam/ Emre Girgin, “Yapay Zeka ve Sözleşme Dışı Kusursuz Sorumluluk”: Erman Benli/ Gayenur Şenel, “Yapay Zekâ ve Haksız Fiil Hukuku”, Ankara Sosyal Bilimler Üniversitesi Hukuk Fakültesi Journal, Vol. 2, 2020, p. 309.
6 p. 51 cited from Sağlam/ Girgin, “Yapay Zeka ve Sözleşme Dışı Kusursuz Sorumluluk”: Zeynep Dönmez Canseven, Yapay Zekanın Anonim Şirketin Yönetim Kurulunda Yer Alması ve Bu Durumun Hukuki Sorumluluğa Etkisi, p. 32-33.
7 p. 30 cited from Merve Salı Eroğlu, Sorumluluk Hukukunda Yapay Zeka, LLM Thesis, 2022: European Parliament’s Committee on Legal Affairs, European Civil Law Rules in Robotics, Study for the JURI Committee, October 2016, http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf (Online Access 02.01.2024) p. 14, 15.
8 Sağlam/ Girgin, “Yapay Zeka ve Sözleşme Dışı Kusursuz Sorumluluk”, p. 52. Lawrence B. Solum: “Legal Personhood for Artificial Intelligences”, North Carolina Law Review, 70/4, 1992, p. 1277.
9 p. 86 cited from Remzi Demir, 7223 Sayılı Ürün Güvenliği ve Teknik Düzenlemeler Kanunu (ÜGTDK) Açısından Yapay Zeka İmalatçısının Ürün Sorumluluğu, 1st Edition, Ankara 2023: Emre Bayamlıoğlu, “Akıllı Yazılımlar ve Hukuki Statüsü: Yapay Zeka ve Kişilik Üzerine Bir Deneme”, Türkiye Barolar Birliği Journal, Vol. 147, 2008, p. 139; Shubham Singh, “Attribution of Legal Personhood to Artificially Intelligent Beings”, Bharati Law Review, July 2017, p. 198.
10 Turkish Civil Code numbered 4721 art. 47.
11 Çağlar Ersoy, Robotlar, Yapay Zekâ ve Hukuk, 2nd Edition, İstanbul 2017, p. 89.
12 Alp Öztekin, Teoride ve Uygulamada Türk İnternet Hukuku, 2nd Edition, Ankara 2023, p. 447.
13 p. 53 cited from Merve Salı Eroğlu, Sorumluluk Hukukunda Yapay Zeka: Fikret Eren, Borçlar Hukuku Genel Hükümler, 24th Edition, Ankara 2019, p. 690; Ahmet M. Kılıçoğlu, Borçlar Hukuku Genel Hükümler, 25th Edition, Ankara 2021, p. 419.
14 p. 48 cited from Sinan S. Akkurt, “Yapay Zekanın Otonom Davranışlarından Kaynaklanan Hukuki Sorumluluk”, Uyuşmazlık Mahkemesi Journal, Yıl 7, Vol. 13, 2019: Yüksel Bozkurt/ Ebru Armağan, “Patent Uyuşmazlıklarının Çözüm Yolları Milletlerarası Tahkim ve Devlet Yargısı”, Ankara 2009, p. 96.
15 p. 56 cited from Güler Göksu, Adam Çalıştıranın Sorumluluğu, LLM Thesis, 2021: Kemal Oğuzman / Turgut Öz, Borçlar Hukuku Genel Hükümler, 10th Edition, İstanbul 2013, p. 148.
16 Turkish Code of Obligations numbered 6098 (TCO).
17 Başak Bak, “Medeni Hukuk Açısından Yapay Zekanın Hukuki Statüsü ve Yapay Zeka Kullanımından Doğan Hukuki Sorumluluk”, Türkiye Adalet Akademi Journal, Yıl 9, Vol. 35, p. 216-217.
18 Product Safety and Technical Regulations Law numbered 7223 (PSTRL).
19 PSTRL 6/2.
20 Demir, ÜGTDK Açısından Yapay Zeka İmalatçısının Ürün Sorumluluğu, p. 195-196.
21 Çiğdem Kırca, Ürün Sorumluluğu, 1st Edition, Ankara, 2007.
22 Demir, ÜGTDK Açısından Yapay Zeka İmalatçısının Ürün Sorumluluğu, p. 196.
23 Demir, ÜGTDK Açısından Yapay Zeka İmalatçısının Ürün Sorumluluğu, p. 210.