
GSI Articletter

Legal Responsibility Of The Operator Of Artificial Intelligence

2024 - Winter Issue



GSI Team

ABSTRACT

With the development of technology and the worldwide emphasis on work carried out in the field of artificial intelligence, artificial intelligence is becoming increasingly prominent in our daily lives.

I. INTRODUCTION

In the most general terms, artificial intelligence refers to systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect1. Developments in machine learning and the increase in opportunities for data acquisition, storage and processing have made artificial intelligence systems widespread in many fields, from health to finance and from transportation to industry, and have triggered a rapid process of change in the related sectors2. As a result of this change, and with a fuller grasp of the technology, it has become necessary to give artificial intelligence a legal foundation3. For a “thing” to have a foundation in the legal system, it must first be legally qualified. Because artificial intelligence is a concept that is new and intangible in nature, yet produces very strong tangible effects on our lives, difficulties are encountered in its legal characterization: at present, it is not easy to qualify artificial intelligence legally with existing concepts, existing thought or existing legal structures.

Artificial intelligence subsists by making a series of calculations. These calculations may produce erroneous results and cause damage to third parties; third parties may even be harmed without any erroneous calculation. For example, a self-driving car may crash into an elderly pedestrian in order to save its passengers4. As artificial intelligence spreads and develops, a control mechanism based on human will may no longer be possible, because at that point artificial intelligence itself may cause certain damages. For all these reasons, the legal order must determine the applicable legal institution and how the damages caused by artificial intelligence will be remedied. In addition, it is necessary to evaluate who will be responsible for the damages caused by artificial intelligence, to what extent, and which legal provisions will apply in this context.

II. DEFINITION AND NATURE OF ARTIFICIAL INTELLIGENCE

Before addressing responsibility for damage caused by artificial intelligence, it is necessary to explain what intelligence is. In the literature, intelligence has been defined through abilities such as learning, thinking and understanding5. The TDK dictionary defines intelligence as “all of the abilities of a person to think, reason, perceive objective facts, judge and draw conclusions; understanding, acumen, astuteness, foresight”6.

Artificial intelligence refers to systems or machines that mimic human intelligence to perform tasks and can iteratively improve themselves based on the information they collect7. To better understand the concept, it may be useful to compare artificial intelligence with humans in terms of certain characteristics. Humans are affected by the outside world very quickly and have difficulty conveying their knowledge and experience. Although humans are not very prone to documentation by nature, they have the ability to generate new ideas8. The distinctive and important feature of humans, as social beings, is that they are sensitive to their environment and lead a life in harmony with it. Artificial intelligence can, in essence, perform activities such as thinking, understanding, analysing and acting. Unlike humans, however, its most important feature is that it can transfer information very quickly and process it very easily. Artificial intelligence is very prone to documentation and can be reproduced in many products with the same features. In addition, its ability to change itself very quickly stands out.

Artificial intelligence approaches events and problems using only the data it contains. For this reason, it does not yet have a social understanding of situations, but only a technical one. It follows that the basis of artificial intelligence is mathematics, probability and statistics. Large amounts of data that humans cannot calculate, or could calculate only over a very long time, can be processed quickly and easily by artificial intelligence. As a result, artificial intelligence can make specific predictions or detections, or successfully complete a task assigned to it on its own9.

Artificial intelligence can be divided into two main types: strong artificial intelligence and weak artificial intelligence10. Strong artificial intelligence has the ability to overcome various problems and can also develop new approaches to solving a given task. Weak artificial intelligence, on the other hand, can only successfully perform certain reasoning and problem-solving tasks. Strong artificial intelligence could take the form of a machine built like a human, with sensory perception similar to a human's. In this context, strong artificial intelligences have a system of their own that enables them to think and perform complex tasks without human intervention. Weak artificial intelligences, by contrast, take the form of pre-designed machines built with predetermined questions and answers.

The regulation proposal considered to be the first legal regulation on artificial intelligence was adopted by the European Commission on April 21, 2021. According to this proposal, artificial intelligence is defined as software developed with specified technical approaches for content creation and decision-making in line with purposes defined by humans11. The proposal defines artificial intelligence broadly and imposes specific obligations on actors in different parts of the artificial intelligence chain, from providers to manufacturers, importers, distributors and users of artificial intelligence systems, since it contains many obligations regarding the artificial intelligence systems it defines as high risk.

Could artificial intelligence have a personality? What is its legal nature? When these questions are asked, four basic views come to the fore12. The first, and by now the more primitive, view qualifies artificial intelligence as a mere object. The second view argues that artificial intelligence should be considered in the context of the person and personality rights. The third view, called the electronic legal entity view, was put forward so that artificial intelligence robots that can make their own decisions autonomously can be held responsible. Finally, the fourth view holds that artificial intelligence is a work containing intellectual property, that is, an intellectual product.

The first view, which describes artificial intelligence as a mere thing (in other words, the slave view), finds its source in Roman law13. Looking closely at the status of slaves in Roman law, their status can be seen to resemble that of robots, which will no doubt have a place in our lives in the future. In Roman law, the slave could be defined as a kind of sentient property that could think and decide on its own. It can be said that a similar definition will also apply to robots in the future. From this point of view, robots may be regarded as the technological relatives of slaves in terms of their legal status. According to this view, artificial intelligences are considered intelligent goods, like slaves.

According to this view, artificial intelligence should also be allowed to have legal existence and to take certain legal actions, since it has the capacity for reason and will, just like the slaves in Roman law. However, because it lacks legal capacity, it must be accepted that all acquired rights and any resulting legal responsibility fall on the master, that is, the owner of the goods, as in the case of slaves. With respect to responsibility under criminal law14, this view cannot offer a complete solution, but it may allow the owner of the artificial intelligence to be held liable for compensation.

It is therefore clear that artificial intelligence cannot be described as a person under this view. As is known, our law recognizes legal persons as well as real persons. At this point, the second view comes to the fore: the claim that artificial intelligence has a legal personality, or, in other words, that it should have one and be accepted as such.

According to the second view, which argues that artificial intelligence should have a legal personality, it is never possible for artificial intelligence to be accepted as a real person due to its autonomous and cognitive structures15. Nonetheless, by asserting that legal status is not acknowledged only for humans, it has been argued that recognizing non-human beings under a legal status would solve many of the issues that arise in terms of artificial intelligence. As is known, a legal entity is an abstract legal personality, separate from human beings, created for social, economic and many other reasons. This view aims to give artificial intelligence a legal personality, like that of a partnership, and thus make it capable of holding property.

This view is perhaps one of the most popular. Against it, it can be argued that there must be something that constitutes the artificial intelligence legal entity, yet this cannot be said to exist for artificial intelligence. A completely different concept of legal person may therefore have to be created. When the cases of legal liability explained in detail below arise from artificial intelligence, will the legal person theory be able to explain the situation adequately? After all, a legal person is completely isolated from the persons who created it. If the legal entity is completely isolated from the persons forming it, will it be possible to compensate the damage caused by an artificial intelligence constituting a legal entity from the person who created it?

Another view that emerges at this point is the electronic legal entity view, one of the views included in the advisory report dated 27 January 2017 prepared by the European Parliament's Committee on Legal Affairs. The electronic legal entity is a brand-new kind of entity. This view was put forward so that robots with artificial intelligence, which can make their own decisions autonomously, can be held responsible. Electronic personality is not among the legal statuses in existing legal systems, and this view states that a completely new status should be created within them16.

The fourth view focuses on the point that artificial intelligence is also a work. There is no doubt that, in our law, artificial intelligence is a work, because a work can be an intangible asset as well as a tangible one. After all, artificial intelligence is essentially intangible software.

The electronic legal entity view means the creation of a completely new legal status. Although it may not seem applicable, given that no legal action has been taken in this direction, it can be argued that this is the most consistent and applicable view, because it would address all present and potential legal issues. The report of the European Parliament's Committee on Legal Affairs foresees that the electronic legal entity would be registered in a registry, just like legal entities, and that compensation for damages caused by artificial intelligence would be paid from a financial fund to be created for this purpose. With this status, artificial intelligence would be able to incur debt, own intellectual property products, be a patent inventor and even own patents. Just as legal personality entails rights and responsibilities for partnerships, the same logic may apply to artificial intelligence17.

III. IMPUTABILITY AND THE SUBJECT OF THE COMPENSATION

A. Imputability

As mentioned above, autonomous artificial intelligence systems are divided into two sub-categories, weak and strong artificial intelligence, and the decisions and actions such systems take on their own vary accordingly. This variation also shapes the question of liability for artificial intelligence. There is disagreement on the topic of imputability, that is, whether the acts of artificial intelligence can be attributed to someone. Opinions in the literature hold that the operators of less autonomous artificial intelligence systems shall be held responsible for the damages caused by the decisions of those systems18. The same conclusion cannot be reached, however, for damages caused by artificial intelligence systems with a high level of autonomy. An autonomous artificial intelligence system, with its self-developed features, complex and non-transparent structure and opaque characteristics, severs the causal connection and makes it difficult to hold the artificial intelligence operator liable for the damages it causes. Therefore, while the decisions of artificial intelligence systems with a low level of autonomy and their results may be attributed to those who benefit from these systems, an issue of imputability arises for the damages caused by highly autonomous systems.

In terms of artificial intelligence systems that develop and operate with learning techniques known as Deep Learning19 or Machine Learning20, such a system can learn on its own from the data entered into it or opened for its use, and may improve itself with experience; at the same time, it needs various updates from the operator of artificial intelligence. This, too, raises the attribution problem21.

The techniques used by artificial intelligence in its learning process, with their unpredictable results and non-transparent structure, are called the Black Box Effect. In other words, artificial intelligence systems make decisions by taking the initiative on their own and cause unpredictable, undesirable consequences22. The questions that must therefore be asked are whether the existing liability law regimes are sufficient to protect the victims effectively, and which liability regimes will provide compensation for these damages.

B. What is the Subject of the Breach and the Compensation?

The unforeseen outcomes and harm created by artificial intelligence through its ability to make individual and autonomous decisions can manifest as pure economic damage, a violation of fundamental freedoms, or a loss of opportunity and profit. To give a contemporary example from around the world, an artificial intelligence system in Switzerland that ranks patients for dialysis was supposed to do so according to the creatinine level in each individual's body. However, it made only a binary distinction, separating the creatinine levels of white-skinned and black-skinned individuals. Since the creatinine levels of black-skinned individuals are higher, the artificial intelligence placed them at the end of the list and, by its autonomous decision, postponed their access to dialysis. In another example, artificial intelligence systems caused a hiring company to group applicants based on gender, school and nationality, so that applications from certain groups were ignored. As these examples show, there are different views about who will be held responsible when artificial intelligence causes loss of rights and profits, or even damage to individuals.

IV. THE OPERATOR OF ARTIFICIAL INTELLIGENCE AND ITS LEGAL RESPONSIBILITY

A. Who is the Operator of Artificial Intelligence?

As to who the operator of artificial intelligence is, there is no regulation on this issue in our legislation, doctrine or literature. Since artificial intelligence systems lack personality, it does not appear possible to attribute the autonomous behaviours of artificial intelligence to them under the Turkish Civil Code No. 4721 (“TCC”)23. In the proposal of the European Parliament and the Council of the European Union for the Regulation on Harmonized Rules on Artificial Intelligence, the operator of artificial intelligence is defined as a provider, user, authorized representative, importer or distributor24. The producers, operators and owners are cited as users who can be held accountable in the Recommendation Report of the European Parliament25. In addition, it is widely accepted26 that artificial intelligence is a product under European Council Directive 85/374 on the liability of the manufacturer, so that the manufacturer's liability will arise; however, it has been stated that operators shall not be fully responsible, because of the cognitive abilities of artificial intelligence27. In the 2020 proposal, the European Union legislators stated that both the backend operator, who codes, creates and updates the artificial intelligence, and the frontend operator, who distributes and provides it, should be held responsible.

In this context, in order to address the risks of artificial intelligence, the liability of the operator is taken into consideration for the compensation of damages caused by the artificial intelligence system. In the doctrine, this responsibility is analysed under employer's liability, tort liability and organizational responsibility.

B. Legal Responsibility of the Operator of Artificial Intelligence

When artificial intelligence28 is evaluated together with the law, the issue of who will be responsible has been the subject of discussion. With the widespread use of artificial intelligence, current and/or future risks and requirements in the legal system are taken into consideration, and it is concluded that a liability regulation regarding artificial intelligence is necessary. While the relevant legal arrangements are being pursued before the European Parliament29 and the European Commission30, studies are also being carried out in our country31.

Article 66/1 of the Turkish Code of Obligations numbered 6098 (“TCO”), headed Employer's Liability, offers one basis for holding the operator of artificial intelligence responsible for the harm caused by artificial intelligence, and compensation may be sought in this context. Pursuant to Article 66 of the TCO, “the employer is obliged to compensate the damage caused by the employee to others during the execution of the work assigned to him”. The law thus provides that the employer can be held responsible for damages caused by the employee during the execution of a work, so that those damages may be demanded from the employer32. Under this article, there must be damage, the damage must be caused by an act of the employee, and there must be an adequate causal link between the damage and the employee's wrongful act.

Although artificial intelligence has the ability to think, develop and execute, it is the operator of artificial intelligence who activates it. In other words, it may seem appropriate to regard the artificial intelligence as the employee and the operator as the employer in this respect33. However, it must be kept in mind that, unlike the relationship between an employee and an employer, no contractual relationship has been established between the artificial intelligence and its operator, and the artificial intelligence does not work for the operator for a fee. From this point of view, it must be concluded that artificial intelligence cannot be an employee in the context of employer's liability.

It is also among the opinions expressed that the operator of artificial intelligence may be required to compensate the damages caused by the artificial intelligence under the provisions on tort. Pursuant to Article 49 of the TCO, liability arising from tort relates to the breach of an obligation owed not only to a specific person but to everyone. The damaged party may demand compensation under the provisions on tort even where the tortfeasor causes damage without any previous or existing relationship with the damaged party34. Likewise, liability in tort does not require any obligatory relationship between the operator of artificial intelligence and the damaged party. The tortious act regulated in Article 49 of the TCO has four elements: unlawfulness, causal link, fault and damage. Due to the autonomous nature of artificial intelligence, it does not seem reasonable to search for the operator's fault in the unlawful action caused by the artificial intelligence. For this reason, it would not be right to hold the operator of artificial intelligence responsible in tort.

Another view regarding the responsibility of the operator of artificial intelligence relies on the organizational responsibility regulated in Article 66/3 of the TCO. In the literature, Büyüktanır states that the responsibility of the operator may be addressed in the context of this organizational responsibility35. Although organizational responsibility is regulated in the same article as employer's liability, it is a different concept in its terms and results.

Organizational responsibility provides that an organization will be strictly liable for damages occurring within it, simply because it organizes a certain work. In a decision given in 1978, the Court of Cassation held the organization strictly liable: while a pole lifted with the help of a crane was being placed at a construction site, the crane's carrying ring broke and the pole fell on a third person below, causing serious injury. The Court of Cassation did not find it sufficient that the company had informed the employees and inspected the area, and held the company responsible because the injury in question occurred within the company's construction36.

From the perspective of the artificial intelligence operator, the matter should be examined in terms of organizational responsibility: specifically, whether at least one employee of the operator is employed at the location where the artificial intelligence is operated, regardless of how it functions; whether there is an organization involved in the operation of the artificial intelligence; and whether any harm has arisen from this organization. From this perspective, it can be said that organizational responsibility has found a wide field of application and is now the subject of decisions of the regional courts of appeal37.

V. CONCLUSION

Artificial intelligence, which is advancing and spreading quickly, has a favourable influence on our daily lives and on science in a number of fields. Thanks to its profound ability to store, process and utilise data, the work in the systems into which it is integrated is made easier and results are obtained more quickly. However, another characteristic of artificial intelligence, its capacity for independent thought and decision-making, renders it independent of the artificial intelligence operator. This situation has made it necessary to determine the legal nature of artificial intelligence in the doctrine. Some views hold that artificial intelligence is merely property, while others argue that it should have a personality, whether real, legal or electronic. In addition to this still unresolved difference of opinion, who the operator of artificial intelligence is, and to whom the damages and losses caused by artificial intelligence to third parties may be attributed, have also been the subject of discussion in our law. Regarding the responsibility of the artificial intelligence operator, the views that the operator will be held responsible within the framework of employer's liability or organizational responsibility under the TCO, or finally under tort liability under the general provisions of the TCO, are discussed in doctrine and practice, but there is no consensus on this issue yet.

BIBLIOGRAPHY

DAVID CALVERLEY, “Artificial Intelligence As a Legal Person”, http://www.terasemjournals.org/PCJournal/PC0201/calverley_d.html (Date of Access: 27.11.2022).

MAIA ALEXANDRE, “The Legal Status of Artificially Intelligent Robots”, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2985466 (Date of Access: 27.11.2022).

AYYÜCE KIZRAK, “Motivasyon, Yapay Zeka ve Derin Öğrenmenin Hikayesi”, https://ayyucekizrak.medium.com/motivasyon-yapay-zeka-ve-derin-%C3%B6%C4%9Frenme-48d09355388d (Date of Access: 08.12.2022).

BAŞAK BAK, “Medeni Hukuk Açısından Yapay Zekanın Hukuki Statüsü ve Yapay Zeka Kullanımından Doğan Hukuki Sorumluluk”, TAAD, Y. 9, S. 35, 2018.

MERT BAKIRCI, “Yapay Zeka Hakkında Bir Rehber, Nedir, Ne değildir, Ne Olacaktır”, https://evrimagaci.org/yapay-zeka-hakkinda-bir-rehber-nedir-nedegildir-ne-olacaktir-3667 (Date of Access: 27.11.2022).

SEDA KARA KILIÇARSLAN, “Yapay Zekânın Hukuki Statüsü ve Hukuki Kişiliği Üzerine Tartışmalar”, Yıldırım Beyazıt Üniversitesi Hukuk Dergisi, Yıl 4, S. 2, 2019.

ÖZGÜR TAŞDEMİR/ ÜMİT VEFA ÖZBAY/ B. ONUR KİREÇTEPE, “Robotların Hukuki ve Cezai Sorumluluğu Üzerine Bir Deneme”, Ankara Üniversitesi Hukuk Fakültesi Dergisi, C. 69, S. 2, 2020.

CEREN ÖZBEK/ VELİ ÖZER ÖZBEK, “Yapay Zekânın Dâhil Olduğu Suçlar Bakımından Ceza Hukuku Sorumluluğunun Belirlenmesi”, Ceza Hukuku Dergisi, C. 14, S. 41, 2019.

SİNAN SAMİ AKKURT, “Yapay Zekânın Otonom Davranışlarından Kaynaklanan Hukuki Sorumluluk”, Sayıştay Dergisi, S. 13, 2019.

AVRUPA PARLAMENTOSU VE AVRUPA BİRLİĞİ KONSEYİ, “Yapay Zekaya İlişkin Uyumlaştırılmış Kurallara Yönelik Tüzük Teklifi”, Brüksel, 2021.

ONUR SARI, “Yapay Zekanın Sebep Olduğu Zararlardan Doğan Sorumluluk”, TBB Dergisi, S. 147, 2020.

EUROPEAN COMMISSION, “Report From the Commission to the European Parliament, The Council and The European Economic And Social Committee: Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics”, 2020.

SELAHATTİN SULHİ TEKİNAY/ SERMET AKMAN/ HALUK BURCUOĞLU/ ATİLLA ALTOP, Borçlar Hukuku Genel Hükümler, 7. Baskı, İstanbul 1998.

HALUK TANDOĞAN, Türk Mesuliyet Hukuku, 1. Baskı, İstanbul 2010.

HALUK N. NOMER, Borçlar Hukuku Genel Hükümler, 18. Baskı, İstanbul 2021.

TÜRKİYE CUMHURİYETİ CUMHURBAŞKANLIĞI DİJİTAL DÖNÜŞÜM OFİSİ BAŞKANLIĞI, “Ulusal Yapay Zekâ Stratejisi” https://cbddo.gov.tr/SharedFolderServer/Genel/File/TR UlusalYZStratejisi2021-2025.pdf (Date of Access: 27.11.2022).

TANYA TİWARİ/ TANUJ TİWARİ/ SANJAY TİWARİ, “How Artificial Intelligence, Machine Learning and Deep Learning are Radically Different?” International Journals of Advanced Research in Computer Science and Software Engineering, C. 8, S. 2, 2018.

NIEVES BRIZ/ ALLISON BENDER, “Key Challenges of Artificial Intelligence: Liability for AI Decisions” https://www.businessgoing.digital/key-challenges-of-artificial-intelligence-liability-for-ai-decisions/ (Date of Access: 27.11.2022).

ERDEM BÜYÜKSAĞİŞ, Hukuk Perspektifinden Yapay Zeka, 1. Baskı, İstanbul 2022.

CEVDET YAVUZ, “Türk Borçlar Kanunu Tasarısı’na Göre Kusursuz Sorumluluk Halleri ve İlkeleri”, Marmara Üniversitesi Hukuk Fakültesi Dergisi, C. 14, S. 4, 2008.

SEFA REİSOĞLU, Türk Borçlar Kanunu Genel Hükümler, 23. Baskı, İstanbul, 2012.

AHMET M. KILIÇOĞLU, Borçlar Hukuku Genel Hükümler, 20. Baskı, Ankara 2016, s. 342.

AMEDEO SANTOSUOSSO/ CHIARA BOSCARATO/ FLORO ERNESTO CAROLEO/ LYNETTE RENE LABRUTO/ CHRISTOPHE LEROUX, “Robots, Market and Civil Liability: A European Perspective In 2012 September”, The 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012.

EMRE BAYAMLIOĞLU, “Akıllı Yazılımlar ve Hukuki Statüsü: Yapay Zeka ve Kişilik Üzerine Bir Deneme”, in: Uğur Alacakaptan’a Armağan, C. 2, İstanbul 2008.

FOOTNOTES

1 Oracle, Yapay Zeka Nedir? (Date of Access: 27.11.2022) https://www.oracle.com/tr/artificial-intelligence/what-is-ai/.

2 Başak Bak, “Medeni Hukuk Açısından Yapay Zekanın Hukuki Statüsü ve Yapay Zeka Kullanımından Doğan Hukuki Sorumluluk”, TAAD, Y. 9, S. 35, 2018, p. 213.

3 David Calverley, “Artificial Intelligence As a Legal Person”, (Date of Access: 27.11.2022) http://www.terasemjournals.org/PCJournal/PC0201/calverley_d.html.

4 Onur Sarı, “Yapay Zekânın Sebep Olduğu Zararlardan Doğan Sorumluluk” TBB Dergisi, S. 147, 2020, p. 252.

5 Paulius Čerka/ Grigienė Jurgita/ Sirbikytė Gintarė, “Liability for damages caused by artificial intelligence”, Computer Law & Security Review, U.K., V. 31, S. 3, 2015, p. 378, cited in: Sarı, p. 523.

6 Türk Dil Kurumu İnternet Sözlüğü, (Date of Access: 27.11.2022) https://tdk.gov.tr.

7 Oracle, Yapay Zeka Nedir? (Date of Access: 27.11.2022) https://www.oracle.com/tr/artificial-intelligence/what-is-ai/.

8 Mert Bakırcı, “Yapay Zeka Hakkında Bir Rehber, Nedir, Ne değildir, Ne Olacaktır”, (Date of Access: 27.11.2022) https://evrimagaci.org/yapay-zeka-hakkinda-bir-rehber-nedir-nedegildir-ne-olacaktir-3667.

9 Emre Bayamlıoğlu, “Akıllı Yazılımlar ve Hukuki Statüsü: Yapay Zeka ve Kişilik Üzerine Bir Deneme”, in: Uğur Alacakaptan’a Armağan, C. 2, İstanbul 2008, p.135.

10 Alexandre Maia, “The Legal Status of Artificially Intelligent Robots”, (Date of Access: 27.11.2022) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2985466.

11 Avrupa Parlamentosu ve Avrupa Birliği Konseyi, Yapay Zekaya İlişkin Uyumlaştırılmış Kurallara Yönelik Tüzük Teklifi. Brüksel: 2021.

12 Seda Kara Kılıçarslan, “Yapay Zekânın Hukuki Statüsü ve Hukuki Kişiliği Üzerine Tartışmalar”, Yıldırım Beyazıt Üniversitesi Hukuk Dergisi, Yıl 4, S. 2, 2019, p. 378.

13 Özgür Taşdemir/ Ümit Vefa Özbay/ B. Onur Kireçtepe, “Robotların Hukuki ve Cezai Sorumluluğu Üzerine Bir Deneme”, Ankara Üniversitesi Hukuk Fakültesi Dergisi, C. 69, S. 2, 2020, p. 803.

14 For a study on this subject, see Ceren Özbek/ Veli Özer Özbek, “Yapay Zekânın Dâhil Olduğu Suçlar Bakımından Ceza Hukuku Sorumluluğunun Belirlenmesi”, Ceza Hukuku Dergisi, C. 14, S. 41, 2019, p. 603-622.

15 Kılıçarslan, Tartışmalar, p. 379.

16 European Parliament, Report with Recommendations to the Commission on Civil Law Rules on Robotics, (Date of Access: 27.11.2022) https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html.

17 Kılıçarslan, Tartışmalar, p. 379.

18 Cevdet Yavuz, “Türk Borçlar Kanunu Tasarısı’na Göre Kusursuz Sorumluluk Halleri ve İlkeleri”, Marmara Üniversitesi Hukuk Fakültesi Dergisi, C. 14, S. 4, 2008; Sefa Reisoğlu, Türk Borçlar Kanunu Genel Hükümler, 23. Baskı, İstanbul, 2012. p.48.

19 Tanya Tiwari/ Tanuj Tiwari/ Sanjay Tiwari, “How Artificial Intelligence, Machine Learning and Deep Learning are Radically Different?”, International Journals of Advanced Research in Computer Science and Software Engineering, C. 8, S. 2, 2018, p. 1.

20 Tiwari, ML and DL. p. 3.

21 Nieves Briz/ Allison Bender, “Key Challenges of Artificial Intelligence: Liability for AI Decisions” https://www.businessgoing.digital/key-challenges-of-artificial-intelligence-liability-for-ai-decisions/ (Date of Access: 27.11.2022).

22 Briz, Bender, Key Challenges. https://www.businessgoing.digital/key-challenges-of-artificial-intelligence-liability-for-ai-decisions/ (Date of Access: 27.11.2022).

23 Sinan Sami Akkurt, “Yapay Zekanın Otonom Davranışlarından Kaynaklanan Hukuki Sorumluluk”, Uyuşmazlık Mahkemesi Dergisi, S. 13, 2019, p. 48.

24 Avrupa Parlamentosu ve Avrupa Birliği Konseyi, Yapay Zekaya İlişkin Uyumlaştırılmış Kurallara Yönelik Tüzük Teklifi. Brüksel: 2021.

25 European Parliament, Report with Recommendations to the Commission on Civil Law Rules on Robotics, (Date of Access: 27.11.2022) https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html.

26 Amedeo Santosuosso/ Chlara Boscarato/ Floro Ernesto Caroleo/ Lynette Rene Labruto/ Christophe Leroux, “Robots, Market and Civil Liability: A European Perspective In 2012 September”, The 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012, p. 1051 – 1058.

27 Sarı, Sorumluluk, p. 260.

28 Sam Lehman-Wilzig, “Frankenstein Unbound: Toward a Legal Definition of Artificial Intelligence”, FUTURES: The Journal of Forecasting and Planning, C. 13, S. 6, 1981. p. 443.

29 European Parliament, Study On Civil liability Regime For Artificial Intelligence: European Added Value Assessment https://www.europarl.europa.eu/RegData/etudes/STUD/2020/654178/EPRS_STU(2020)654178_EN.pdf. (Date of Access: 27.11.2022).

30 European Parliament, Report with Recommendations to the Commission on Civil Law Rules on Robotics, https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html. (Date of Access: 27.11.2022).

31 Türkiye Cumhuriyeti Cumhurbaşkanlığı Dijital Dönüşüm Ofisi Başkanlığı, “Ulusal Yapay Zekâ Stratejisi” https://cbddo.gov.tr/SharedFolderServer/Genel/File/TRUlusalYZStratejisi2021-2025.pdf (Date of Access: 27.11.2022).

32 Ahmet M. Kılıçoğlu, Borçlar Hukuku Genel Hükümler, 20. Baskı, Ankara 2016, p. 342.

33 Ayyüce Kızrak, “Motivasyon, Yapay Zeka ve Derin Öğrenmenin Hikayesi”, https://yapayzeka.ai/motivasyon-yapay-zeka-ve-derin-ogrenmeninhikayesi (Date of Access: 27.11.2022).

34 Haluk Tandoğan, Türk Mesuliyet Hukuku, 1. Baskı, İstanbul 2010, p. 66.

35 Sarı, Sorumluluk, p. 286.

36 Erdem Büyüksağiş, Hukuk Perspektifinden Yapay Zeka, 1. Baskı, İstanbul 2022. p. 92 – 94.

37 Büyüksağiş, Yapay Zeka. p. 92 – 94.

Keywords
Artificial Intelligence, Legal Nature Of Artificial Intelligence, Legal Responsibility, Tort, Employer’s Liability, Organizational Responsibility.