

EXAMINATION OF THE REASONED DECISION DATED 15 MAY 2025 OF THE ISTANBUL 14TH COMMERCIAL COURT OF FIRST INSTANCE REGARDING THE USE OF ARTIFICIAL INTELLIGENCE DURING JUDICIAL PROCEEDINGS

GSI Brief 189



AI Consultancy
January 2026
DAVUT ELBAS, Author

A. Abstract

This brief analyses the legal and ethical dimensions of the use of artificial intelligence in the Turkish judiciary by centring on the Reasoned Decision of the Istanbul 14th Commercial Court of First Instance dated 15 May 2025 (Case No. 2023/856, Decision No. 2025/415). By transparently stating in its reasoning that artificial intelligence was employed as a technical tool for researching foreign law sources in the context of an action for the enforcement of a foreign court judgment, the decision goes beyond an approach that merely conceptualises artificial intelligence as a neutral “tool”. In this context, the compatibility of the decision with national and international ethical standards as well as with the fundamental procedural principles of Law No. 6100 on the Code of Civil Procedure (CCP) is examined. Ultimately, this pioneering decision is assessed as possessing an exemplary character, offering a responsible, balanced, and normatively well-grounded roadmap for the digital transformation of the Turkish judicial system.

I. INTRODUCTION

The integration of artificial intelligence (“AI”) into judicial systems constitutes one of the most dynamic and controversial fields of modern legal doctrine. The decision of the Istanbul 14th Commercial Court of First Instance numbered E. 2023/856, K. 2025/415 and dated 15 May 2025 (the “Reasoned Decision”) emerges as a turning point in Turkish judicial practice. In a case with a technical and international dimension, such as the recognition and enforcement of a foreign court judgment, the court’s explicit statement in its reasoning that artificial intelligence was used as a research and verification mechanism constitutes a concrete and pioneering precedent regarding how this technology may be integrated into judicial proceedings.

This brief analyses the role of artificial intelligence in judicial proceedings by centring on the said Reasoned Decision. In this context, the compatibility of the court decision with national and international ethical standards, its legitimacy in terms of the fundamental principles of the Turkish Code of Civil Procedure No. 6100 (“CCP”), and its stance vis-à-vis potential risks such as algorithmic bias and the “black box” problem will be examined.

II. THE THEORETICAL FRAMEWORK AND INTERNATIONAL STANDARDS OF THE USE OF ARTIFICIAL INTELLIGENCE IN JUDICIAL ACTIVITY

1. The Potential of the Use of Artificial Intelligence in Judicial Proceedings and Fundamental Concepts

Artificial intelligence generally refers to systems capable of performing cognitive functions specific to human intelligence, such as learning, reasoning, classification, pattern recognition, and problem-solving, through algorithms and data processing techniques1. In the field of law, AI applications are predominantly based on natural language processing, machine learning, and big data analytics techniques. These technologies hold significant potential, particularly in judicial activities, which constitute a text-intensive domain.

The primary promise of the use of AI in judicial proceedings lies in accelerating adjudication, reducing the increasing workload of judges and courts, efficiently scanning large volumes of case-law and legislative datasets, and enhancing consistency in decision-making processes2. Especially in complex commercial disputes, researching sources related to foreign legal systems or comparative law, examining numerous court decisions, and verifying their currency require substantial time and effort. AI-based tools are capable of contributing to the conclusion of proceedings within a reasonable time by accelerating such research and analysis processes.

However, judicial activity is not merely a technical data-processing process. Judicial decision-making is, in essence, a human and normative activity that involves the interpretation of legal norms, the assessment of the specific circumstances of the case, the consideration of the balance of interests between the parties, and often evaluations of equity. For this reason, the potential of AI in judicial proceedings should be approached through a framework that positions it not as a “decision-maker” but as an auxiliary tool supporting the judge’s decision-making process. However, this “tool” metaphor may not fully reflect the potential of AI. From a more advanced perspective, AI may also be regarded as an “artificial limb” that enables the judge to recognise and transcend cognitive limitations and illuminates visual and analytical blind spots3.

Within this framework, one of the fundamental concepts concerning the use of AI in judicial proceedings is the notion of “human-centred artificial intelligence”. The human-centred approach is based on the principle that AI is not the final decision-making authority and that human agency and oversight must be preserved at every stage of the decision-making process4. The role of AI should be limited to compiling, organising, and rendering legal information accessible, while the authority to conduct normative assessments and derive legal conclusions must always remain with the judge.

2. Models of the Use of Artificial Intelligence in Judicial Proceedings

With regard to the use of AI in judicial proceedings, it is possible in doctrine and practice to identify different models depending on the role assumed by AI5. These models are significant for assessing the degree of AI’s impact on the judicial process and the legal risks it may entail:

(a) Judge-Assisting Artificial Intelligence (Supportive Model): Under this model, AI does not in any way interfere with the judge’s decision-making authority and serves solely a research and technical support function. This scope includes activities such as case-law searches, legislative research, access to foreign legal sources, translation tasks, and file and document management. AI presents the information and materials necessary for the judge to conduct legal assessment in a faster and more systematic manner. This model is regarded as the form of use most compatible with the nature of judicial proceedings and carrying the lowest level of legal risk.

(b) Artificial Intelligence Preparing Draft Decisions (Advisory Model): Under this approach, AI may predict potential decision outcomes in relation to the specific dispute based on similar case-law, or provide the judge with draft decisions and reasoning proposals. Although the final decision is still rendered by the judge, in this model AI is seen to penetrate more deeply into the judicial reasoning process. This situation may give rise to drawbacks such as the de facto narrowing of the judge’s discretion, the emergence of a risk of “template” decision-making, and excessive dependence of the judge on AI-generated outputs. For this reason, the implementation of this model without robust procedural safeguards and transparency mechanisms entails significant risks.

(c) Decision-Making Artificial Intelligence (Autonomous Model): Under this model, AI is capable of rendering decisions directly in certain types of disputes without human intervention. Since judicial decision-making is not limited to the mechanical application of legal rules but also requires the assessment of abstract values such as human dignity, the sense of justice, and equity, this model does not align with the current constitutional and procedural framework of Turkish law6. Indeed, the Istanbul Bar Association has also stated that artificial intelligence systems lack elements such as judicial discretion, equity, and conscientious conviction, and therefore cannot be used as a direct decision-making mechanism within the judiciary7.

The Reasoned Decision constitutes an example of a conscious preference for the first model, namely the use of AI solely as a supportive and technical instrument. In this respect, the Reasoned Decision is of precedential importance in that it demonstrates, through a concrete case, the acceptable boundaries of the use of AI in judicial proceedings.

3. National and International Ethical Principles and Standards

In judicial proceedings, the ethical legitimacy of the use of AI is of decisive importance, alongside its legal legitimacy. At the international level, one of the most frequently referenced instruments in this field is the “European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems” adopted by the European Commission for the Efficiency of Justice (“CEPEJ”)8. This document sets out the fundamental ethical principles governing the use of AI in the judicial field and provides a guiding framework for member states, including Türkiye. In addition to these efforts, the recently adopted European Union Artificial Intelligence Act (EU AI Act) has further concretised the regulatory framework in this area by classifying AI systems used to support judicial processes as “high-risk”9.

At the core of this Charter lies the objective of safeguarding the fundamental values of judicial processes, and within this framework it requires compliance with the following core principles:

(i) the principle of respect for fundamental rights, under which AI applications must not undermine fundamental guarantees such as the right to a fair trial and access to justice;

(ii) the principle of non-discrimination, requiring the prevention of algorithms from reinforcing existing biases and producing discriminatory outcomes;

(iii) the principle of quality and security, ensuring that systems operate on reliable and up-to-date data so that erroneous outputs do not affect judicial proceedings;

(iv) the principle of transparency and impartiality, requiring AI to operate according to a logic that is auditable and intelligible rather than constituting a “black box”; and

(v) the principle of user control (human in the loop), as the most fundamental safeguard, whereby the final decision-making authority and responsibility must in all circumstances remain with the human actor, namely the judge10.

At the national level, the decision of the Ethics Board for Public Officials of the Republic of Türkiye dated 10 September 2024 and numbered 2024/108, entitled “Ethical Principles to Be Observed by Public Officials in the Use of Artificial Intelligence Systems” (the “POEB Ethical Principles”)11, constitutes a binding reference point for members of the judiciary. The principles of transparency, accountability, competence, and human-centredness constitute the core values delineating the boundaries of the use of AI in judicial proceedings. The explicit reference to this ethical framework in the examined Reasoned Decision12 is of particular significance, as it demonstrates that the use of AI is grounded not in arbitrariness but in a normative foundation. The court’s adoption of the principles of “transparency” and “accountability” by reference to the POEB Ethical Principles elevates this use beyond a discretionary technological experiment and may be interpreted as a practice of “epistemic humility,” whereby the judge acknowledges their own cognitive limitations, as well as a form of “blind-spot discipline” aimed at overcoming those limitations13.

III. EXAMINATION OF THE REASONED DECISION OF THE ISTANBUL 14TH COMMERCIAL COURT OF FIRST INSTANCE DATED 15 MAY 2025

1. Summary of the Dispute Subject to the Decision and the Court’s Justification for the Use of Artificial Intelligence

The Reasoned Decision under examination was rendered by the Istanbul 14th Commercial Court of First Instance within the scope of an action filed for the recognition and enforcement in Türkiye of a foreign court judgment. The dispute centres on whether a claim arising from a consultancy agreement governed by Dutch law, which was adjudicated before Dutch courts, may acquire enforceability within the Turkish legal order. The claimant filed the action seeking recognition and enforcement of the final foreign court judgment, while the defendant raised various procedural and substantive objections, primarily alleging the absence of reciprocity pursuant to Article 54(1)(a) of the Turkish Private International Law and Procedural Law Act (PILA)14, that the decision produced results contrary to Turkish public policy, and that the contractual penalty stipulated in the judgment was excessive.

In this context, the dispute before the court went beyond a conventional recognition and enforcement action and necessitated the determination of the current state of foreign legal practice and foreign court case-law. In particular, for the purposes of assessing the reciprocity requirement, it became necessary to establish how Turkish court decisions are treated in practice under Dutch law. This situation rendered it necessary to examine not only legislative texts but also up-to-date decisions of Dutch courts.

Within this framework, the court explicitly stated in its reasoning that it had received technical support from an artificial intelligence tool for the purposes of accessing foreign legal sources, researching foreign court decisions, and verifying their content15. The court’s justification for the use of artificial intelligence was that this tool was employed not to substitute legal evaluation or decision-making, but rather as a technical support instrument facilitating access to foreign legal sources and performing research and verification functions. Accordingly, artificial intelligence was positioned not as a substitute for the court’s legal reasoning, but as an auxiliary element that supplements and accelerates it.

2. The Court’s Legitimation of the Use of Artificial Intelligence in the Reasoning of the Decision

The distinguishing feature of the Reasoned Decision under examination within Turkish judicial practice lies in the court’s explicit disclosure of its use of artificial intelligence in the reasoning of the decision and its attempt to legitimise this use through specific normative grounds. By explaining in detail the purpose and limits of the use of artificial intelligence, the court demonstrated that the inclusion of this technology in the adjudicatory process was neither arbitrary nor implicit.

The first prominent element in the court’s reasoning is the association of the use of artificial intelligence with efficiency and the right to be tried within a reasonable time. The court acknowledges that AI-assisted tools contribute to the efficiency of proceedings by accelerating such research; however, it emphasises that this contribution must remain under judicial supervision and within a limited framework.

Secondly, the court perceived the need to ground the use of artificial intelligence on an ethical basis and, in this context, explicitly referred to the POEB Ethical Principles. Placing particular emphasis on the principles of transparency and accountability, the court articulated this position in the following terms: “...that the use of artificial intelligence was openly disclosed, that all source articles and links were recorded and rendered auditable for appellate review, and that this approach fully complied with the ‘Transparency’ and ‘Accountability’ principles set forth in the Ethics Board Decision...”16.

The court also stated that outputs generated by artificial intelligence should not be accepted unquestioningly and that the duty of verification and ultimate responsibility rested entirely with the judge, expressing this as follows: “...that the criteria of accuracy control (pursuant to the provision that ‘outputs generated by artificial intelligence systems should not be used without ensuring their accuracy’, all findings were independently verified...) and control obligation (in accordance with the principle that one is ‘responsible for verifying the accuracy and reliability of the content or decision’, all outputs were reviewed) were complied with...”17.

The court further expressly emphasised that artificial intelligence did not become a decision-making element and stated that legal interpretation, assessment of evidence, formation of judicial conviction, and the rendering of the final decision remained entirely within the responsibility of the judge. At this point, artificial intelligence was positioned as a tool facilitating the judge’s access to information, similar to a database, a search engine, or a calculator. This point is explained in the Reasoned Decision as follows: “…that the making of legal interpretations falling within the judge’s exclusive authority, the assessment of evidence, and the formation of judicial conviction … are matters in which the final decision remains within the judge’s discretion … and that artificial intelligence is never used in a decision-making capacity but solely functions as a research support tool…”18.

Ultimately, the comprehensive legitimation framework adopted by the court transforms the use of artificial intelligence from a concealed auxiliary tool into a verifiable method explicitly articulated within the reasoning of the decision. This approach not only enables the parties and higher courts to assess the scope and impact of such use, but also demonstrates full compatibility with the principles of transparent and auditable justice endorsed by international organisations such as UNESCO and CEPEJ19.

3. Assessment of the Reasoned Decision from the Perspective of the CCP and Fundamental Principles of Adjudication

The Reasoned Decision must also be assessed in terms of the compatibility of artificial intelligence use with the Code of Civil Procedure and the fundamental principles of adjudication.

Pursuant to Article 31 of the CCP20, the duty imposed on the judge to clarify the dispute requires the judge to conduct all legal and factual inquiries necessary to resolve the dispute accurately and comprehensively. In the decision under review, the use of artificial intelligence is understood to be regarded as an auxiliary tool facilitating the fulfilment of this duty. From this perspective, AI does not restrict the judge’s discretionary power, but rather serves as a supporting element enabling its more sound and effective exercise.

From the perspective of the right to a fair trial, the use of artificial intelligence must be compatible with the principles of equality of arms and adversarial proceedings as regulated under Article 27 of the CCP21. The explicit disclosure by the court, within the Reasoned Decision, of both the use of artificial intelligence and the sources accessed thereby (including ECLI numbers and hyperlinks) renders it, at least in theory, possible for the parties to challenge the content and consequences of such use. This approach prevents AI-assisted research from transforming into a “closed” source of information for the parties and constitutes an important safeguard ensuring the contestability of judicial reasoning, as emphasised in the UNESCO judicial guidance22.

From the perspective of the principle of the non-delegability of judicial functions, it is observed that the Reasoned Decision carefully delineates this boundary. The court expressly stated that artificial intelligence did not intervene in inherently normative and discretionary domains, such as legal interpretation, the assessment of evidence, or the evaluation of public policy.

Finally, the right to a reasoned decision, as regulated under Article 141 of the Constitution of the Republic of Türkiye No. 270923 and Articles 27 and 297 of the CCP, constitutes one of the most critical points of intersection with the use of artificial intelligence. The court’s explicit disclosure of the use of artificial intelligence within the reasoning enhances both the auditability of the decision and the transparency of judicial activity. A reasoned decision requires not only the explanation of the outcome reached, but also of the method by which that outcome was achieved. In this context, the inclusion of artificial intelligence usage within the reasoning of the decision may be regarded as an approach compatible with the conception of a reasoned decision under both the CCP and the Constitution. In these respects, the decision under review constitutes a significant example demonstrating that the use of artificial intelligence can be positioned in a manner that does not conflict with existing principles of procedural law and, in certain respects, even contributes to their more effective implementation.

IV. LEGAL DEBATES, RISKS, AND OPPORTUNITIES ARISING FROM THE DECISION

1. The Issue of Algorithmic Neutrality and “Algorithmic Bias”

One of the most fundamental legal and ethical debates concerning the use of artificial intelligence in judicial proceedings relates to the neutrality of algorithms and the risk of “algorithmic bias”. AI systems fundamentally possess the capacity to learn and generate outputs based on the datasets provided to them. Where such datasets are incomplete, imbalanced, historically biased, or disproportionately reflect a particular legal culture, the outputs produced by artificial intelligence may likewise reproduce and reinforce similar biases. Indeed, the revelation that AI-based recidivism risk assessment tools used in the United States criminal justice system, such as the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), systematically generated higher risk scores for certain ethnic groups constitutes one of the most concrete examples of this danger24.

From the perspective of judicial activity, this risk becomes particularly pronounced in areas such as foreign law research, comparative law analysis, and case-law review. Where artificial intelligence prioritizes decisions of only certain courts, sources in specific languages, or particular publication outlets, legal reality may be presented in an incomplete or one-sided manner. This may result in an unintentional narrowing of the information set presented to the judge and may, indirectly, influence the decision-making process.

In the Reasoned Decision under review, the court’s positioning of artificial intelligence solely as a technical support and research tool does not entirely eliminate the risk of algorithmic bias; however, it significantly limits the impact of such risk. This is because the outputs generated by artificial intelligence are not treated as the direct basis of the final decision, but are instead subjected to the judge’s independent evaluation. Nevertheless, should the use of artificial intelligence become more widespread in the future, it appears inevitable that the risk of algorithmic bias will need to be addressed in a more systematic manner, particularly through the establishment of standards concerning the data sources employed.

2. Concerns Relating to Transparency and Accountability

Another fundamental area of debate concerning the use of artificial intelligence in judicial proceedings relates to transparency and accountability. Many artificial intelligence systems, particularly those based on machine learning, do not render their decision-making or output-generation processes fully explainable to human users. This phenomenon is commonly referred to in the literature as the “black box” problem25. From the perspective of judicial activity, the black box problem carries an especially sensitive character. This is because the legitimacy of judicial decisions depends not only on the correctness of the outcome, but also on the explainability of the reasons and methods through which that outcome is reached.

In the Reasoned Decision under review, the court’s explicit disclosure of the use of artificial intelligence in the reasoning and its detailed explanation of the scope of such use constitute a positive example in terms of transparency and accountability. The court did not conceal the purpose for which artificial intelligence was used; on the contrary, it incorporated the nature and limits of such use into the reasoning of the decision, thereby subjecting it to review. However, transparency should not be limited solely to the disclosure of the fact of use. In particular, it is a well-known risk that generative artificial intelligence tools may produce erroneous, outdated, or fictional (“hallucinated”) information. Indeed, recent instances in the United States where federal judges acknowledged that generative artificial intelligence tools used by court staff produced non-existent precedents or factually incorrect draft judicial decisions have concretely demonstrated how dangerous the uncontrolled use of such technologies may be26. Accordingly, where artificial intelligence produces an erroneous translation, an incomplete case-law review, or presents an outdated foreign judgment, the judge bears the essential responsibility to verify such outputs against reliable sources27. The question of who bears responsibility for harm arising from an erroneous artificial intelligence output, whether the developer of the system or the court, also constitutes a significant legal debate. The court’s assumption of ultimate responsibility and its treatment of artificial intelligence solely as a “tool” offer a pragmatic solution to this responsibility issue within the existing legal framework. Nevertheless, this does not imply that the judge may accept artificial intelligence outputs uncritically; on the contrary, it further intensifies the obligation to independently verify and supervise all information produced28.

3. The Human Dimension of Judicial Proceedings and Social Acceptance

Judicial activity does not consist solely of the application of legal norms, but is also directly connected to social values, the sense of justice, and individuals’ trust in the judiciary. For this reason, the use of artificial intelligence in judicial proceedings must be assessed not only in terms of technical accuracy, but also with regard to social acceptance and the human dimension of adjudication.

The judge’s judicial conviction, equitable assessment, and intuitive ability to weigh the specific circumstances of a concrete case constitute indispensable components of the judicial decision-making process. No matter how advanced they may be, artificial intelligence systems do not possess the capacity to substitute these human elements29. This view is also shared by professional organizations such as the Istanbul Bar Association, which emphasize that artificial intelligence is inadequate in evaluating abstract elements such as conformity with the ordinary course of life, proportionality, and fairness30.

In the Reasoned Decision under review, the court’s explicit emphasis that artificial intelligence is not a decision-making authority but merely a supportive tool may be regarded as a conscious choice aimed at mitigating this perceptual risk. The emphasis placed in the reasoning on the role and responsibility of the judge serves to preserve the human character of judicial proceedings. In this respect, the decision demonstrates that the use of artificial intelligence is possible without undermining the “human” nature of adjudication.

4. Opportunities for the Turkish Judicial System: Expediting Proceedings and Consistency of Case Law

In addition to the risks associated with the use of artificial intelligence, the decision under review also renders visible significant opportunities for the Turkish judicial system. In particular, in disputes involving a foreign element, such as recognition and enforcement proceedings, access to foreign legislation and case law often constitutes one of the most time-consuming stages of the proceedings. In the present case, the process of researching foreign court decisions for the purpose of determining de facto reciprocity illustrates the concrete benefits of using artificial intelligence.

In such complex and technical matters, artificial intelligence may reduce the routine research burden on judges, enabling them to focus more extensively on the merits of the dispute and the substantive legal assessment. Moreover, AI-assisted case-law search and analysis tools may, in the long term, contribute to enhancing consistency in judicial precedents. The easier identification of inconsistencies among decisions rendered by different courts in similar disputes may strengthen the predictability of the judicial system and legal certainty31. Türkiye’s extensive digital judicial data accumulation through the National Judiciary Informatics System (UYAP) presents a unique potential for the development of domestic and national artificial intelligence tools in this field. Within this framework, the decision under review demonstrates that artificial intelligence, rather than constituting a threat to the Turkish judicial system, may offer a significant area of opportunity when employed within appropriate boundaries and with adequate procedural safeguards.

V. CONCLUSION

The Reasoned Decision constitutes a pioneering example that concretizes the use of artificial intelligence in judicial proceedings in Türkiye. The court employed artificial intelligence not as a decision-making mechanism, but in a limited manner as a supportive and technical tool facilitating source identification, translation, and verification processes in the context of foreign law and case-law research. This approach is compatible with the judge’s duty to clarify the case under Article 31 of the CCP and, due to the explicit disclosure of the method used in the reasoning, provides a positive foundation in terms of transparency and reviewability within the scope of Article 27 of the CCP.

Nevertheless, algorithmic bias, the “black box” problem, and risks specific to generative artificial intelligence, such as error and hallucination, require particular caution in the context of judicial proceedings. Accordingly, even where artificial intelligence is utilized, the responsibility for verification and the final discretionary authority must, in all circumstances, remain with the judge. The reference made in the Reasoned Decision to the POEB Ethical Principles is of particular importance in placing the use of artificial intelligence on a normative foundation structured around transparency, accountability, and human-centricity.

B. KEY TAKEAWAYS

(1) Artificial intelligence should be positioned in judicial proceedings not as a decision-making authority, but as a supportive tool used under the supervision of the judge.

(2) The Reasoned Decision constitutes one of the first concrete examples demonstrating that artificial intelligence may be used transparently for technical support purposes in foreign law and case-law research.

(3) In the use of artificial intelligence in judicial proceedings, the principle of human-centricity (human-in-the-loop) constitutes the fundamental safeguard; the final decision and responsibility must always remain with the judge.

(4) The explicit disclosure of artificial intelligence use in the reasoning of the decision constitutes an important procedural safeguard in terms of transparency, auditability, and contestability.

(5) Although the risks of algorithmic bias, the “black box” problem, and hallucinations cannot be entirely eliminated, limiting artificial intelligence to a supportive model significantly mitigates these risks.

(6) The supportive use of artificial intelligence is compatible with Article 31 of the CCP concerning the judge’s duty to clarify the case; the disclosure of such use in the reasoning satisfies the requirement of a reasoned judgment under Article 297 of the CCP.

(7) The POEB Ethical Principles constitute a fundamental reference point defining the ethical and normative boundaries of artificial intelligence use for members of the judiciary.

(8) In the face of these risks, the obligation to independently verify artificial intelligence outputs and the exclusive authority of final normative assessment must, in all circumstances, remain with the judge.

(9) Due to the human and normative nature of judicial proceedings, the role of artificial intelligence must remain limited in areas requiring equity and the formation of judicial conviction.

(10)For the reliable integration of artificial intelligence into judicial practice in Türkiye, there is a need for a clear legal framework defining usage models and procedural safeguards such as transparency, verification, and contestability.xx

Footnotes

1.Betül Çatal, “The Use of Artificial Intelligence in Decision-Making Processes”, Selçuk University Journal of the Faculty of Law, Vol. 33, No. 1, 2025, pp. 316–317.
2.United Nations Educational, Scientific and Cultural Organization (UNESCO), Guidelines for the Use of AI Systems in Courts and Tribunals, 2025, p. 7. https://www.unesco.org/en/articles/guidelines-use-ai-systems-courts-and-tribunals (Access Date: 07.01.2026)
3.Ramazan Çakmakcı / Ali Başaran, “Artificial Intelligence Appearing for the First Time in the Reasoning of a Turkish Court: The Beginning of the Path from a Tool to a Cognitive ‘Limb’”, 2025. https://legal.com.tr/blog/insan-haklari-hukuku/yapay-zekâ-ilk-kez-turk-mahkemesi-gerekcesinde-aractan-bilissel-uzuva-giden-yolun-baslangici/ (Access Date: 07.01.2026).
4.Ethics Board for Public Officials of the Republic of Türkiye, Principle Decision dated 10 September 2024 and numbered 2024/108, entitled “Ethical Principles to be Observed by Public Officials in the Use of Artificial Intelligence Systems”. https://www.etik.gov.tr/icerikler/2024-108-sayili-ilke-karari-yapay-zekâ-sistemlerinin-kullaniminda-kamu-gorevlilerinin-uymasi-gereken-etik-davranis-ilkeleri/ (Access Date: 07.01.2026); UNESCO, Guidelines, p. 20 (Principle 1.14. Human-centric and participatory design).
5.Çatal, p. 314.
6.Hikmet Bilgin, “An International Perspective on the Use of Artificial Intelligence in Court Decisions and Reflections on Robot Judges”, İnönü University Journal of the Faculty of Law, Vol. 13, No. 2, 2022, p. 416.
7.Istanbul Bar Association Information Technology Law Commission, Declaration on the Use of Artificial Intelligence in the Judiciary, dated 13 December 2022. https://www.istanbulbarosu.org.tr/HaberDetay.aspx?ID=17446&Desc=Yarg%B9da-Yapay-Zek%E2-Kullan%B9m%B9-Hakk%B9nda-Bildiri (Access Date: 07.01.2026)
8.European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, Strasbourg, 2018. https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment (Access Date: 07.01.2026).
9.Federica Casarosa, “Regulation by the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act): an analysis”, JuLIA Handbook: Artificial Intelligence, Judicial Decision-Making and Fundamental Rights, 2024, p. 71.
10.CEPEJ, European Ethical Charter, 2018, pp. 7–12; Gizem Yılmaz, “Yapay Zekânın Yargı Sistemlerinde Kullanılmasına İlişkin Avrupa Etik Şartı” [“The European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems”], Marmara Avrupa Araştırmaları Dergisi, Vol. 28, No. 1, 2020, p. 35.
11.POEB Ethical Principles. https://www.etik.gov.tr/icerikler/2024-108-sayili-ilke-karari-yapay-zekâ-sistemlerinin-kullaniminda-kamu-gorevlilerinin-uymasi-gereken-etik-davranis-ilkeleri/ (Access Date: 07.01.2026).
12.Istanbul 14th Commercial Court of First Instance, Case No. E. 2023/856, Decision No. K. 2025/415, dated 15 May 2025, pp. 15–16. https://www.lexpera.com.tr/Device/ServeDeviceDataPage/?returnUrl=%2fictihat%2fadli-yargi-ilk-derece-mahkemeleri%2fistnbl-14-asliye-ticaret-mahkemesi-e-2023-856-k-2025-415-t-15-5-2025 (Access Date: 07.01.2026)
13.Çakmakcı / Başaran, “Artificial Intelligence Appearing for the First Time in the Reasoning of a Turkish Court”, 2025.
14.Law No. 5718 on International Private and Procedural Law, Art. 54(1)(a): “a) Existence of an agreement, on a reciprocal basis between the Republic of Turkey and the country where the court decision is given or a de facto practice or a provision of law enabling the authorization of the execution of final decisions given by a Turkish court in that country.”.
15.Istanbul 14th Commercial Court of First Instance, Case No. E. 2023/856, Decision No. K. 2025/415, dated 15 May 2025, pp. 15–16.
16.Ibid., pp. 15–16.
17.Ibid., p. 16.
18.Ibid., p. 16.
19.UNESCO, Guidelines, 2025, p. 19.
20.Code of Civil Procedure (Law No. 6100), Art. 31: “The judge may have the parties make a statement about matters considering materially or legally unclear or contradictory, ask questions, demand the parties to provide evidence in the circumstances required.”.
21.CCP Art. 27: “(1) The parties, the joinders and the others concerned with the lawsuit shall have the right to be heard. (2) This right shall contain a) Having knowledge of the proceedings, b) the right of explanation and proof, c) that the Court shall consider the case in the light of explanations and that its decisions shall be justified tangibly and explicitly.”.
22.UNESCO, Guidelines, 2025, p. 20.
23.Constitution of the Republic of Türkiye (Law No. 2709), Art. 141: “…The decisions of all courts shall be written with a justification …”
24.Marco Gioia, “AI risk assessment tools for criminal justice: risks to human rights and remedies”, in JuLIA Handbook, 2024, p. 93.
25.Gioia, p. 94; Bilgin, p. 416.
26.United States Senate Judiciary Committee, Press Release, “Grassley Releases Judges’ Responses Owning Up to AI Use, Calls for Continued Oversight and Regulation”, 23.10.2025. https://www.judiciary.senate.gov/press/rep/releases/grassley-releases-judges-responses-owning-up-to-ai-use-calls-for-continued-oversight-and-regulation (Access Date: 07.01.2026).
27.UNESCO, Guidelines, 2025, p. 29.
28.Raisul Sourav, “Relying on AI in Judicial Decision-Making: Justice or Jeopardy?”, 2025. https://publicpolicy.ie/papers/relying-on-ai-in-judicial-decision-making-justice-or-jeopardy/ (Access Date: 07.01.2026).
29.Yılmaz, p. 47.
30.Istanbul Bar Association Information Technology Law Commission, Declaration on the Use of Artificial Intelligence in the Judiciary, dated 13 December 2022.
31.Bilgin, p. 412.