ABSTRACT
This article compares, as of 2024, the principal AI regulations in the United States, the European Union, and Turkey, examining their differing approaches, regulatory impacts, and effects on sectoral innovation. Global responses are also addressed, and the discussion closes with how comprehensive regulations might be introduced on a global scale in the future.
I. INTRODUCTION
The rapid development of artificial intelligence technologies in recent years has paved the way for significant changes in the economic, social, and political structures of societies. This transformation has made it necessary to develop various regulatory approaches worldwide. Fundamentally, the European Union (“EU”) and the United States of America (“US”) stand out as representatives of two different schools of thought regarding Artificial Intelligence (“AI”) regulations. The EU, with a stronger emphasis on human rights and ethical values and the global influence that Anu Bradford has termed the “Brussels effect”1, has taken the lead in this matter, whereas the US pursues a more sectoral and flexible approach aimed at fostering innovation.
This article conducts a comparative analysis of global AI regulations, particularly through the examples of the EU and the US. It examines the EU’s horizontal and inclusive regulations and the impact of the Brussels effect2 on the policies adopted by other countries, as well as the US strategies that prioritize sectoral regulation. It then discusses the influence of these two schools of thought on global AI regulation, evaluating how they shape other countries’ regulation of AI systems worldwide, along with the advantages and limitations of each approach.
II. THE EUROPEAN UNION APPROACH: THE EU AI ACT AND ITS IMPACT
First, the European Union’s approach stands out in that it aims to provide a framework applicable to all sectors. Because the European Commission’s method can apply across every sector, it is referred to as a “horizontal approach”. The Commission describes it as “comprehensive and future-proof”, incorporating mechanisms capable of adapting3. Although the standards and approach the EU seeks to establish appear to succeed in forming a general framework, there is concern that, given the rapid development of AI technologies and their varying impacts, this might lead to a lack of flexibility in sector-specific implementations.
A. A Look at EU’s Risk-Based Approach
The EU, through its risk-based approach, aims to classify systems based on their level of risk and to introduce regulations suitable for each level. In doing so, it seeks to strike a balance between promoting innovation and protecting fundamental rights. In addition to aiming to improve the functioning of the internal market, the EU’s Artificial Intelligence Regulation also endeavors to provide a high level of protection in areas such as health, safety, fundamental rights, and environmental protection. In this context, the rationale for the regulation includes the following statements: “The purpose of this Regulation is to improve the functioning of the internal market and promote the adoption of human-centric and trustworthy artificial intelligence, while at the same time ensuring a high level of protection for the health, safety, fundamental rights, democracy, the rule of law, and environmental protection enshrined in the Charter, and supporting innovation while providing safeguards against the harmful effects of AI systems”4.
B. EU’s Global Regulative Leadership
In the AI race, the EU, alongside developing technology, also assumes a leadership position in shaping global-scale regulations that ensure the relevant technologies are ethical, transparent, and accountable. The EU not only sets a standard for players wishing to enter the European market but also intends to draw a framework worldwide. It appears to be following a path similar to the one the General Data Protection Regulation (“GDPR”) took in shaping global data protection standards; however, in adopting a risk-based approach that treats AI systems in different categories, it can be said to deviate to some extent from the “uniform” approach it previously adopted in data protection.
In response to criticisms that its earlier regulations were inflexible, the EU this time envisages different rules for different risk groups; whether this will benefit technological development in practice, however, remains an open question. Because the EU has taken these regulatory steps early, countries wishing to introduce similar regulations can use them as a benchmark. Actors who want to be present in the EU market must comply with these standards, and as other countries model their future regulations on the EU’s, those actors are driven to align with EU rules in other markets as well. This underscores the EU’s strength in setting policy at the global level.
III. THE UNITED STATES APPROACH
The United States has not yet established a single comprehensive regulatory framework for the AI ecosystem. Instead, it seeks to guide the use of emerging technologies by updating existing laws and guidelines or adopting sector-specific legislation (e.g., finance, healthcare, transportation) in order not to stifle innovation. This approach, characterized by “light-touch” and flexible principles, aims to advance rapidly evolving AI applications while keeping regulatory obstacles to a minimum. At the same time, voluntary standards and guidance issued by federal agencies encourage the market’s self-regulatory capacity, and the government remains attentive to potential risks through strategic R&D support and targeted sectoral regulations.
A. Executive Order 13859 (Maintaining American Leadership in Artificial Intelligence)
Executive Order 13859, the first of two significant executive orders on AI issued in the United States in 2019 and 2020, underscored the importance of AI for national security, economic competitiveness, and scientific leadership. It emphasized the need for the United States to maintain its leadership in these spheres while highlighting that safeguarding national values and security is an integral part of this effort5.
Furthermore, it calls for the development of certain technical standards, later reinforced by the National Institute of Standards and Technology through its Artificial Intelligence Framework6.
Section 1(b) of Executive Order 13859 states: “The United States should encourage the development of appropriate technical standards to support the creation of new artificial intelligence-based industries and the adoption of artificial intelligence in existing industries and reduce barriers to the safe testing and application of artificial intelligence technologies”.
This statement clearly reveals the importance the United States places on technological advancement and its determination to prevent any disruption to that progress.
B. Executive Order 13960 (Promoting the Use of Trustworthy Artificial Intelligence in Federal Government)
While Executive Order 13859 outlines a strategic roadmap for strengthening the United States’ global leadership in artificial intelligence, Executive Order 13960 introduces more detailed implementation principles, standards, and accountability mechanisms to ensure the ethical and trustworthy use of AI technologies within federal agencies7.
C. Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence)8
Executive Orders 13859 and 13960 were signed by Donald Trump; the subsequent evolution and widespread adoption of language models such as ChatGPT laid the groundwork for a third executive order on AI, Executive Order 14110, signed by Joe Biden. In the absence of policy measures, the development of AI models raised various concerns among experts, ranging from the potential existential risks posed by advanced AI models in the future to the present-day effects of these technologies in spreading misinformation, enabling discrimination, and undermining national security.
The policy objectives set forth in the presidential executive order focus on increasing competition in the AI sector, safeguarding civil liberties and national security against AI-induced threats, and ensuring that the United States remains competitive on a global scale in the field of AI. Additionally, the order calls for the creation of specialized positions—referred to as “chief AI officers”— within major federal agencies. In short, it lays out the national approach of the United States to AI regulations.
We can clearly see that these executive orders frequently reference competition and innovation in the AI sector, as well as underscore the importance of maintaining the country’s leadership in global AI developments.
D. Other Regulations
The United States tends to develop guidance-based frameworks rather than comprehensive and rigid legislation like the European Union. This enables institutions to establish flexible, goal-oriented rules within their own operations.
1. NIST AI Risk Management Framework (“NIST AI RMF”)
NIST AI RMF is a guide developed by the National Institute of Standards and Technology (NIST) and published in 2023. Prepared to ensure that AI applications in the United States are developed in a trustworthy, ethical, and risk-oriented manner, this framework serves as a voluntary guide that stakeholders of any sector, application type, or organizational size can utilize.
The NIST AI Risk Management Framework is one of the most significant guides complementing the “less intrusive” approach of U.S. AI regulations. Although it does not carry legal enforceability, it provides a comprehensive roadmap for responsible AI development and management for a wide range of stakeholders—from public institutions to private enterprises. As AI technologies continue to proliferate, applying this framework will become increasingly critical, both in establishing sectoral standards and in maintaining public trust9.
2. Algorithmic Accountability Act of 2023
The Algorithmic Accountability Act of 2023 is a relatively targeted and measured legislative proposal aimed at enhancing transparency, reliability, and accountability in critical decision-making areas where AI and automation are increasingly utilized. Although the bill does not adopt a radical approach, such as creating a new licensing process or a dedicated AI agency, its introduction of measures like the mandatory “impact assessment” and the FTC’s new technological unit is expected to elevate the United States’ prominent “sector-based, light-touch”10 approach to the next level. Considering the multifaceted impacts of AI technologies, this bill represents a significant step toward more systematically addressing issues related to discrimination risks and data quality11.
If enacted in its final form, the bill would provide firms with clear rules and guidelines while also enabling consumers to more easily detect and challenge automation-related issues.
E. General Assessment
Rather than a single comprehensive law, the United States’ AI regulatory framework is built around flexible, sector-focused regulations, voluntary standards, and strategic R&D support. This model aims to manage key risk areas using existing regulatory tools while safeguarding innovation.
IV. TURKEY’S AI REGULATION AND STRATEGY
Although Turkey does not yet have a comprehensive AI regulation, its “National Artificial Intelligence Strategy,” prepared in 2021, stands out as Turkey’s first national strategy document in this field.
The vision of the Strategy is “to create global value through an agile and sustainable artificial intelligence ecosystem for a prosperous Turkey,” and it is shaped by six strategic priorities: “training AI specialists and increasing employment in the field; supporting research, entrepreneurship, and innovation; expanding access to quality data and technical infrastructure; enacting regulations that accelerate socioeconomic integration; strengthening international collaborations; and accelerating structural and workforce transformation.” Within the framework of these priorities, various objectives, measures, and targets have been set12.
So far, the provisions concerning AI in Turkey have been integrated into various pieces of legislation, meaning the regulatory approach has been somewhat scattered. Consequently, one possibility for Turkey, while still in the early stages, would be to adopt a comprehensive framework law on AI, leaving more detailed provisions for specific sectors and secondary regulations to be addressed later. Under an overarching law that guarantees general principles and fundamental rights, additional regulations tailored to each sector’s unique dynamics could provide a more flexible and effective framework in practice. In this way, stricter rules could be applied to high-risk areas, while start-ups and innovation-focused initiatives could be spared excessive bureaucratic burdens through supportive measures.
Therefore, given Turkey’s existing legal and institutional framework, a model based on a general statute but emphasizing sectoral and secondary regulations could be implemented to produce beneficial outcomes.
V. NOTEWORTHY DEVELOPMENTS IN THE REST OF THE WORLD
A. South Korea
On December 26, 2024, South Korea became the second country to enact a comprehensive artificial intelligence law, largely inspired by the EU AI Act. Known as the “Basic Law for AI Development and Trust-Based Establishment,” the regulation passed through parliament with an overwhelming majority. The legislation categorizes AI systems, such as those with high impact or generative capabilities, and imposes various obligations on businesses to ensure the reliability of high-impact AI systems. It resembles the EU AI Act particularly in its risk-based regulatory approach, its emphasis on ethical and trustworthy AI systems, its protection of fundamental rights, its transparency requirements, and its establishment of oversight mechanisms.
B. Brazil
On December 10, 2024, the Brazilian Senate approved Bill No. 2338/2023, which outlines rules for the development and use of artificial intelligence (AI). The proposed Brazil AI Law aims to set operational guidelines and requirements for AI systems, protect human rights, and enforce criminal sanctions in cases of noncompliance. By adopting a risk-based approach, it imposes stricter rules on high-risk systems that may affect public safety or fundamental rights, while imposing various obligations on providers and operators—particularly additional safety measures for high-risk systems.
The proposed legislation shares similarities with the EU AI Act in its risk-based approach. At the same time, it also follows a principle-based approach, clearly defining fundamental values and principles. Concepts such as human-centeredness, respect for human rights, environmental protection, equality and non-discrimination, technological innovation, privacy, and data protection are included in the draft law. Due to its comprehensive, principle-focused nature, Brazil’s AI bill stands out as a measure that could potentially serve as an international reference.
VI. CONCLUSION
By 2024, artificial intelligence regulations have emerged globally as an area marked by distinct approaches and rapid transformation. With its risk-based, horizontal, and comprehensive framework, the EU wields what is termed the “Brussels effect,” possessing the potential to shape other countries’ regulations. The United States, on the other hand, focuses on preserving innovation and enhancing competitiveness through more sector-specific and flexible frameworks, opting for guides, voluntary standards, and regulations tailored to particular areas instead of a single comprehensive law. In Turkey, although a unified, broad-based AI framework has yet to be introduced, the measures taken within the scope of the “National Artificial Intelligence Strategy” and the integration of relevant legislation suggest the possibility of more systematic regulations in the future.
All of these varied approaches share a common aim: maximizing the societal and economic benefits of AI applications while preventing possible risks and harms. Data protection, the prevention of discrimination, and national security have become top regulatory priorities in nearly every region. In the coming period, both implementing these regulatory frameworks and updating them in line with the rapid pace of technological advancement will become imperative.
In conclusion, each country’s choices regarding AI regulations are shaped by its legal system, economic and societal priorities, and global competitiveness goals. Nevertheless, a growing awareness of shared ethical principles and the protection of fundamental rights points to a potential evolution toward a certain level of global regulatory standardization. Considering AI’s future socioeconomic impact, dynamic and inclusive regulations that do not merely restrict technological progress but rather support it in a responsible, human-centered manner will be one of the fundamental cornerstones of global collaboration.
BIBLIOGRAPHY
ALEXANDRA KELLEY, “House AI Task Force Wants to Marry Light Touch Regulations with Sector-Specific Policy.” Nextgov, 13 November 2024, https://www.nextgov.com/artificial-intelligence/2024/11/house-ai-task-force-wants-marry-light-touchregulations-sector-specific-policy/401023/ (Accessed: January 13, 2025).
ANU BRADFORD, The Brussels Effect: How the European Union Rules the World. Oxford University Press, 2020.
MARTIN EBERS, “Standardizing AI – The Case of the European Commission’s Proposal for an Artificial Intelligence Act.” In The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics, edited by Larry A. DiMatteo/ Michel Cannarsa/ Cristina Poncibò, pp. 1–22. Cambridge University Press, 2022.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence, [2024] OJ L189/1, Recital 176.
Executive Order No. 13859, 84 FR 3967 (11 February 2019).
NIST. Artificial Intelligence Risk Management Framework. National Institute of Standards and Technology, 2023.
Algorithmic Accountability Act of 2023, S. 2892, 118th Cong. (2023).
Presidential Digital Transformation Office. (2021). National Artificial Intelligence Strategy 2021-2025. Ministry of Industry and Technology. https://cbddo.gov.tr/uyzs
“Maintaining American Leadership in Artificial Intelligence,” 84 Fed. Reg. 8441 (14 February 2019), https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence (Accessed: January 13, 2025).
“Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” 85 Fed. Reg. (8 December 2020), https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government (Accessed: January 13, 2025).
“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Federal Register (1 November 2023), https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence (Accessed: January 13, 2025).
FOOTNOTES
1 Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press 2020).
2 Bradford, The Brussels Effect.
3 Martin Ebers, ‘Standardizing AI – The Case of the European Commission’s Proposal for an Artificial Intelligence Act’ in Larry A DiMatteo, Michel Cannarsa and Cristina Poncibò (eds), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (Cambridge University Press, 2022) p. 1-22.
4 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence [2024] OJ L189/1, Recital 176.
5 Executive Order No 13859, 84 FR 3967 (11 February 2019).
6 Maintaining American Leadership in Artificial Intelligence, 84 Fed Reg 8441 (14 February 2019) https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence accessed 13 January 2025.
7 Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, 85 Fed. Reg. [page number] (Dec. 8, 2020), https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government (accessed January 13, 2025).
8 ‘Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’, Federal Register (1 November 2023) https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence accessed 13 January 2025.
9 NIST. (2023). Artificial Intelligence Risk Management Framework. National Institute of Standards and Technology.
10 Alexandra Kelley, “House AI Task Force Wants to Marry Light Touch Regulations with Sector-Specific Policy” (Nextgov, November 13, 2024), https://www.nextgov.com/artificial-intelligence/2024/11/house-ai-taskforce-wants-marry-light-touch-regulations-sector-specific-policy/401023/ (accessed January 13, 2025).
11 Algorithmic Accountability Act of 2023, S. 2892, 118th Cong. (2023).
12 Presidential Digital Transformation Office. (2021). National Artificial Intelligence Strategy 2021-2025. Ministry of Industry and Technology. https://cbddo.gov.tr/uyzs