ABSTRACT
This paper examines the European Union’s Artificial Intelligence Act, focusing on its approach to risk management and the classification of AI systems. The paper concludes by emphasizing the importance of ongoing dialogue, adaptation, and global collaboration to ensure responsible and beneficial AI development in the years to come.
I. INTRODUCTION
The European Union has taken a proactive stance in regulating the use of artificial intelligence within its borders. This initiative culminated in the AI Act (Regulation (EU) 2024/1689), which aims to address a wide array of concerns associated with AI deployment, including ethical considerations, security, and accountability in the use of these systems1. The AI Act marks a significant step, as it seeks to establish a comprehensive regulatory framework that balances innovation with necessary safeguards, reflecting the EU’s commitment to human-centric AI development and its goal of setting a global standard in this rapidly evolving technological landscape.
The AI Act introduces clear definitions for various categories of AI systems and outlines criteria for risk classification, which is crucial for tailoring regulatory measures to the specific challenges presented by different technologies and applications and for promoting a balanced approach that fosters innovation without compromising ethical and safety concerns2. It represents a landmark moment in the global effort to regulate AI technologies. The Act’s comprehensive scope extends across numerous sectors, aiming to establish a harmonized framework for the development and deployment of AI systems within the EU3. Crucially, the Act recognizes the need for a balance between fostering innovation and mitigating potential risks. It has been argued that the Act promotes trustworthiness in AI systems by establishing ethical guidelines and by embedding principles of accountability and human oversight4. Furthermore, the Act acknowledges the diverse implications of AI across different domains, as illustrated by analyses of its impact on media and journalism5. By establishing clear guidelines and requirements, the AI Act aims to ensure that AI technologies are developed and utilized responsibly, benefiting society while upholding fundamental rights.
II. THE SCOPE AND DEFINITIONS OF THE AI ACT
Article 1 of Regulation (EU) 2024/1689 outlines the scope of the Artificial Intelligence Act, emphasizing a balance between fostering innovation and ensuring protection. It states the regulation’s objective of establishing a uniform legal framework for the development, market placement, and use of AI systems within the EU, aiming to facilitate the internal market while upholding European values. The provision also underscores the Act’s human-centric approach, stressing the importance of promoting trustworthy AI that ensures a high level of protection for fundamental rights, health, safety, and the environment. By explicitly linking the advancement of AI with the safeguarding of citizens’ rights and well-being, Article 1 lays the groundwork for a responsible and ethical approach to AI development and implementation within the European Union. The emphasis on a uniform legal framework not only signals a proactive regulatory stance but also addresses the competitive landscape facing European industries, particularly vis-à-vis counterparts in the United States and China, where regulatory approaches are often less stringent and more innovation-focused6. The AI Act’s provisions for categorizing AI systems by risk level are designed to prevent regulatory overload while still ensuring adequate oversight, which is crucial for fostering innovation in a competitive global environment7.
The AI Act defines the scope of artificial intelligence systems in a comprehensive manner, covering a wide range of technologies and techniques, from machine learning to logic-based and statistical methods8. This inclusive definition is critical, as it ensures that various AI applications, whether they are explicitly labeled as artificial intelligence or not, fall under the regulatory umbrella, thus enhancing oversight and accountability in the deployment of AI technologies within the EU9. Moreover, the act adheres to a “risk-based regulatory approach”, which seeks to address the diverse challenges posed by AI while promoting technological innovation and avoiding excessive regulatory burdens on developers and users alike. This approach aims to strike an equilibrium between mitigating potential risks associated with high-risk AI applications and fostering a conducive environment for the growth of new technologies, ultimately positioning the EU as a leader in ethical AI governance on the global stage10.
One of the key aspects of the AI Act is its emphasis on involving stakeholders throughout the development and implementation phases, ensuring that a wide array of voices contribute to shaping AI practices. This collaborative approach not only enhances transparency and trust in AI systems but also addresses the existing gaps in legal language and engineering practice that have historically hindered the seamless integration of human rights principles into AI development11. Furthermore, the proposed regulations aim to establish mechanisms that allow all stakeholders to influence AI development, monitor its performance, and seek redress in instances of harm, which is essential to closing the existing gaps in accountability and oversight that current legal frameworks often overlook. This collaborative approach aligns with the EU’s broader commitment to protecting fundamental rights in the digital age, making it imperative for the European Parliament to address any identified loopholes during the legislative process.
Moreover, the AI Act is intended to be transformative legislation that shifts how both developers and users perceive the risks associated with AI, ensuring that such technologies are not only innovative but also reliable and ethically aligned with societal values. This reinforces the need for a comprehensive and coherent regulatory framework that can serve as a global model for AI governance, since the Act seeks to establish a balanced approach that fosters innovation without compromising ethical and safety concerns. In this context, the Act’s focus on risk management is particularly vital: it delineates the responsibilities of both developers and users in mitigating potential harms while promoting a culture of accountability and continuous improvement within the AI ecosystem, thereby positioning the EU as a trailblazer in global AI governance. The Act not only emphasizes risk management but also introduces specific requirements for high-risk AI systems, mandating rigorous assessments before they are deployed, which can serve as a model for other jurisdictions seeking to balance innovation with safety in AI regulation12. Robust risk management is likewise essential to ensure that AI technologies do not exacerbate existing societal issues or introduce new forms of discrimination, reinforcing the need for proportionality and granularity in regulatory measures to address the varied risks posed by AI.
III. RISK MANAGEMENT IN THE ACT
At the core of the AI Act lies the concept of risk management, which is particularly crucial given the wide-ranging implications of AI systems. The Act establishes a structured framework for assessing and mitigating risks associated with AI applications, mandating developers of high-risk AI systems to implement robust risk management processes that include ongoing monitoring and evaluation to ensure compliance with established safety and ethical guidelines13. This requirement is designed not only to enhance the accountability of developers but also to protect users from potential harms that may arise from AI deployment, reflecting a shift toward proactive rather than reactive oversight in the regulation of these technologies. Furthermore, this risk management framework underscores the necessity for clear compliance processes, which are vital for bridging the gap between legal expectations and the technical realities faced by developers in the AI field, thereby promoting a more effective and harmonized regulatory environment across the EU.
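To make this abstract requirement more concrete, the short sketch below illustrates, in Python, one way a provider might internally document an iterative identify–evaluate–mitigate–review loop of the kind the Act envisages for high-risk systems. It is a minimal illustration only: the class names, the severity and likelihood scales, and the acceptability threshold are the author’s assumptions and do not correspond to terminology or thresholds in the Regulation itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Risk:
    """A single identified risk with a coarse severity/likelihood estimate (illustrative scales)."""
    description: str
    severity: int        # 1 (negligible) .. 5 (critical) - author's assumption, not the Act's scale
    likelihood: int      # 1 (rare) .. 5 (frequent) - author's assumption, not the Act's scale
    mitigation: str = "none documented"

    def score(self) -> int:
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    """Minimal, append-only register supporting documentation and periodic re-evaluation."""
    system_name: str
    risks: list[Risk] = field(default_factory=list)
    log: list[str] = field(default_factory=list)

    def _record(self, event: str) -> None:
        self.log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)
        self._record(f"identified: {risk.description} (score {risk.score()})")

    def mitigate(self, description: str, measure: str) -> None:
        for risk in self.risks:
            if risk.description == description:
                risk.mitigation = measure
                self._record(f"mitigated: {description} -> {measure}")

    def review(self, threshold: int = 12) -> list[Risk]:
        """Return unmitigated risks still above an (assumed) acceptability threshold."""
        open_risks = [r for r in self.risks
                      if r.score() >= threshold and r.mitigation == "none documented"]
        self._record(f"review: {len(open_risks)} risk(s) above threshold {threshold}")
        return open_risks

# Hypothetical usage for an imaginary system
register = RiskRegister("hypothetical triage-support system")
register.identify(Risk("biased recommendations for under-represented patients", severity=4, likelihood=3))
register.mitigate("biased recommendations for under-represented patients",
                  "re-balanced training data and human review")
print(register.review())
print("\n".join(register.log))
```

The append-only log in this sketch mirrors the Act’s expectation that risk decisions for high-risk systems be documented and revisited throughout the system’s lifecycle rather than assessed once at release.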
The AI Act’s emphasis on risk management is particularly crucial given the rapid pace of technological progress in the AI domain and the inherent complexity of the AI lifecycle and data governance processes, in which multiple entities are often engaged. This complexity necessitates a comprehensive understanding of the potential risks each AI system poses, while also ensuring that compliance mechanisms can accommodate the dynamic nature of AI technology and the diverse ecosystem of stakeholders involved in its development and deployment14. It also requires accurate documentation and transparency regarding risk assessments at each stage of AI development, as well as standards that facilitate compliance and accountability across the many applications of AI, addressing the pressing need for regulatory coherence as the sector evolves. To address these multifaceted challenges, the AI Act advocates a rigorous framework encompassing the identification, documentation, and continuous evaluation of risks associated with AI systems, underscoring the importance of collaboration among stakeholders to foster a culture of accountability and trust in the AI ecosystem15. The framework likewise reflects a risk-based approach responsive to the characteristics of different AI applications: high-risk systems are subject to more stringent requirements, while lower-risk scenarios retain flexibility, striking a balance between innovation and protection. This nuanced approach aims to protect individuals and society from potential harms while enabling innovation where risks are manageable, thus promoting an inclusive and dynamic environment for AI development and deployment that aligns with broader societal interests and human rights principles. In light of these considerations, it is paramount that the AI Act evolve toward context-specific assessments rather than broad, undifferentiated application, since the magnitude of risk can be misestimated when evaluating general-purpose AI systems with varied and unpredictable capabilities; a more granular and proportional approach may therefore prove essential to realizing the full potential of AI while mitigating its inherent risks16.
Examples of unacceptable AI-related risks include autonomous weapons systems, such as drones or robots equipped with facial recognition capabilities that could target individuals based on their identity, posing a significant threat to human life; government-deployed social scoring systems that determine access to education, healthcare, or employment based on a citizen’s online behavior, which can lead to discrimination and oppression; and deepfakes used for political manipulation, such as fake news or propaganda designed to influence elections or destabilize governments. These examples highlight the potential for misuse and abuse of such technologies and illustrate the urgent need for stringent regulations and ethical guidelines to ensure that AI systems are developed and deployed in ways that uphold fundamental human rights and societal values. A regulatory framework that prioritizes context-specific risk assessments and proportional safeguards is essential to address these concerns17.

High-risk AI applications include self-driving cars that malfunction due to software errors or environmental factors, leading to accidents and injuries; AI-powered medical diagnostic systems that provide inaccurate diagnoses, resulting in misdiagnosis or inappropriate treatment with serious health consequences; and algorithmic hiring systems that discriminate against certain groups of job applicants based on their demographic information or online activity, perpetuating social inequalities. Regulators have responded to growing concerns about these high-risk applications by advocating a comprehensive understanding of risk profiles and mitigation strategies, aimed at ensuring that such systems are both reliable and equitable. This is crucial for fostering public trust and acceptance of AI technologies as they become increasingly ubiquitous in daily life18.

AI applications with limited risks include chatbots that provide basic customer service for simple queries but cannot handle complex or emotionally nuanced requests19. Similarly, recommendation systems that suggest products or services based on past purchases may be less effective than human recommendations because they fail to account for individual preferences or contextual factors, and spam filters that mistakenly flag legitimate emails, leading to missed opportunities or minor inconveniences, represent only a minor risk. This distinction between high-risk and limited-risk AI applications underscores the need for tailored regulatory measures that address the unique challenges posed by each category, ensuring that regulatory efforts are proportionate to the technology’s potential impact on individuals and society20.
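To illustrate the tiered logic described above, the following sketch encodes the four commonly cited risk tiers of the Act’s risk-based approach and assigns a short system description to one of them by simple keyword matching. It is purely didactic: the keyword rules, function names, and example inputs are the author’s assumptions, whereas the actual legal tests turn on the prohibitions in Article 5 and the use cases listed in Annex III rather than on any textual matching.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the AI Act's risk-based scheme."""
    UNACCEPTABLE = "prohibited practice (e.g. social scoring by public authorities)"
    HIGH = "high-risk system subject to conformity assessment and ongoing monitoring"
    LIMITED = "limited risk, mainly transparency obligations"
    MINIMAL = "minimal risk, no specific obligations under the Act"

# Hypothetical keyword-based mapping; the real legal classification in the
# Regulation depends on Article 5 and Annex III, not on keywords.
_TIER_KEYWORDS = {
    RiskTier.UNACCEPTABLE: {"social scoring", "untargeted facial recognition"},
    RiskTier.HIGH: {"medical diagnosis", "hiring", "credit scoring", "self-driving"},
    RiskTier.LIMITED: {"chatbot", "recommendation", "spam filter"},
}

def classify(system_description: str) -> RiskTier:
    """Return the most restrictive tier whose keywords appear in the description."""
    text = system_description.lower()
    for tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH, RiskTier.LIMITED):
        if any(keyword in text for keyword in _TIER_KEYWORDS[tier]):
            return tier
    return RiskTier.MINIMAL

if __name__ == "__main__":
    print(classify("Algorithmic hiring tool screening job applicants"))   # RiskTier.HIGH
    print(classify("Customer-service chatbot answering simple queries"))  # RiskTier.LIMITED
    print(classify("Photo organiser for a personal device"))              # RiskTier.MINIMAL
```

The deliberately crude matching underlines the paper’s earlier point: proportionate regulation requires context-specific assessment of each system, not a mechanical label attached to a technology category.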
To sum up, risk management in the AI Act focuses on addressing the multifaceted challenges posed by the integration of AI technologies across various domains. The Act mandates that organizations involved in the development and deployment of high-risk AI systems implement comprehensive risk management frameworks that encompass not only compliance with regulatory standards but also proactive measures to mitigate potential adverse impacts, fostering an environment in which ethical considerations are intrinsic to the design and implementation of these systems. This emphasis on proactive risk management aligns with the increasing recognition of AI as a dual-use technology, whose potential benefits must be carefully weighed against the risks it poses to society and individual rights, necessitating robust governance mechanisms that can adapt to the evolving landscape of AI innovation21.
IV. CONCLUSION
The European Union’s AI Act represents a significant step forward in the global effort to regulate AI technologies in a manner that promotes innovation, protects fundamental rights, and fosters public trust. This regulatory framework not only sets a precedent for how AI systems should be responsibly designed and deployed but also reflects a commitment to ensuring that advancements in AI do not come at the expense of ethical principles and human rights, illustrating the EU’s intent to position itself as a leader in the field of AI governance. In pursuit of this goal, the EU must carefully navigate the tension between fostering innovation and ensuring robust protections against the potential harms of AI, which calls for ongoing dialogue and adaptation as the technology evolves and its societal implications become increasingly complex.
To achieve this balance, the EU must remain agile in its regulatory approach, adapting to the fast-evolving landscape of AI technologies while engaging with global partners to promote inclusive and effective regulatory practices that can mitigate risks without stifling innovation, ultimately ensuring that the transformative potential of AI is harnessed in a manner that benefits all of society. In this regard, the EU’s regulatory strategy is not only timely but essential, as it endeavors to set a high standard for AI governance that other regions might emulate, thereby influencing global norms and practices in AI regulation, which could lead to a more interconnected and responsible AI ecosystem worldwide. Such an approach may ultimately inspire other jurisdictions to adopt similar frameworks that prioritize safety and ethical considerations, thereby enhancing global collaboration on AI governance and ensuring that the potential benefits of AI are realized without compromising on fundamental rights or ethical values22. Moreover, as the international landscape for AI governance continues to evolve, it is crucial for the EU to engage in meaningful dialogue with other nations, balancing its regulatory ambitions with an awareness of varying commitments to innovation and risk management, thereby fostering a cooperative atmosphere that can facilitate the development of a truly harmonized global framework for AI regulation23.
BIBLIOGRAPHY
BARRETT AM and others, ‘Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks’ (arXiv, 23 February 2023).
BISCONTI P and others, ‘Maximizing Team Synergy in AI-Related Interdisciplinary Groups: An Interdisciplinary-by-Design Iterative Methodology’ (2023) 38 AI & SOCIETY 1443.
BOGUCKI A and others, ‘The AI Act and Emerging EU Digital Acquis: Overlaps, Gaps and Inconsistencies’ (2022).
CHENG L/ VARSHNEY KR/ LIU H, ‘Socially Responsible AI Algorithms: Issues, Purposes, and Challenges’ (arXiv, 21 August 2021).
CIHON P/ SCHUETT J/ BAUM SD, ‘Corporate Governance of Artificial Intelligence in the Public Interest’ (2021) 12 Information 275.
DIAZ-GRANADOS J, ‘Potential Legal Categories in the Sharing Economy’s Platform Operator-User-Provider Model: A Taxonomic and Positive Approach -Part 2’ (2022) 62 Jurimetrics 241.
GASSER U, ‘An EU Landmark for AI Governance’ (2023) 380 Science 1203.
GOLBIN I and others, ‘Responsible AI: A Primer for the Legal Community’, 2020 IEEE International Conference on Big Data (Big Data) (2020).
GOLPAYEGANI D/ PANDIT HJ/ LEWIS D, ‘AIRO: An Ontology for Representing AI Risks Based on the Proposed EU AI Act and ISO Risk Management Standards’ in ANASTASIA DIMOU and others (eds), Studies on the Semantic Web (IOS Press 2022).
HANIF H and others, ‘Navigating the EU AI Act Maze Using a Decision-Tree Approach’ [2024] ACM Journal on Responsible Computing 3677174.
HELBERGER N/ DIAKOPOULOS N, ‘The European AI Act and How It Matters for Research into AI in Media and Journalism’ (2023) 11 Digital Journalism 1751.
LAUX J/ WACHTER S/ MITTELSTADT B, ‘Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk’ (2024) 18 Regulation & Governance 3.
MULEY A and others, ‘Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework’ (2023) 21 Asian Journal of Medicine and Health 276.
PARK S, ‘Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework’ (arXiv, 15 July 2024).
PETKOVIC D, ‘It Is Not “Accuracy vs. Explainability” -- We Need Both for Trustworthy AI Systems’ (arXiv, 23 December 2022).
SALGADO-CRIADO J/ FERNÁNDEZ-ALLER C, ‘A Wide Human-Rights Approach to Artificial Intelligence Regulation in Europe’ (2021) 40 IEEE Technology and Society Magazine 55.
SCHUETT J, ‘Risk Management in the Artificial Intelligence Act’ [2023] European Journal of Risk Regulation 1.
SHI Q, ‘The Empirical Analysis of the EU Artificial Intelligence Act’ (2023) 4 Frontiers in Computing and Intelligent Systems 109.
TARTARO A/ SMITH AL/ SHAW P, ‘Assessing the Impact of Regulations and Standards on Innovation in the Field of AI’ (arXiv, 8 February 2023).
WACHTER S, ‘Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond’ (2024) 26 Yale Journal of Law and Technology accessed 22 August 2024.
YEE D-H/ YOU Y-Y, ‘The Impact of Awareness of New Artificial Intelligence Technologies on Policy Governance on Risk’ (2020) 11 Research in World Economy 152.
ZHAO J/ GÓMEZ FARIÑAS B, ‘Artificial Intelligence and Sustainable Decisions’ (2023) 24 European Business Organization Law Review 1.
ZOWGHI D/ DA RIMINI F, ‘Diversity and Inclusion in Artificial Intelligence’ (22 May 2023).
FOOTNOTES
1 Peter Cihon/ Jonas Schuett/ Seth D Baum, ‘Corporate Governance of Artificial Intelligence in the Public Interest’ (2021) 12 Information 275.
2 Quan Shi, ‘The Empirical Analysis of the EU Artificial Intelligence Act’ (2023) 4 Frontiers in Computing and Intelligent Systems 109; Jonas Schuett, ‘Risk Management in the Artificial Intelligence Act’ [2023] European Journal of Risk Regulation 1; Sangchul Park, ‘Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework’ (arXiv, 15 July 2024).
3 Hilmy Hanif and others, ‘Navigating the EU AI Act Maze Using a Decision-Tree Approach’ [2024] ACM Journal on Responsible Computing 3677174; Artur Bogucki and others, ‘The AI Act and Emerging EU Digital Acquis: Overlaps, Gaps and Inconsistencies’ (2022).
4 Johann Laux/ Sandra Wachter/ Brent Mittelstadt, ‘Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and Acceptability of Risk’ (2024) 18 Regulation & Governance 3; Wachter, ‘Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond’ (2024) 26 Yale Journal of Law and Technology accessed 22 August 2024.
5 Natali Helberger/ Nicholas Diakopoulos, ‘The European AI Act and How It Matters for Research into AI in Media and Journalism’ (2023) 11 Digital Journalism 1751.
6 Shi (n 2).
7 ibid; Alessio Tartaro/ Adam Leon Smith/ Patricia Shaw, ‘Assessing the Impact of Regulations and Standards on Innovation in the Field of AI’ (arXiv, 8 February 2023).
8 Park (n 2).
9 Shi (n 2).
10 Schuett (n 2).
11 Jesús Salgado-Criado/ Celia Fernández-Aller, ‘A Wide Human-Rights Approach to Artificial Intelligence Regulation in Europe’ (2021) 40 IEEE Technology and Society Magazine 55.
12 Juan Diaz-Granados, ‘Potential Legal Categories in the Sharing Economy’s Platform Operator-User-Provider Model: A Taxonomic and Positive Approach -Part 2’ (2022) 62 Jurimetrics 241.
13 Schuett (n 2).
14 ibid.
15 Diaz-Granados (n 12); Salgado-Criado/ Fernández-Aller (n 11); Delaram Golpayegani/ Harshvardhan J Pandit/ Dave Lewis, ‘AIRO: An Ontology for Representing AI Risks Based on the Proposed EU AI Act and ISO Risk Management Standards’ in Anastasia Dimou and others (eds), Studies on the Semantic Web (IOS Press 2022).
16 Jingchen Zhao/ Beatriz Gómez Fariñas, ‘Artificial Intelligence and Sustainable Decisions’ (2023) 24 European Business Organization Law Review 1.
17 Ilana Golbin and others, ‘Responsible AI: A Primer for the Legal Community’, 2020 IEEE International Conference on Big Data (Big Data) (2020); Park (n 2); Anthony M Barrett and others, ‘Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks’ (arXiv, 23 February 2023); Didar Zowghi/ Francesca da Rimini, ‘Diversity and Inclusion in Artificial Intelligence’ (22 May 2023).
18 Golbin and others (n 17); D Petkovic, ‘It Is Not “Accuracy vs. Explainability” We Need Both for Trustworthy AI Systems’ (arXiv, 23 December 2022); Barrett and others (n 17); Lu Cheng/ Kush R Varshney/ Huan Liu, ‘Socially Responsible AI Algorithms: Issues, Purposes, and Challenges’ (arXiv, 21 August 2021).
19 Schuett (n 2).
20 Apoorva Muley and others, ‘Risk of AI in Healthcare: A Comprehensive Literature Review and Study Framework’ (2023) 21 Asian Journal of Medicine and Health 276; Petkovic (n 18).
21 Golpayegani/ Pandit/ Lewis (n 15); Golbin and others (n 17); Do-Hyung Yee/ Yen-Yoo You, ‘The Impact of Awareness of New Artificial Intelligence Technologies on Policy Governance on Risk’ (2020) 11 Research in World Economy 152.
22 Urs Gasser, ‘An EU Landmark for AI Governance’ (2023) 380 Science 1203; Zhao/ Gómez Fariñas (n 16).
23 Piercosma Bisconti and others, ‘Maximizing Team Synergy in AI-Related Interdisciplinary Groups: An Interdisciplinary-by-Design Iterative Methodology’ (2023) 38 AI & SOCIETY 1443.