The EU AI Act: Objectives, Structure, and Key Provisions

Introduction

The European Union’s Artificial Intelligence Act (AI Act) is about to reach the final station of its approval and publication journey. The Act’s journey, from its preliminary studies and the European Commission’s proposal of its first draft to today, has taken more than five years, and it is still not expected to come into force before 2025.

The lengthy and intricate process of preparing the Act and its provisions may contribute to the sense of great accomplishment upon its completion. However, it also reinforces concerns about whether such a slow and meandering legislative process can keep pace with the rapidly evolving field of AI, which makes significant advances and breaks new ground daily across economic, social, and political spaces.

An EU act, whose provisions apply within all the Union’s member states, cannot be considered mere domestic legislation. The size and importance of the EU’s single market make the rules in force in that market a major factor shaping the development and production of AI applications. Producers of these applications worldwide will seek to comply with the Act’s requirements in order to keep their shares of this market, enlarge them, or enter the market if they do not yet have a share in it.

Because AI products are cross-border by nature, this impact on their evolution extends to what is marketed everywhere in the world. The EU is also party to commercial partnerships with most of the world’s countries, and these countries are therefore keen to align their legal frameworks with those of the EU.

The significance of the EU AI Act lies in its pioneering approach to establishing a regulatory framework for AI. This landmark legislation marks one of the first attempts to regulate this critical field, which holds immense influence on our present and future. The AI Act presents a valuable opportunity for learning, both from its successes and challenges.

This paper discusses the EU AI Act’s objectives, structure, and most important characteristics. It also deals with the general and special requirements applicable to AI systems. Finally, the paper sheds light on the Act’s most important provisions in addition to the institutions it establishes.

Note: This paper relies on the Act’s final draft as its main source. No significant modifications to this draft are expected, as the next stage (preparing the Act for publication) is limited to technical adjustments that improve and clarify the text. The article numbers cited in this paper follow the final draft and are provided for reference where needed.


The Act’s Objectives

The EU’s institutions involved in preparing and drafting the AI Act have clarified several key objectives that the Union seeks to achieve through this legislation. Among the most important sources in this regard are the White Paper issued by the European Commission in February 2020, and the executive briefing published by the European Parliament in June 2023 after its approval of the draft law.

The paper discusses the key objectives below and returns to them in subsequent sections with further detail as context requires.

Protecting Basic Rights and Safety Guarantees

The Act seeks to ensure that AI products comply with basic principles, including freedom from discrimination, limiting bias, protecting the right to privacy, and avoiding injustice. To achieve this, the Act includes provisions on data governance, transparency, and bias limitation among the requirements specified for high-risk AI systems. The Act also prohibits a number of AI system uses (Articles 5 and 11) to ensure such systems are not abused for discriminatory purposes or to manipulate users’ decisions.

Promoting Trust and Ethical Use

The Act seeks to raise public trust in AI technology and to promote the responsible development and deployment of its applications. It includes many guarantees and establishes mechanisms for transparency (Article 53), human supervision (Article 14), and accountability through a complaint mechanism (Article 27). Critics, however, emphasize the need for stricter constraints on data collection and use, and concerns have been raised about the ambiguity of the Act’s definition of high-risk systems.

Encouraging Innovation and Reinforcing Competitiveness

The Act seeks to balance regulatory constraints against the development of a friendly environment for AI technology in Europe. It adopts a risk-based assessment approach (Article 4) and exempts low-risk AI systems from the main constraints.

The Act also sets mechanisms for establishing what it calls “sandboxes”: controlled environments for testing AI systems before deploying them. These seek to promote innovation and to support small businesses and start-ups seeking to develop AI systems in compliance with the requirements set by the Act (Article 53).

On the other hand, there are concerns that bureaucratic barriers and the ambiguity of some provisions may burden small businesses and suppress rapid advances in AI technology.

Playing a Leading Global Role

Through the AI Act, the EU seeks to be first in establishing regulatory rules for the responsible development of AI technology (see the Act’s preamble), and thereby to influence any international standards that may be set later. Hence, the Union’s legislative institutions sought to make the Act as comprehensive as possible.

By approving the Act as one of the first AI regulations worldwide, these institutions would secure many first-mover advantages for Europe. However, the Act’s actual influence on international standards will depend on how successfully it is applied and enforced within the EU itself.


The Act’s Structure and Important Characteristics

Risk-Based Approach

The risk-based approach is the core characteristic of the EU AI Act. Its main purpose is to balance the Act’s different objectives, which might contradict each other.

The Act aspires to balance protection from the potential risks of AI system abuse against encouraging and supporting the innovation and rapid evolution of these systems. It also seeks to balance the different, even conflicting, interests of AI system developers, their users, society, and member states’ national security. To achieve this, the Act categorizes AI systems according to their potential risks and sets distinct requirements for each level.

Categories and Risk Levels

Article 8 of the Act defines risks as “the potentiality of an AI system to damage individuals’ safety, health, basic rights, or welfare.”

Article 5 defines risk levels as:

  • Unacceptable risk: systems that are dangerous by nature, with no feasible way of limiting their dangers (e.g., government social scoring systems).
  • High risk: AI systems that pose serious threats despite mitigating measures (e.g., facial recognition systems used for law enforcement).
  • Low risk: systems posing limited threats to individuals (e.g., customer-service chatbots).
  • Minimal risk: AI systems whose potential threats are negligible and require nothing beyond general transparency requirements (e.g., games and music recommendation algorithms).

Different Requirements for Each Risk Level

Article 47 of the Act states that only high-risk AI systems are subject to special requirements beyond the general ones concerning transparency. The Act sets out the special requirements for high-risk systems in its provisions, including data governance (Article 10), human supervision (Article 14), risk management (Article 15), transparency (Article 53), compliance assessment (Article 44), and post-market monitoring (Article 45). The Act also allows member states (Article 29) to impose additional domestic requirements for high-risk AI systems.

The Advantages of the Risk-Based Approach

This approach is, above all, proportionate. It adapts regulatory burdens to the actual risks of different AI systems, sparing low-risk applications unnecessary constraints. This can encourage responsible innovation by facilitating the development and deployment of low-risk AI systems. It also allows regulators to focus on high-risk systems, directing more resources and regulatory effort toward the applications that may cause the most damage.

Challenges and Necessary Considerations

The risk-based approach requires clear definitions to ensure consistent interpretations of what constitutes high risk across EU member states, helping avoid gaps in implementation that could create unfair competition conditions. The Act must also remain adaptable to new and evolving AI risks. Finally, this approach requires striking a precise balance between protecting individuals and promoting and supporting innovation, a very sensitive task that poses a lasting challenge to the Act’s effectiveness.

General Obligations

The AI Act sets a number of general obligations applicable to all AI systems regardless of their risk level. These obligations seek to ensure responsible development, deployment, and use of AI. The obligations include the following:

  • Transparency (Article 4a): All AI systems should be developed and used in a manner that allows their users to understand their purpose and risks. This includes information about the systems’ capabilities, limits, and decision-making procedures.
  • Data protection (Articles 8 & 9): Data processing for use in AI systems should abide by the established principles of data protection, such as accuracy, proportionality, fairness, and data minimization. This guarantees responsible practices for data collection, use, and storage.
  • Non-discrimination (Article 11): The Act prohibits the development, marketing, or use of AI systems that discriminate against individuals based on their protected characteristics (such as religious, political, or intellectual beliefs, race, or sexual orientation). This seeks to protect individuals from biased or unjust algorithmic outcomes.
  • Safety (Article 17): All AI systems should be designed and developed to ensure people’s safety, taking into account their intended uses and potential harms. This includes risk-limiting and damage-prevention procedures.
  • Prohibition of Unacceptable Risks (Article 5): The Act prohibits the development, marketing, and use of AI systems that are considered dangerous in themselves and whose risks cannot be acceptably limited.
  • Complaints and Access to Information Mechanisms (Articles 26 & 27): Individuals have the right to file complaints with authorities concerning compliance with the Act and to request information about high-risk AI systems that affect them.
  • Security and Confidentiality (Article 60): AI system developers and producers should take appropriate measures to ensure the security and confidentiality of the data used in these systems.
  • Traceability (Article 64): AI systems should be designed and developed to allow the traceability of their decisions and data processing activities.

High-Risk AI Systems Special Obligations

In addition to the general obligations, the AI Act sets special requirements and obligations for high-risk AI systems. These obligations aim to address the potential risks of these systems and ensure their responsible development, deployment, and use.

Below are the most important obligations. Some of them are related to the Act’s general characteristics, like data governance, transparency, and human supervision.

Data Governance Requirements

Article 10 of the AI Act requires the data used for training, validating, and testing high-risk AI systems to be relevant to their purpose, sufficiently representative, free of errors, and complete. The Act also requires measures to detect, prevent, and limit potential bias, defined as anything that may negatively affect users based on their characteristics (Article 10-f).

The Act also limits data collection and processing to what is necessary and proportionate to the intended purpose of the concerned AI system (Article 12). It further requires transparency about data sources, especially personal data, taking into consideration the original purpose of data collection (Article 10-aa).

Human Supervision

Article 14 requires high-risk AI systems to be designed and developed to facilitate effective human supervision. This includes providing human-machine interface tools for monitoring and intervention. The Act also requires that human supervisors be capable of understanding the systems’ capabilities and limits, monitoring their operations, intervening when needed, and being accountable for their decisions related to these systems.

Risk Management

Article 15 requires procedures for identifying, assessing, and limiting potential risks throughout a high-risk AI system’s life cycle, including development, deployment, and operation. The Article also requires providers of high-risk AI systems to inform the competent authorities of serious incidents and operational issues.

Transparency

Article 53 requires high-risk AI systems to provide more detailed information about their operational logic, decision-making processes, and potential risks, while considering technical limitations and potential abuse. The Act also requires providers of high-risk AI systems to prepare and publish transparency reports outlining the purpose of the concerned system, its intended use, potential risks, precautions taken, and performance assessment.

Other High-Risk Systems Obligations

Article 44 requires high-risk AI systems to undergo a compliance assessment by the competent authorities to ensure they meet the AI Act’s obligations. Article 45 requires providers of high-risk AI systems to monitor their performance after deployment and to inform the authorities of any incidents or issues. Finally, Article 49 requires developers of high-risk AI systems to prepare technical documentation of the system’s design, development, and operation; this documentation must be kept up to date and made available to the authorities when needed.


The Act’s Main Provisions

The AI Act includes many provisions covering various aspects of AI development, deployment, and use processes. These provisions may be categorized according to a set of important issues that the Act focuses on, including transparency requirements, human supervision, data governance, damage compensation mechanisms, and sandboxes.

Transparency Requirements

Transparency is a special priority when setting up regulatory frameworks for AI systems. Its absence is one of the main problems with the systems available for public use today.

The most prominent example is the increasing opacity of the large language models developed by OpenAI, on which ChatGPT is based. Since the company released the fourth generation of these models (GPT-4), it has declined to publish detailed technical information about the model’s capabilities, inner workings, and potential use risks.

This coincides with experts’ repeated calls for the developers of such systems to adhere to the principle of transparency. The lawmakers’ focus on imposing transparency requirements on AI systems is therefore one of the main positive aspects of this Act. The paper reviews below how the Act’s provisions enforce the principle of transparency.

General Transparency Requirements

The Act imposes a set of general transparency requirements that all AI systems must comply with, regardless of their risk level. Article 4-a lists transparency among the Act’s general rules, stating that AI systems should be developed and used in a manner that allows their users to understand their intended purpose and potential risks.

The Act also requires (Article 33) AI system developers to provide easily accessible and understandable information about the purposes of these systems, their intended use, and their potential risks for users. It requires (Article 54) clear and understandable information about the use of AI systems when users interact with applications that include them, explaining the extent of the systems’ role in these applications and the purpose of their use within them.

High-Risk Special Transparency Requirements

The Act imposes additional transparency requirements on high-risk AI systems. Article 13 requires such systems to be designed and developed so that their operation is sufficiently transparent for both their providers and users to understand their functions.

The Act (Article 53) requires high-risk AI systems to provide additional detailed information about their decision-making logic. Article 55 requires transparency reports for high-risk AI systems that explain the system’s purpose, intended use, risks, procedures for limiting them, and performance assessment criteria.

Article 10 of the Act prohibits intentional manipulation through AI systems that exploits users’ weaknesses or deceives them. The Act also (Article 52) restricts deepfakes, barring the creation or use of fake content without protective procedures and transparency. Finally, Article 83 requires that transparency reports published in compliance with the Act be easily accessible, understandable, and adapted to their target audience.

Human Supervision

The development and use of AI systems based on deep learning have accelerated. These technologies use programmatic structures such as artificial neural networks: systems that independently adjust their internal parameters based on their own assessment of how close they are to achieving their set objectives.

This means that once these systems are operational, the course of their operations develops in ways their programmers did not decide and sometimes cannot even predict. Large language models have in many cases acquired unpredictable capabilities and characteristics that are not proportional to changes in their scale. This has raised concerns that AI systems may develop characteristics that endanger their users while in operation, and has led many experts to demand effective human supervision of these systems.
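To make this self-adjustment concrete, the following is a minimal Python sketch, not drawn from the Act, of a toy system of this kind: a single simulated neuron repeatedly updates its own parameters based on how far its output is from its objective, so the final parameter values are learned rather than decided by the programmer. All names and values here are illustrative assumptions.

    def train(inputs, targets, steps=1000, lr=0.01):
        # Parameters the system adjusts on its own during training.
        w, b = 0.0, 0.0
        for _ in range(steps):
            for x, y in zip(inputs, targets):
                pred = w * x + b
                error = pred - y      # the system's own measure of distance to its objective
                w -= lr * error * x   # gradient-descent update of the weight
                b -= lr * error      # gradient-descent update of the bias
        return w, b

    # The programmer supplies only data and an objective; the learned values
    # of w and b emerge from training rather than from explicit coding.
    w, b = train([1, 2, 3, 4], [3, 5, 7, 9])
    print(f"learned: w={w:.2f}, b={b:.2f}")  # converges toward w=2, b=1

In real deep learning systems the same loop runs over billions of parameters, which is why the resulting behavior is not decided, and sometimes not predictable, by the developers.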

The attention the AI Act’s legislators give to this point is one of its most important features and strengths. The Act emphasizes human supervision of high-risk AI systems to ensure human intervention in important decision-making processes and to limit potential risks. The Act’s provisions related to human supervision are reviewed below.

  • Focus: The Act (Article 14) requires human supervision of high-risk AI systems to prevent and limit risks to users’ health, safety, and basic rights. The Act does not require human supervision of medium- or low-risk AI systems.
  • Design and Development: Article 14-1 requires high-risk AI systems to be designed and developed to facilitate effective human supervision, including the provision of tools and user interfaces for monitoring and for intervention when needed.
  • Procedures: Article 14-3 requires human supervision to be achievable through several procedures, including options embedded within the system itself. System developers should therefore build in human-intervention capabilities, such as functions to suspend or shut down the system and audit logs, before deployment. AI system providers may also define appropriate supervision procedures for users to perform, including training supervisors and preparing risk management protocols.
  • AI System Supervisors’ Responsibilities: Article 14-4 requires supervisors of AI systems to understand the system’s capabilities and limits, monitor its operations and intervene when needed, access data and information pertaining to its decisions, and be accountable for their decisions related to the system.

In addition to the above, Article 19 of the Act prohibits marketing or deploying high-risk AI systems unless they can be supervised by humans. Article 50 requires high-risk AI systems to keep records of the decisions and actions taken by the system and its human supervisor, facilitating the tracing of those decisions and actions and holding those responsible for them accountable.
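As a concrete illustration of what such embedded supervision options and records might look like in practice, below is a minimal, hypothetical Python sketch of a wrapper exposing a suspend function for the human supervisor and keeping an audit log of decisions. The Act prescribes no particular implementation; every name here is an illustrative assumption, not the Act’s terminology.

    from datetime import datetime, timezone

    class SupervisedAISystem:
        # Hypothetical wrapper adding human-intervention options: an audit
        # log of decisions and a suspend switch for the human supervisor.

        def __init__(self, model):
            self.model = model        # the underlying decision function
            self.suspended = False
            self.audit_log = []       # records of decisions and supervisor actions

        def decide(self, case):
            if self.suspended:
                raise RuntimeError("System suspended by human supervisor")
            decision = self.model(case)
            self.audit_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "actor": "system",
                "case": case,
                "decision": decision,
            })
            return decision

        def suspend(self, supervisor, reason):
            # The human supervisor can halt operation at any time.
            self.suspended = True
            self.audit_log.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "actor": supervisor,
                "action": "suspend",
                "reason": reason,
            })

    # Usage: wrap any decision function, then supervise it.
    system = SupervisedAISystem(lambda case: "approve" if case["score"] > 0.5 else "deny")
    print(system.decide({"score": 0.7}))
    system.suspend("supervisor-01", "unexpected output pattern")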

Data Governance

The relationship between AI systems and the collection, processing, and storage of data, especially personal data, is the subject of a wide-ranging debate. Data governance is therefore one of the major areas that any AI regulatory framework should seek to cover. The AI Act includes many provisions related to data governance, which the paper reviews below.

  • General Data Governance Principles: Article 8 sets general principles for data processing, including accuracy, fairness, proportionality, and minimization. Article 9 requires that data governance practices be proportionate and suited to the risk level of the concerned AI system.
  • Data Content and Representation: Article 10 requires datasets used for training, validating, and testing high-risk AI systems to be relevant, representative of the variations in their domain, as free of errors as possible, and complete. Article 10-f requires examining datasets for any potential bias that may negatively impact users based on their protected characteristics. Finally, Article 10-fa requires implementing proper measures for uncovering, preventing, and limiting biases found in datasets.
  • Data Collection and Processing: Article 10-a requires design choices concerning data collection to comply with legal frameworks and ethical principles. Article 10-aa requires transparency concerning data sources, especially personal data, taking the original purpose of data collection into consideration. Finally, Article 10-c highlights the necessity of documenting data preparation processes, such as labeling, cleaning, and aggregation, for proper use.
  • Data Minimization: Article 12 limits data collection and processing to what is necessary and proportionate to the intended use of AI systems. Article 59 also gives individuals the right to access the data used in high-risk AI systems that affect them, along with information about their processing and decision-making logic.

Additionally, Article 16 of the AI Act prohibits high-risk AI systems from using personal data for purposes other than those for which it was collected, except on a proper legal basis. Article 60 requires AI system providers to apply technical and organizational measures to ensure the security and integrity of the data used in high-risk AI systems. The Act (Article 61) also establishes time limits for data retention: data should not be kept longer than necessary for its purpose, taking all legal obligations and potential risks into consideration.

Damage Compensation Mechanisms

The AI Act does not establish special mechanisms for compensating individuals for damages caused by AI systems. However, it includes provisions that pave the way for existing legal frameworks to serve this purpose and that encourage the development of specialized mechanisms in the future. These provisions include the following:

  • Existing legal frameworks: Article 81 of the Act explains that EU regulations and member states’ domestic laws on criminal liability, product safety, and consumer protection remain enforceable and applicable to AI systems, allowing individuals to use these legal paths to seek compensation for damages. The Act also emphasizes (Article 82) the importance of ensuring access to justice for individuals negatively affected by high-risk AI systems, encouraging member states to provide effective and prompt legal avenues for those seeking compensation for such damages.
  • Right to information and to complain: Article 26 of the Act recognizes individuals’ right to be informed when exposed to high-risk AI systems, so that they can understand these systems’ potential impact on them and seek information about both the systems and their effects. Article 27 recognizes individuals’ right to file complaints with the competent supervisory authorities in their countries concerning alleged non-compliance with the Act by AI systems, which may lead to investigation of the allegations and enforcement of the Act as necessary.
  • Future developments: Article 80 calls on the European Commission to study the need for, and feasibility of, establishing specialized collective mechanisms through which the Union’s member states can compensate individuals negatively affected by AI systems. The Act gives the Commission three years from its entry into force to conduct this study.

These provisions do not make up for the Act’s failure to provide independent, specialized compensation mechanisms for damage related to AI systems. Reliance on existing legal frameworks is not enough, as they cannot deal with the complexities of AI-related cases.

One reason AI-specific regulation is needed in the first place is the inadequacy of existing legal paths and their inability to handle relevant cases. The Act should at least have added more explanation and direction on how existing legal paths and frameworks can deal with issues related to these products.

Delaying the decision to establish specialized mechanisms for compensating damages related to high-risk AI systems for three years leaves many gaps in the system the Act is supposed to establish, especially considering how quickly AI technology is evolving.

Additionally, the application of mechanisms for compensating damages resulting from AI systems will depend on each member state’s own interpretation under its own laws. In the absence of any guidance from the Act in this regard, unifying or harmonizing these mechanisms in the future will be more difficult, and individuals’ access to justice for damages they incur, especially across borders, may prove a difficult challenge.

Sandboxes

Sandboxes are controlled environments established for the trial operation of AI systems, allowing assessment of their compliance with the AI Act’s requirements. They let developers test their systems in practice before marketing or deploying them. Making these environments available at low cost to small and medium-sized businesses also encourages innovation and the evolution of AI technology.

It likewise allows fairer competition with large businesses, which have the resources to build similar testing environments of their own. The AI Act sets several provisions for establishing such sandboxes, as follows:

  • Establishing sandboxes: Article 53-1 obliges each member state to establish at least one sandbox within its borders, with a compliance period of two years from the Act’s entry into force. Article 53-2 allows two or more member states to establish cross-border sandboxes among themselves. The Act also allows (Article 53-3) the European Commission to provide technical support, guidance, and tools to facilitate establishing and operating sandboxes.
  • Purpose and participation: Article 53-4 states that the purpose of sandboxes is to promote and encourage innovation by addressing the uncertainties about compliance with the Act’s requirements that AI system developers may encounter. The Act (Article 53-5) requires that priority for participation in sandboxes be given to projects that raise such compliance uncertainties and to projects that contribute to evidence-based regulatory learning.
  • Supervision and guidance: Article 53-1e requires sandboxes to provide supervision and support to participants throughout the whole testing process, including the development, testing, and pre-marketing stages. Article 53-1f requires the authorities to give sandbox participants guidance on regulatory expectations and on how to comply with the Act’s requirements. Article 53-7 also requires sandboxes to implement measures and procedures for protecting and securing data during testing and experimentation.
  • Reports and assessment: Article 53-8 obliges member states to provide the European Commission with periodic reports on their sandbox activities, results, and lessons learned. Article 53-9 requires the Commission to assess these reports to ensure sandboxes are effective and to recommend potential improvements in their operation.

Institutions Established by the AI Act

European AI Board

The AI Act establishes a new body named the European Artificial Intelligence Board. It is supposed to play a crucial role in supporting the Act’s enforcement and directing its future evolution. The following are the main provisions related to the formation, competencies, and functions of the AI Board.

Formation and Responsibilities

Article 56 of the Act states that the AI Board is composed of representatives of each of the Union’s member states and representatives of the European Commission. The Board also includes observers from different stakeholder groups, including the AI industry, academia, and civil society. The Act (Article 56-2) defines the Board’s responsibilities, which are to aid the European Commission in various tasks, including:

  • Providing recommendations and expertise about AI Act enforcement (explaining the Act’s requirements, issuing guides, and dealing with emergent issues).
  • Developing best practices and guiding recommendations for member states to ensure consistent and effective enforcement of the Act.
  • Monitoring and assessment of the Act’s effectiveness and suggesting reviews when necessary.
  • Facilitating stakeholder involvement to ensure that diverse views inform and enrich decision- and policy-making processes.

The Board’s Specific Functions

The Act (Article 56-3) establishes two permanent sub-groups within the Board:

  • Market monitoring group: This group consists of the market monitoring bodies of the member states and enables cooperation through information exchange and the coordination of enforcement procedures.
  • AI systems compliance assessment group: This group consists of representatives of the member states’ compliance assessment bodies.

Board’s Operations and Transparency Guarantees

Article 56-4 determines the Board’s structure and operations, including rotating the Board’s chair among member states with the support of the European Commission. The Act (Article 56-5) confirms the importance of objectivity, neutrality, and confidentiality in the Board’s activities. It also (Article 56-6) requires the Commission to convene the Board’s meetings and prepare their agendas based on the Board’s tasks. Finally, Article 56-7 requires the Commission to provide administrative and analytical support to the Board.

AI Act Enforcement Monitoring Bodies in Member States

The EU AI Act obliges each of the Union’s member states to establish a specific body to supervise the Act’s enforcement within its borders, ensuring effective enforcement and tackling potential cases of non-compliance. The paper reviews below the provisions related to establishing these bodies and their responsibilities and powers.

Establishment and Responsibilities

Article 57-1 of the Act requires each state to designate at least one body responsible for supervising the application and enforcement of the AI Act. Article 57-2 states that such a body may designate additional bodies to help with specific enforcement tasks, such as market monitoring or technical investigations. Article 57-3 specifies the responsibilities of these bodies, which are:

  • Investigating cases of potential non-compliance with the Act, whether arising through complaints or indicators of violations.
  • Taking enforcement measures, including issuing disciplinary measures, applying fines, or imposing other penalties on non-compliant entities.
  • Monitoring the market for high-risk AI systems to ensure their compliance with the Act’s requirements.
  • Cooperating with other law enforcement agencies by exchanging information and collaborating in cross-border investigations and enforcement procedures among member states.

Independence and Power

Article 57-4 requires the AI Act enforcement bodies in member states to be independent of the industry and other stakeholders, helping ensure the objectivity of the investigations they carry out and the decisions they take. Article 57-5 grants these bodies investigative powers, including requesting information and performing inspection, search, and seizure operations. The Act (Article 57-6) allows these bodies to impose administrative penalties, including fines and suspending the marketing of non-compliant AI systems.

Transparency and Accountability

Article 57-7 requires member states to ensure the transparency of the AI Act’s enforcement procedures and to inform the public and stakeholders about ongoing investigations. Article 57-8 requires the supervising bodies to be accountable for their enforcement activities to their respective member states and to the European Commission. Finally, Article 57-9 requires the Commission to prepare annual reports assessing the effectiveness of enforcement across member states and identifying potential inconsistencies and areas for improvement.


Conclusion

This paper has sought to provide a reading of the European Union’s AI Act, reviewing the Act’s objectives, most important characteristics and provisions, and the monitoring and executive institutions it establishes.

The EU AI Act holds an advantageous position as one of the world’s first regulatory frameworks for AI. Passing the law took a long time and faced great challenges, foremost among them dealing with some member states’ conflicting interests and the pressure exercised by Big Tech companies in the AI industry. The Act was nonetheless eventually born in a largely solid and consistent form.

The Act is distinguished by strong transparency and ethics rules for developing and marketing AI products. It is also notable for keeping up with the rapid evolution of language models and generative AI that the industry has witnessed in the last few years. Among the Act’s advantages is its adoption of a risk-based approach, which allows it to strike an acceptable balance between protection on one hand and promoting innovation and evolution on the other. The Act devotes great care to basic rights and freedoms and has considerable flexibility that keeps it open to innovation.

On the other hand, the Act contains many complex formulations and requirements, which may make compliance difficult, especially for small businesses and start-ups. Some definitions also lack clarity, most importantly the definition of high-risk AI systems, on which the enforcement of many important provisions depends.

It is, however, too early to judge the Act’s effectiveness and success as a first experiment in regulatory frameworks for AI technology. Most of the Act’s provisions cannot be assessed for effectiveness and for positive or negative effects until a period of actual enforcement on the ground, which will not begin before 2025.