Regulating AI: Approaches to Ensuring Safe Use of the Technology

Introduction

The year 2023 witnessed the rise of AI technology to the forefront of global priorities. This extraordinary interest was sparked by a pivotal event: the launch of ChatGPT, a chatbot application developed by OpenAI. The company made ChatGPT freely accessible to the public via the Internet, making it available in the vast majority of countries worldwide.

AI applications had been emerging throughout the preceding years, drawing varying degrees of public attention. Generative AI and large language model applications, which big tech companies competed to develop and make available to the public on a large scale, marked the beginning of a new stage in the history of AI use. This stage carries many promises and ambitions, as well as fears, concerns, and threats.

As in other domains of critical public and private concern, the rapid advancement of AI technology and industry has sparked intense debate over the need for regulatory frameworks. This discussion, reminiscent of the debates surrounding the regulation of internet technology, which likewise transcends national borders, is multifaceted and involves a spectrum of viewpoints. Some advocate stringent regulations, while others oppose any external regulatory frameworks, believing that any rules should be developed solely by the entities directly involved in developing AI technology.

Regardless of the differing views on regulating AI technology and industry, various forms of such regulation have already emerged. The most prominent is the European Union's AI Act, first proposed in 2021, formally adopted in March 2024, and expected to enter into force soon. While the first AI regulations have already emerged and begun to be implemented, their coming of age is unlikely to take place anytime soon. Thus, the AI regulation scene is expected to become more complex over time.

This paper seeks to provide a simplified picture of the complex landscape of efforts to regulate AI technology and industry. It asks whether regulating AI is a necessity, who should be responsible for setting regulatory frameworks for AI, and what aspects such frameworks should address. The paper also discusses the potential effects of regulating AI on human rights. Finally, it proposes some indices for assessing regulatory frameworks for AI technology and industry.

Is Regulating AI a Necessity?

Reasons for Regulation

There is a wide range of arguments supporting the necessity of setting regulatory frameworks for AI technology and industry. Those calling for such frameworks belong to all stakeholder groups, including AI industry leaders, legislators, government officials, politicians, specialized academics, and representatives of civil society. They approach the need for regulation with different views, but a few main reasons are emphasized repeatedly and are discussed below:

  • Importance: AI technology is essential today and will undoubtedly be even more important in the future. Many of those calling for regulating AI emphasize that this technology is too important and consequential to be left unregulated.
  • Broad and deep impact: AI technology’s current and future expected impacts are so wide-ranging that they include every individual, organization, and entity in our world. They are also so deep that they touch every aspect of individuals’ daily lives and the details of every organization’s work.
  • Dangers and threats: Given how important AI technology is and how wide and deep its current and future impacts on human lives are, any dangers or threats arising from its development are necessarily of great magnitude. Ignoring such threats is not an option; setting binding regulatory frameworks is the only way to obtain some guarantees for dealing with threats of this scale.
  • Opportunities and potentials: Positive effects can be expected from both the debates about regulating AI technology and the actual efforts and initiatives seeking to set regulations for it. These effects may open the door to several opportunities, such as raising awareness of AI technology among non-specialists, including decision-makers, politicians, and the public. They may also help orient research and development ethics toward the most critical concerns while promoting the importance and growth of this field within the industry. Finally, the debates about AI regulation encourage self-regulation initiatives, in which industry representatives seek to set their own regulatory frameworks before frameworks are forced on them; this gives them an incentive to offer serious initiatives and provide satisfactory guarantees for their enforcement.

Regulation Concerns

While potential dangers and abuse concerns push many to call for the speedy regulation of AI technology, many others argue against it. Below are some of their arguments.

  • Obstructing innovation and progress: Over-regulation can impose unnecessary burdens and slow down industry and research operations, thus obstructing innovation in the AI field and delaying valuable applications. The rate at which AI evolves makes it challenging to set adaptable regulatory frameworks that do not quickly expire. Rigid regulation may age rapidly and be rendered obsolete by technological advancement, becoming an impediment to dealing with actual issues.
  • Difficulties of setting effective regulation: The complexity of AI systems and the variety of contexts in which they may be used make setting enforceable regulatory frameworks that cover all possible cases a very difficult and failure-prone task. The result may be unstable, faulty, and inconsistent regulatory frameworks that are not practically enforceable. Additionally, determining the appropriate level of intervention in the research, development, and use of AI systems is challenging: it requires striking a delicate balance between preventing harm from AI and avoiding the obstruction of innovation. Excessive regulation can hinder progress, while inadequate regulation can leave gaps in safeguarding against risks.
  • Government overreach and harm to competitiveness: Governments may lack the expertise or flexibility required to set regulatory frameworks for a field evolving as quickly as AI technology. This may lead to inefficient regulatory frameworks that obstruct competitiveness and innovation. Moreover, highly complicated regulatory frameworks may create biases against smaller entities and startups, serving the interests of large, established entities that can afford to adapt to complicated regulations and thus comply with them more easily.
  • Availability of alternatives: Supporters of self-regulation argue that industry-led initiatives and ethical guidelines can efficiently handle potential dangers without government intervention. Investment in education and awareness-raising campaigns can help individuals and organizations better understand the potential dangers related to AI and thus mitigate damages.

Benefits of Regulation

Beyond the reasons for and concerns about regulating AI technology, one should also consider the positive benefits that can be expected from the regulation process. Some of these are discussed below:

  • Responsibility and accountability: Regulatory frameworks for AI technology and industry may clearly determine the responsibilities of the different parties involved in this technology’s development processes. According to these responsibilities, they may also set the level and mechanisms of accountability in case any party falls short of carrying out its responsibilities.
  • Transparency guarantees: Any binding regulatory framework will require a suitable level of transparency to be observed in AI technology development processes. Such transparency may close many of the gaps that allow violations, foremost among them violations of the right to privacy related to the use of personal data in training AI models.
  • Guarantees for multi-stakeholder rights: Regulatory frameworks can play a crucial role in clearly defining the rights of various parties involved in AI technology development. These frameworks ensure that the interests of all parties are considered and balanced rather than relying on an existing power dynamic that often favors certain parties with greater power and influence.

Regulation Checks

Developing regulatory frameworks for AI technology and industry is a highly complex process. It is unrealistic to anticipate prompt completion or optimal maturity of this process. Moreover, it is not feasible to assume that a single approach will dominate the establishment and execution of regulatory frameworks throughout the entire process.

All the approaches discussed in this paper are expected to contribute to the whole system, which will grow over time to regulate AI technology and industry. This system will include domestic legislation, international legal instruments, and self-regulation initiatives presented and implemented by the industry itself. It will also include ethical charters and protocols for best practices.

In all cases, there are considerations that need to be observed when preparing any of these regulatory frameworks. These considerations should guarantee maximizing positive outcomes and limiting any negative consequences. Below are some of these considerations:

  • Flexibility: Regulatory frameworks for the AI industry should be flexible enough to remain effective when dealing with threats while having no negative effects on the field's evolution. Such flexibility can be achieved by creating mechanisms that apply regulatory rules in proportion to the changing conditions and cases they address.
  • Limit of intervention: Given the nature of the AI industry, any regulatory framework should be exceptionally sensitive in balancing actual needs against concerns about obstructing development. Accordingly, such a framework should precisely draw the boundaries of its own intervention in each aspect of AI technology development. Levels of intervention should also be set with the nature of each aspect considered separately, guaranteeing that intervention is always sufficient to fulfill the need without exceeding it or harming the continuity of development.
  • Periodic revision and continuous updating: AI technology is a fast-evolving field. There is always the potential for new threats that have not been considered before, or for existing methods and mechanisms to lose their effectiveness. Accordingly, any regulatory framework for AI should include mechanisms for periodic revision, as well as emergency responses to any new development that must be dealt with immediately. Such mechanisms may include establishing permanent monitoring bodies or instituting periodic partial or comprehensive revision processes.

Who Should Regulate AI?

Determining who should be in charge of setting binding regulatory or guiding frameworks for AI is a complicated issue. Many parties have interests tied to the development of AI technology. All of them hold expectations of positive outcomes from developing this technology and its different uses, mixed with concerns that such development might threaten their interests.

While these parties share common interests, there are large areas of conflicting interests that do not allow enough mutual trust. In this competitive landscape, various parties strive to shape regulatory frameworks that align with their own interests, resulting in competing efforts to establish favorable regulations.

This race is further fueled by the fact that the AI field remains largely free of regulatory frameworks, with one exception: the EU AI Act. Consequently, any regulatory framework an influential party establishes can set limits and draw trajectories that other parties will find difficult to contradict explicitly when drawing up their own regulatory frameworks.

Given the multiplicity of stakeholders and the various factors related to AI's nature, each stakeholder represents a different approach to setting regulatory frameworks for AI. No single party can be expected to set regulatory frameworks that cover all the required aspects in a way that commands complete agreement. What can be expected instead is that several regulatory frameworks will be in force at different levels.

Thus, the different approaches of different parties discussed below are not mutually exclusive alternatives from which a single winner will emerge. Instead, each will influence the AI regulation scene in the coming years.

Domestic Legislation

Pros

  • Legitimacy and enforceability: States have legitimate authority, recognized both internally and by other states, to legislate and enforce laws within their borders over their citizens and residents, as well as over legal persons such as companies and organizations. Accordingly, any regulatory framework for AI technology set by a country, or by a political entity like the EU, within its territory has legitimacy and is guaranteed to be enforced, which is not the case with other approaches.
  • Coordination and harmonization: Governments of different states can cooperate to set international standards that coordinate their legislation. This ensures consistency among laws and avoids conflicts between the requirements for developing, deploying, and using AI from one country to another. It also helps prevent injustice and unequal opportunities among technology developers and users across different countries.

Cons

  • Slowness and bureaucracy: AI is a rapidly changing field, and the slow processes of preparing, passing, and enforcing legislation may hinder innovation and evolution. Tying some aspects of AI operations to legislation in anticipation of potential effects, and subjecting some development processes to legally mandated bureaucratic procedures, can lead to significant delays. This can hinder the development of new AI technologies and applications, as well as the ability of businesses and organizations to adopt and use AI effectively.
  • Lack of technical expertise: Many governments around the world may lack the specialized knowledge as well as the material or human resources required to set and enforce laws for a highly specialized and complicated field like AI. This may create many issues, including potential gaps in legislation, failure to cover some crucial aspects adequately, or restriction of harmless activities while more dangerous ones go unregulated.

International Organizations and Bodies

Pros

  • Broad coverage and consistency: Compared to the limited reach of domestic legislation, the rules and standards set by international bodies like the UN or influential international organizations like the OECD can potentially be enforced globally. The chances of achieving this are higher if such rules and standards take the form of binding international treaties, which have the status of law in all states party to them. This allows consistency, harmony, and agreement among the rules governing AI development and use across all or most countries.
  • Expertise and resources: International regulatory frameworks can benefit from multiple countries’ collective expertise and resources rather than relying solely on a single country. By distributing responsibilities and involving diverse technical specialists from different countries, the processes of preparing and implementing regulations can draw upon a broader range of knowledge, experience, and resources, thereby enhancing their effectiveness and legitimacy.

Cons

  • Limited enforceability: Where regulatory frameworks are not binding on countries, and thus lack the status of law within them, they are necessarily less enforceable. In that case, enforcing these regulations depends on how willing governments are to implement them.
  • Slowness and complex decision-making processes: Reaching agreements among countries with different, often conflicting interests requires complicated negotiation processes that usually take a long time. International regulatory frameworks also lack the flexibility to keep up with fast-developing fields such as AI. Additionally, the need to find a baseline all countries can agree to usually leads to vague formulations that are not precise enough.

Self-regulation

Pros

  • Flexibility and prompt response to change: The AI industry can promptly adapt the regulatory frameworks it sets for itself, keeping up with technological development.
  • Specialized knowledge: Parties involved in the AI industry have the most specialized technical knowledge, making them more capable of understanding the intricate details of the technologies they work on and of setting more detailed and suitable regulations.
  • Minimum regulatory burdens: Limiting government intervention may lower costs and allow innovation and evolution to proceed faster and more flexibly.

Cons

  • Lack of enforceability and accountability: Self-regulation depends on voluntary commitment, which might not be enough to prevent harmful practices that some industry parties may engage in while seeking to maximize their interests.
  • Conflicts of interest: Companies may prioritize their interests and profits over users’ safety or ethical considerations.
  • Limited scope and transparency: Self-regulation efforts may not address all potential threats and may lack transparency in their preparation or implementation.

Multi-Stakeholder Initiatives

Pros

  • Diversity of views and expertise: Bringing together multiple stakeholders, including state governments, industry representatives, academics, and civil society, may produce regulatory frameworks that are more comprehensive, covering more aspects, and more balanced, harmonizing different interests.
  • Legitimacy and trust: Agreement among multiple stakeholders over AI regulation makes it more acceptable to those it addresses. It also earns more public trust in its enforceability.
  • Flexibility: Cooperative efforts by several parties may adapt regulatory frameworks to the continuous evolution of AI technology, making these frameworks more responsive to societal concerns.

Cons

  • Complex decision-making processes: Coordinating parties of very different natures may be a complex, time- and effort-consuming process.
  • Need to balance conflicting interests: Reaching an agreement may be difficult when parties have conflicting interests; a balance may not be reachable at all.
  • Limited enforceability: Multi-stakeholder initiatives depend on parties outside them to enforce the regulatory frameworks they reach. This may lead to many obstacles in implementing these frameworks consistently and effectively, or even at all.

The Issue of Overlapping AI Development and Use Stages

The overlap between AI development and use extends beyond the rapid realization of theoretical ideas into public products. This overlap is characterized by the parallel implementation of various development stages by the same entities. Consequently, distinguishing one stage from another becomes challenging.

In several instances where significant products were deployed, such as the series of large language models developed by OpenAI, research papers about these products were published in parallel with their actual deployment to targeted audiences. Most of these papers combine theoretical foundations with descriptions of development processes and of the outcomes of the product's experimental deployment. This means that the group that developed the theoretical research hypotheses also implemented those hypotheses as a product, performed practical tests, supervised its deployment, and made it available to targeted audiences.

Unclear, or even absent, boundaries between the stages of developing AI products make setting regulatory frameworks for this technology exceptionally difficult. This overlap between stages should be considered when discussing the suitable approaches for regulatory intervention at each stage.

Research Stages

The main challenges of regulating AI research are finding a balance between the openness the research process requires and the potential for its abuse, as well as dealing with ethical concerns related to data and algorithms. To meet these challenges, the preparation of regulatory frameworks should take the following points into consideration:

  • Risk assessment: Areas of research with higher risk potential, such as autonomous weapons, should be identified, and mechanisms for stricter supervision should be set for them. Risk assessment frameworks should be developed for use by research entities and those supervising their work (a minimal sketch of such a framework follows this list).
  • Ethical guidelines: A set of ethical principles targeting bias concerns, data privacy, and responsible behavior should be developed and supported for AI research.
  • Data governance frameworks: Frameworks for data access, collection, and use in research should be established responsibly, guaranteeing transparency and accountability.
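
To make the risk-assessment point concrete, here is a minimal sketch, in Python, of how a research entity might triage proposals into risk tiers. Everything in it is an illustrative assumption: the tier names are loosely inspired by the EU AI Act's risk categories, while the area-to-tier mapping, the hypothetical ResearchProposal fields, and the escalation rule are invented for illustration rather than drawn from any existing framework.

```python
# A minimal, hypothetical sketch of a research risk-assessment helper.
# Tier names loosely echo the EU AI Act's risk categories; the area
# baselines and the escalation rule are illustrative assumptions only.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Hypothetical mapping from research areas to baseline risk tiers.
AREA_BASELINES = {
    "autonomous_weapons": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "medical_decision_support": RiskTier.HIGH,
    "machine_translation": RiskTier.MINIMAL,
}


@dataclass
class ResearchProposal:
    title: str
    area: str
    uses_personal_data: bool = False
    oversight_measures: list[str] = field(default_factory=list)


def assess(proposal: ResearchProposal) -> RiskTier:
    """Return a baseline tier for the proposal's area, escalating one
    level when personal data is used without declared oversight."""
    tier = AREA_BASELINES.get(proposal.area, RiskTier.LIMITED)
    if proposal.uses_personal_data and not proposal.oversight_measures:
        order = list(RiskTier)
        tier = order[min(order.index(tier) + 1, len(order) - 1)]
    return tier


if __name__ == "__main__":
    proposal = ResearchProposal(
        "Face matching study", "biometric_identification",
        uses_personal_data=True)
    print(assess(proposal).value)  # -> "unacceptable"
```

In practice, a supervising body would maintain the area baselines, and the escalation rules would follow whatever criteria the regulatory framework itself defines.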

Development and Production Stages

Challenges for these stages are related to limiting bias, ensuring safety, and establishing standards for development. Dealing with these challenges requires taking the following into consideration:

  • Obligatory bias testing and mitigation: Regulatory frameworks should oblige entities developing AI systems to assess any potential biases in the data or algorithms they use and to address them (see the sketch after this list).
  • Safety assessment standards: Standard evaluation methods should be developed to ensure the safety and trustworthiness of AI systems before deployment.
  • Licensing policies: Programs for licensing AI technology developers may be established, obliging them to attain certain levels of safety and ethical criteria. However, licensing conditions and procedures should not create extra burdens for small entities and startups in particular. Licensing policies can also include exemptions based on the size or nature of activities.
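
To illustrate what an obligatory bias test could reduce to in practice, the following sketch computes the demographic parity gap, a common fairness metric: the difference in positive-outcome rates between demographic groups. The metric itself is standard; the 0.1 threshold and the idea of using it as a deployment gate are illustrative assumptions, not a legal standard.

```python
# A minimal sketch of an automated bias check using the demographic
# parity gap: the spread in positive-prediction rates across groups.
# The 0.1 threshold is an illustrative assumption, not a legal rule.
from collections import defaultdict


def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


def passes_bias_test(predictions, groups, threshold=0.1):
    """A hypothetical deployment gate: flag the system when the gap
    exceeds the assumed regulatory threshold."""
    return demographic_parity_gap(predictions, groups) <= threshold


if __name__ == "__main__":
    preds = [1, 1, 0, 1, 0, 0, 1, 0]            # model decisions
    grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
    print(demographic_parity_gap(preds, grps))  # 0.75 - 0.25 = 0.5
    print(passes_bias_test(preds, grps))        # False: gate fails
```

A real framework would likely mandate several complementary metrics (equalized odds, calibration, and so on), since no single fairness measure captures all forms of bias.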

Marketing and Deployment Stages

The challenges in these stages are related to the transparency of decision-making processes, accountability for outcomes and consequences, and promoting responsible use. The ability to deal with these challenges depends on the following considerations:

  • Explainability: Encouraging the development of transparent and explainable AI models allows their decision-making processes to be understood.
  • Assessment of algorithm impact: The entities marketing and deploying AI systems should be obligated to evaluate the societal and ethical impacts of algorithms used in these systems before their deployment.
  • Controlled tests: AI systems should undergo limited, controlled deployment tests, during which data about the systems' performance and user feedback on the different aspects of their operation are gathered before wide-scale deployment (a staged-rollout sketch follows this list).
  • Data governance frameworks: Along with the guidelines for research stages, frameworks are needed for using data during the deployment stages of AI systems to ensure responsible practices and protect users’ privacy.
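
The controlled-tests point above can be pictured as a staged ("canary") rollout, in which exposure widens only while monitored metrics stay within bounds. The sketch below assumes hypothetical stage sizes, metric names, and thresholds; an actual framework would define its own.

```python
# A minimal sketch of a staged ("canary") deployment gate for an AI
# system. Stage sizes, metric names, and thresholds are illustrative
# assumptions, not values prescribed by any regulation.
from dataclasses import dataclass


@dataclass
class StageMetrics:
    error_rate: float      # share of requests with harmful/failed output
    complaint_rate: float  # share of users filing complaints


ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]  # fraction of users exposed
MAX_ERROR_RATE = 0.02                     # hypothetical safety bound
MAX_COMPLAINT_RATE = 0.01


def next_exposure(current: float, metrics: StageMetrics) -> float:
    """Advance to the next rollout stage when metrics are healthy;
    otherwise fall back to the smallest stage for investigation."""
    if (metrics.error_rate > MAX_ERROR_RATE
            or metrics.complaint_rate > MAX_COMPLAINT_RATE):
        return ROLLOUT_STAGES[0]
    idx = ROLLOUT_STAGES.index(current)
    return ROLLOUT_STAGES[min(idx + 1, len(ROLLOUT_STAGES) - 1)]


if __name__ == "__main__":
    print(next_exposure(0.05, StageMetrics(0.010, 0.004)))  # -> 0.25
    print(next_exposure(0.25, StageMetrics(0.080, 0.004)))  # -> 0.01
```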

Monitoring and Preparedness

To ensure the effective regulation of AI development and industry, continuous monitoring mechanisms should be established for all stages of development. This includes implementing measures that guarantee prompt responses and efficient handling of unforeseen issues, thereby preventing their escalation, maintaining control, and minimizing potential damages.

AI Regulation and Human Rights

Developing AI technology has significant impacts on the current status and future of human rights. Some of these are positive, while others may be negative or represent threats with varying degrees of seriousness to a number of fundamental rights, especially the right to privacy and the right not to be subject to discrimination.

On the other hand, AI technology development processes and industry operations intersect with many rights. Many of these processes involve the exercise of rights, such as the right to academic freedom and the right to free access to information. Furthermore, AI is a promising field; its development and the maximization of its benefits fall under the umbrella of the right to development.

Regulatory frameworks for AI technology and industry face a primary challenge: ensuring that human rights are not violated on either side. This means protecting these rights against the technology's threats while also protecting the rights involved in developing AI technology and ensuring that its benefits are shared fairly.

Before any AI technology is allowed to proceed, its potential impacts on individuals' exercise of their rights and enjoyment of fundamental freedoms should be assessed. Regulatory frameworks should also include mechanisms for setting indices and standards to measure, monitor, and assess the effects of AI use on fundamental rights.

Positive Effects

Regulatory frameworks for AI technology and industry can contribute positively to protecting fundamental rights, especially the right to privacy. This may be achieved by obliging the parties involved to handle data in general, and personal data in particular, responsibly throughout all stages of AI system development.

This includes preparing datasets for training language models. In the deployment stages, it concerns protecting the personal data of AI systems' users and the data they own in general. These regulatory frameworks can also help limit potential biases and discrimination in decisions made with the help of AI systems.

Negative Effects

Regulatory frameworks for AI technology and industry may negatively affect the exercise of some fundamental rights if they are too restrictive or intervene where no intervention is needed. This especially concerns outright prohibitions, the obstruction of research efforts in certain areas, and overly intrusive monitoring mechanisms.

Likewise, regulations whose complexity, mechanisms, or procedures go beyond actual need may become an obstacle to progress. This amounts to a violation of the right to development and the right to free work of the individuals or entities concerned.

Conclusion: How Can AI Regulation Solutions Be Evaluated?

In this conclusion, the paper offers some guidelines that may be used to assess different solutions for setting regulatory frameworks for AI technology and industry (a scoring sketch based on these criteria follows the list). Among them are:

  • Comprehensiveness: How fully does the regulatory framework cover the actual threats and potential damages of developing and using AI technology?
  • Proportionality of intervention: How appropriate is the framework's level of intervention in the different AI development and use processes for achieving its goals?
  • Flexibility: Does the regulatory framework allow enough flexibility in its implementation to be suitable for the different conditions and cases of AI development and use?
  • Catching up with evolution: Does the regulatory framework set mechanisms for coping with the expected changes in the AI technology and industry scene?
  • Transparency and accountability: Does the regulatory framework set checks and mechanisms for ensuring the transparency of its enforcement and accountability in case its procedures are abused in any way?
  • Human rights guarantees: Does the regulatory framework ensure that the rights of concerned individuals and entities are not violated? Does it cover all the potential threats to human rights arising from the processes of AI technology development?
  • Efficiency indices: Do the procedures set by the regulatory framework for its enforcement allow measuring how effectively they achieve their goals?
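
As one way to operationalize these guidelines, the sketch below turns them into a weighted scoring rubric. The criteria names mirror the list above, but the weights and the 0-5 rating scale are illustrative assumptions; any real assessment would need to justify its own weighting.

```python
# A minimal sketch turning the paper's assessment guidelines into a
# weighted scoring rubric. Weights and the 0-5 scale are illustrative
# assumptions; the criteria names mirror the list above.
CRITERIA_WEIGHTS = {
    "comprehensiveness": 0.20,
    "proportionality": 0.15,
    "flexibility": 0.15,
    "catching_up_with_evolution": 0.15,
    "transparency_accountability": 0.15,
    "human_rights_guarantees": 0.15,
    "efficiency_indices": 0.05,
}  # weights sum to 1.0


def score_framework(ratings: dict[str, float]) -> float:
    """Aggregate per-criterion ratings (each on a 0-5 scale) into a
    single weighted score between 0 and 5."""
    missing = set(CRITERIA_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)


if __name__ == "__main__":
    ratings = {c: 3.0 for c in CRITERIA_WEIGHTS}
    ratings["human_rights_guarantees"] = 5.0
    print(round(score_framework(ratings), 2))  # -> 3.3
```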