AI and Democracy: How is Decision-Making Being Reshaped?

Introduction

The accelerating development of artificial intelligence (AI) technology has become an undeniable fact of our daily lives, to the extent that it no longer astonishes many, at least not with the intensity of the surprise that greeted the launch of the ChatGPT conversational application at the end of 2022. As the initial excitement over this technology's intrusion into the lives of millions, and their first direct use of it, gradually fades, it becomes possible to discuss its future promise as well as its pitfalls and threats.

In reality, AI applications have been present in the lives of millions for years, and some have had direct impacts on the lives and destinies of tens of thousands. Screening job applications, deciding eligibility for social welfare, and assessing qualification for parole are just a few examples of critically important decisions that AI was used to make years before the advent of ChatGPT and similar applications.

Even before ChatGPT’s emergence, AI algorithms on social media platforms selected the content their users saw and suggested people or pages for them to follow. AI models were also used not only in election campaigns but also in directing public opinion to influence election outcomes. This demonstrated that this technology could profoundly impact political reality and, in the most pessimistic scenarios, pose an existential threat to democracy itself.

In 2016, European Union institutions embarked on preparing the first regulatory legislation for governing AI technology. The process was slow and arduous, taking several years. The European AI Act finally emerged in 2023, amid the uproar caused by the release of ChatGPT, and despite all the effort invested in it, it already seemed outdated, surpassed by new developments.

The threats and pitfalls of using AI technology encompass almost all aspects of daily community life: from job displacement to environmental threats, from national security threats to threats to the right to privacy and freedom of expression. However, the threat this technology poses to democracy and the democratic decision-making process holds a unique position. If people's will and their right to manage their own affairs are usurped, they lose the ability to confront the threats of AI technology in any other domain.

The landscape of the relationship between the various uses of AI and the democratic decision-making process is immensely broad and complex. Therefore, this paper begins by discussing the long-standing relationship between the use of information in decision-making processes and how this process expresses popular will, drawing a line intersected by numerous AI applications.

The paper also discusses the pathways of democratic decision-making, each of which in turn provides further points of intersection with AI applications. Finally, in the same section, the paper discusses the different forms by which AI technologies can be classified. It reviews the areas of intersection between each of them and the democratic decision-making process.

Additionally, the paper addresses the aspirations and concerns raised by the direct use of AI technology in the democratic decision-making process and its indirect effects on it. It also explores the possibilities of AI technology supporting more democratic decision-making pathways and presents a vision for a better future for the democratic decision-making process, proposing the necessary conditions for this vision to be realized.

Information and Decision-Making

Any decision-making process relies on information—its availability, comprehensiveness, credibility, and the reliability of its analytical outcomes. In contrast, democratic decision-making distinguishes itself by requiring that it express the popular will. The reliance on collecting as much information as possible in governance is one of the primary characteristics of the modern state, differentiating it from earlier forms of rule. As modern societies grew more complex and governmental bureaucracy expanded, the volume of data needing collection and analysis for governance-related decisions increased. Concurrently, the need for specialized knowledge and expertise to handle this data also grew.

This increasing dependence on specialists to acquire the information upon which political decisions are built has given rise to what is known as technocracy, or the rule of experts. One implication of this phenomenon is the wresting of decision-making authority in key aspects of governance from the hands of politicians, whether they represent the people or exercise authoritarian, non-democratic rule. Instead, this authority has shifted to national or international bureaucratic institutions.

Examples of such institutions that now wield significant political influence include intelligence agencies, security apparatuses, and central banks in various countries, as well as international bodies like the International Monetary Fund. Critically, those who manage these institutions are often not elected by the very people whose daily lives are profoundly impacted by their decisions. They hold the authority to make these decisions based on their specialized expertise, which ultimately stems from their monopolization of skills in utilizing information within their fields of specialization.

Unlike any previous technology for handling data and information, artificial intelligence (AI) opens a new and distinct chapter in the relationship between information and decision-making. For the first time, it becomes possible to replace specialized human expertise as the intermediary in this relationship. In theory, the alternative in this case need not monopolize its role: AI technology is a tool that anyone can use. Any concerned party can use it to collect data relevant to a specific issue, analyze that data to extract information clarifying the various factors influencing the issue, and then identify the available decision alternatives and the potential impact of each.

There is no doubt that AI systems and models will increasingly mediate between data and information on one side and political decision-making processes on the other. What makes this all but certain is that these systems and models will perform this mediating role more efficiently and at lower cost than human alternatives.

The impact of this on the democratic nature of decision-making processes will depend on several factors. Chief among these is who will possess the authority over the design decisions for AI systems and the power to direct their development pathway, and to what extent access to these systems and models will be equally available without discrimination. The fact that these factors remain unresolved today means the future is open to several alternatives, some negative and others positive.

Pathways of Democratic Decision-Making

Beyond the interplay between information and popular will, the democratic decision-making process unfolds through various pathways. These pathways ultimately determine the participating actors and the extent to which the process reflects the popular will. They also create different points of intervention where AI technology can play multiple roles. This section explores these pathways and how AI can be used to influence them.

The primary pathway in the current reality of democratic systems involves citizens electing representatives to legislative and executive positions. This essentially delegates decision-making authority to these representatives on behalf of the citizens. This pathway presents numerous points of intersection with various uses of AI systems and models.

These AI systems and models can play several roles throughout the electoral process, which determines who will ultimately hold decision-making power. These roles include candidates using AI tools to develop their electoral agendas, formulate policies to present to voters, plan and manage their election campaigns, and produce promotional content.

AI tools can also be used to influence election outcomes by steering voters towards specific candidates or policies. Conversely, AI tools can be employed in overseeing the electoral process and providing safeguards against manipulation.

Furthermore, AI tools can be utilized in the decision-making process itself, specifically in the stages of data collection and analysis. This includes deriving policy alternatives based on various factors influencing their success and effectiveness, as well as their anticipated impacts and outcomes.

The second pathway for decision-making involves institutions, organizations, and groups representing vested interests or diverse ideologies and objectives, seeking to influence the process through various means of pressure. These actors can leverage AI tools to formulate policies they advocate for, based on factual information, or to determine their stance on proposed policies from other parties, based on how well these policies align with their interests or goals.

The third pathway involves direct popular pressure aimed at influencing decision-making on issues that capture public attention. Such pressure may rely on traditional mechanisms for expressing opinion, such as demonstrations, sit-ins, and strikes. Increasingly, it also manifests through digital communication tools, especially since the widespread adoption of the internet and, more notably, after the emergence of social media platforms.

Social media algorithms, which rely on AI technology, play an influential role in the spread and amplification of prevailing public opinion trends. Additionally, AI models are increasingly used to create content for expressing positions on different issues and attracting more supporters, whether this content is based on information or appeals to emotions and biases.

Artificial Intelligence and Its Forms

Artificial Intelligence is not a single technology but rather a family of diverse technologies. This section discusses the various forms AI technologies can take and their respective areas of application in the democratic decision-making process.

Predictive Analytics Systems

Predictive analytics models typically operate on extremely large datasets. The data they process represents various factors influencing a given process. By analyzing this data, these types of AI systems aim to predict the expected outcomes of the process they are studying. Predictive analytics systems can be used to explore the likelihood of a specific decision or policy succeeding in achieving its objectives. They can also be used to forecast the results and consequences of making a particular decision or pursuing a specific policy.
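As a purely illustrative sketch of this idea, the toy model below scores a hypothetical policy's chance of success from a few hand-picked, hand-weighted factors. Every name and weight here is invented for illustration; a real predictive analytics system would instead learn such weights from large historical datasets rather than having them assigned by hand.

```python
import math

# Hypothetical factors and weights, chosen for illustration only.
# A real system would learn these from large historical datasets.
WEIGHTS = {
    "public_support": 2.0,          # share of citizens backing the policy (0..1)
    "budget_coverage": 1.5,         # funded fraction of the estimated cost (0..1)
    "institutional_capacity": 1.0,  # readiness of implementing bodies (0..1)
}
BIAS = -2.5

def success_probability(factors: dict[str, float]) -> float:
    """Logistic score: maps the weighted factors to a 0..1 probability."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-z))

policy = {"public_support": 0.7, "budget_coverage": 0.8, "institutional_capacity": 0.6}
print(f"Estimated chance of success: {success_probability(policy):.0%}")
```

The same scoring function can be re-run with different factor values to compare policy alternatives, which is the core of the use cases described above.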

The intersection of these systems with the democratic decision-making process depends on who uses them and for what purpose. Individuals and institutions in decision-making positions can use them to arrive at decisions and policies with the best chances of achieving particular goals. They can also use them to identify any potential negative impacts of their decisions or policies, enabling them to prepare plans to address these outcomes or effects.

On the other hand, if relevant stakeholders in the democratic decision-making process, such as citizens, political parties, special interest organizations, civil society organizations, and various media outlets, are given access to predictive analytics systems and the necessary information, they can determine their stances on various decisions and policies based on their anticipated effects on themselves and those they represent. They can also play an awareness-raising role by explaining these results to the general public.

Natural Language Processing (NLP) Systems

Natural language here refers to the everyday spoken and written language used by humans in their conversations and writings, as opposed to artificial languages like those used in software development. Natural Language Processing (NLP) systems are used to understand and produce written text or spoken conversations. These systems serve as a suitable bridge between various AI systems and ordinary people, who can easily express themselves using spoken and written language and understand what is presented to them through it. For this reason, conversational robots (chatbots) are among the most prominent applications of NLP systems.

NLP systems can be used to detect public opinion trends by analyzing responses from representative samples in public opinion surveys on specific issues. They can also analyze social media user posts to identify issues of greatest concern to citizens and their primary stances on them.
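The aggregation step described above can be sketched in a few lines. The example below uses naive keyword matching over invented posts purely for illustration; real systems would rely on trained NLP models for the tagging itself, but would summarize the results in much the same way.

```python
from collections import Counter

# Hypothetical issue lexicon, invented for illustration; production systems
# would use trained language models rather than keyword matching.
ISSUE_KEYWORDS = {
    "healthcare": {"hospital", "clinic", "insurance", "doctors"},
    "housing": {"rent", "housing", "eviction", "mortgage"},
    "transport": {"bus", "metro", "traffic", "commute"},
}

def tag_issues(post: str) -> set[str]:
    """Return the issues a post touches, by simple keyword overlap."""
    words = set(post.lower().split())
    return {issue for issue, kws in ISSUE_KEYWORDS.items() if words & kws}

posts = [
    "Rent is up again and eviction notices everywhere",
    "Waited four hours at the hospital clinic today",
    "The metro was late and traffic is unbearable",
    "Housing costs are pushing families out of the city",
]

counts = Counter(issue for p in posts for issue in tag_issues(p))
for issue, n in counts.most_common():
    print(f"{issue}: mentioned in {n} of {len(posts)} posts")
```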

Additionally, conversational applications can answer citizens’ questions about public affairs and government policies, offering them simplified explanations of current or proposed legal legislation. These systems can also serve as an interface for predictive analytics models, providing citizens with clarifications on the expected impact of public policies or legislation on their lives.

Decision Automation Systems

AI systems have been increasingly used for years in small-scale decision-making processes. For example, many private and public institutions now use AI systems for employment decisions, disciplinary actions, and promotions. Similarly, some governments use AI systems to decide on immigration and asylum applications and to rule on early release for prisoners. There is no technical barrier preventing the development of AI models specialized in making decisions on a larger scale, including political decisions.

Some may argue for the need to use these systems on a large scale primarily in crisis management and responding to urgent risks and threats. Specifically, these systems can be vital in countering large-scale cyberattacks that could target critical state infrastructure and highly sensitive facilities. Given that cyberattacks can occur suddenly and evolve very rapidly, confronting them necessitates early detection and immediate response.

Conversely, others advocate for the broader use of these systems across various areas of political decision-making, claiming that this ensures the neutrality of the decision-making process and its exclusive focus on achieving the public good. However, this view disregards the bias inherent in some AI models due to the biased data they were trained on, whether intentionally or unintentionally.

Generative AI Systems

This category includes AI models that use language processing technology to generate texts in natural language. The applications of these models and systems have evolved and expanded tremendously in recent years. Today, generative AI models can produce content that utilizes all mediums humans interact with.

In addition to readable text, these models generate images, audio clips, and video clips, and they also produce software code. The content created by these models increasingly resembles human-produced content, to the extent that, in many cases, it is impossible to distinguish entirely synthetic content from traditionally produced content.

Unlike decision automation models and systems, which can, at least theoretically, completely bypass human will, generative AI systems can influence human will through the content they produce. Particularly, when these models are combined with predictive analytics systems—which can identify the responses of individuals with specific characteristics to specific content—they can be used to generate content specially designed to elicit predetermined responses from the individuals exposed to it.

This type of AI system raises a difficult question: when can the influence of specific messages be judged to undermine free will? After all, political messaging has long employed various methods to influence public consciousness, whether by appealing to people's reason or by provoking and exploiting their emotions and biases.

Does the democratic character of a decision relate only to the extent of public participation, or does it extend to how the will of the public is formed? Even consensus on rejecting practices like deepfakes is not guaranteed. Does the mere fact that a message is synthetically created make it false? Or must its content also be untrue, that is, contrary to reality?


Potential for Supporting More Democratic Decision-Making Pathways


Fostering Democratic Dialogue Based on a Better Understanding of Reality

Democratic dialogue is open to everyone equally and without discrimination. Such dialogue can be an ideal tool for democratic decision-making, provided its effectiveness is guaranteed. Obstacles to this effectiveness typically include severe polarization and a lack of common ground among different parties. This sometimes leads to the impossibility of agreeing on solutions that balance the interests of all while requiring each party to make acceptable concessions. Therefore, the availability of reliable information about realistic options and the extent to which each option fulfills the interests of various parties in the short and long term is one way to overcome these impediments to effective democratic dialogue.

AI systems and models can be a source of information upon which effective democratic dialogue is built. For this to be achieved, these systems and models must be accessible to all, transparent, free from biases, and reliable in terms of their accuracy and comprehensiveness. This fosters trust in the information provided by these systems and models, helping different parties build positions based on this information and contributing to a common ground for effective democratic dialogue.

Supporting the Electoral Process for Better Representation of Societal Interests in Decision-Making

AI systems and models can be used in multiple ways to support elections of all types and levels. They can be employed to develop more realistic electoral platforms that better serve the interests of the largest number of societal groups. They can also be used to organize election campaigns that present these platforms in the best possible way, targeting different segments of society with the optimal approach for addressing each.

On the other hand, AI systems and models can be utilized to evaluate the realism of electoral programs presented by parties or candidates, and to determine the extent to which proposed policies in these programs serve the interests of specific voter demographics. This allows voters to make informed choices based on what best fulfills their aspirations and responds to their demands.

Furthermore, AI can be used to design more reliable tools for overseeing various stages of elections. These tools can detect any violations of fair election laws and rules, including methods of manipulating campaign finance, electoral propaganda irregularities, and techniques for falsifying election results.

In conclusion, AI systems can play various roles that collectively ensure the electoral process leads to genuine representatives of different societal groups reaching decision-making positions. This makes the electoral process a means for more effective popular participation in decision-making.

Traditionally, expanding direct citizen participation in decision-making processes has been hampered by the limitations of previously available tools for collecting and analyzing the opinions of millions of citizens on detailed issues affecting their lives. For example, it’s practically impossible, using methods available until recently, to hold a referendum on every decision that needs to be made regarding public affairs.

However, reality is also too complex to reduce most public issues to a simple yes or no question. What’s needed is to enable citizens to express their opinions on issues that concern them and affect their lives in unrestricted terms, not bound by predefined options. Extracting specific policy directions and decision-making guidance from millions of freely phrased responses was, until recently, almost impossible. This is where AI technology comes in: its systems and models are specifically capable of achieving this without wasting time or human resources.

AI models specializing in processing and analyzing big data can be used to build tools for receiving citizens’ opinions on public affairs. These tools can target citizens at both local and national levels and, through analyzing their opinions, distill policies that, with the greatest possible accuracy, represent their priorities, urgent needs, and future aspirations.
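As a rough sketch of how freely phrased responses might be distilled, the toy code below groups similar responses by word overlap and labels each group by its most frequent terms. It is a deliberately minimal stand-in built on invented sample responses: production systems would use language-model embeddings and far more robust clustering.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "we", "need", "more", "our", "is", "are",
             "in", "for", "to", "and", "of", "on", "my", "be", "should"}

def tokens(text: str) -> set[str]:
    return {w for w in text.lower().split() if w not in STOPWORDS}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(responses: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Greedy single-pass clustering: attach each response to the first
    cluster whose seed it resembles, otherwise start a new cluster."""
    clusters: list[tuple[set[str], list[str]]] = []
    for r in responses:
        t = tokens(r)
        for seed, members in clusters:
            if jaccard(t, seed) >= threshold:
                members.append(r)
                break
        else:
            clusters.append((t, [r]))
    return [members for _, members in clusters]

responses = [
    "We need more affordable housing",
    "Affordable housing should be the priority",
    "Fix the potholes on main street",
    "Main street potholes damaged my car",
]

for group in cluster(responses):
    top_words = [w for w, _ in Counter(w for r in group for w in tokens(r)).most_common(2)]
    print(top_words, "->", len(group), "responses")
```

Ranking the resulting groups by size gives a crude picture of which concerns recur most often, which is the distillation step the paragraph above describes.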

Such tools might help support a more democratic decision-making process than simply allowing citizens to vote directly on predefined options. Often, citizens can’t recognize how suitable a particular decision is for their priorities and aspirations, but they are always capable of expressing these priorities and aspirations in their own language.


Concerns Regarding the Misuse and Abuse of Artificial Intelligence


Margins of Error and the Absence of Tools to Predict or Measure Them

No process is entirely free from the possibility of errors. In practical terms, there’s always an acceptable margin of error in any operation. This margin is defined based on the maximum error that does not undermine the process’s consistency with its objectives. However, with AI models, there’s a genuine difficulty in determining the permissible error margin in their operations.

This is due not only to the immense complexity of these models; in the case of some AI technologies, the models act as black boxes that cannot easily be opened. In many instances, it is practically impossible to know in advance how an AI model will behave, or what effect increasing its capabilities at a certain rate, or modifying the datasets used in its development and operation, will have.

Moreover, the errors made by AI models can be grave. Generative AI models, for example, routinely hallucinate, fabricating non-existent information and even inventing false sources for it. This type of error is unacceptable in political decision-making processes, which can affect the lives and interests of millions of citizens, especially when such errors cannot be predicted or their magnitude measured beforehand, and when some are difficult, perhaps impossible, to detect.

Design Bias and Data Bias

AI systems and models are, ultimately, human products. They are susceptible to reflecting the biases of those who design and develop them, whether consciously or unconsciously. When it comes to designing tools that contribute to political decision-making processes, the chances of biases being present are greater, and their impact can be profound.

Currently, conversational applications relying on AI models are already being directed to block or modify their responses for purposes such as not supporting terrorist entities or avoiding offense to social groups. While these directives may seem legitimate on the surface, what constitutes a “terrorist entity” or “offense to a group” is often influenced by the nature of the data on which the model was trained, which may be biased and reflect certain views and stances over others.

Achieving access to datasets that represent a balanced spectrum of different political stances is practically impossible. The representation of these stances is shaped by a large number of factors, so dominant viewpoints end up represented far more extensively than marginalized ones, in proportions that do not reflect their true weight in reality.

For instance, if datasets rely on internet content, rates of content creation correlate strongly with income and education levels in any society. Higher-income and more educated groups will therefore be represented in any such dataset at a much higher rate than their proportion of the population, while in certain societies the poorest segments may be almost entirely absent from internet-derived datasets.

Deliberate Manipulation in Design and Datasets

Beyond the potentially unintentional or well-meaning bias in the design of AI models or the selection of their training datasets, there’s always room for deliberate manipulation in both aspects. For instance, AI models can be intentionally directed to present false or distorted results with the aim of misleading their recipients.

It’s impossible to fully enumerate the ways in which an AI model’s design can be manipulated or controlled during its operation. Similarly, it’s difficult to list all the methods by which datasets can be selected, modified, or fabricated in a way that leads to significant deviations in the functioning of the AI models relying on them.

AI models, especially generative ones, can be designed or directed to produce responses and content that reinforce racist or misogynistic stereotypes, or that promote racist, sectarian, or gender-based hate speech. Similar results can be achieved through carefully selected datasets that lead the models trained on them to produce such content automatically, without direct instruction.


Threats of Decision-Making Confiscation


The first section of this paper discussed how AI technology can serve as an ideal alternative to specialized human expertise in mediating between information and the decision-making process. This technology possesses capabilities never before available to specialized human expertise. It can process vast amounts of data and analyze it with high precision, enabling it to arrive at options that consider an unlimited number of factors.

Moreover, AI technology can assess and quantify variables that were previously difficult to account for, particularly concerning human psychological responses and expected reactions. These capabilities are enticing and could lead to increasingly entrusting AI systems and models with the reins of the decision-making process, under the argument that they are not susceptible to human biases and emotions, especially unconscious ones.

In scenarios that might seem extreme today but are not impossible in the future, this threatens the complete confiscation of human will. In other words, it poses a threat that AI systems and models could practically govern, while politicians elected by their people are reduced to mere operators of these systems.

Manipulation of Public Opinion and Directing the Democratic Decision-Making Process

This threat is a tangible reality that has been successfully demonstrated in influencing electoral outcomes, most notably the 2016 US presidential election. However, the evolution of AI models in recent years makes that initial experience merely a rudimentary example of what AI can now achieve.

If the ability to collect vast amounts of personal data about citizens and to analyze their psychological responses is combined with generative AI's capacity to produce multimedia content that is difficult to verify as fake or artificial, a scenario becomes conceivable in which AI is used to steer a critical mass of citizens toward predefined objectives. This could include influencing their electoral choices in favor of specific parties, prompting them to adopt stances that affect decision-making processes, or even, in the worst cases, exacerbating sectarian tensions and inciting civil violence.

The ultimate outcome of this threat is the usurpation of the free will of the masses. This represents a worse scenario than absolute technocracy because the democratic pathway to decision-making would ostensibly remain intact. In fact, this scenario could be fully realized even within the most democratic decision-making pathways, meaning those that allow for the greatest possible direct public participation. Furthermore, with the accelerating development of AI, detecting such practices of manipulating public opinion and citizen orientations could become exceedingly difficult.

Towards a Roadmap to Ensure AI Supports Democratic Decision-Making

Artificial intelligence technology is a tool whose ultimate power cannot be predicted. Indeed, it might transcend being merely a tool over which its designers and developers have complete control. This paper has briefly discussed various manifestations of AI technology’s potential impact on the democratic decision-making process. The conclusion to be drawn is that these potentials, whether negative or positive, are enormous and can fundamentally and decisively change how humans manage their lives. Recognizing this truth means that a clear roadmap to ensure the avoidance of existing and potential negative impacts of AI on the democratic decision-making process is an urgent necessity that should not be delayed.

Requirements for Regulating AI Governance

The overarching regulation of AI development processes should be the priority. Without such regulation, it’s impossible to guarantee the avoidance of the potential negative impacts of this technology’s unchecked development. Indeed, recent years have shown that the profit maximization motive and the competition for a larger share of potential markets, which drives technology companies’ initiatives and plans, are increasingly leading to an uncontrolled trajectory for this technology’s evolution.

Moreover, countries worldwide are, in turn, under immense pressure due to competition among themselves to possess AI technology. This leads to the most competitive nations—which are also theoretically those under whose legislative jurisdiction major corporations fall—being more hesitant to impose regulatory frameworks for AI governance. Their fear is that such frameworks might hinder the development of this technology within their borders.

While AI regulatory frameworks enacted by some countries in recent years have considered certain aspects that could affect the democratic decision-making process, most of these aspects, and those with the deepest impact, still remain inadequately covered.

Furthermore, the political will in the most influential nations does not take the potential threats of AI to the democratic decision-making process seriously enough. Recent trends in most of these countries indicate their continued support for the current trajectory of this technology’s development, especially the monopolization by major corporations of decisions guiding this development.

This existing situation clearly demonstrates that any effective regulatory frameworks for AI governance must establish democratic governance that transcends narrow national interests. This means that control over the direction of AI development should not be left to competing, profit-driven technology companies. Likewise, competing nations, in turn, cannot establish effective regulatory frameworks for AI governance. Therefore, there is no alternative but to establish and enforce these regulatory frameworks at an international level.

On another note, protecting the democratic decision-making process from the potential negative impacts of AI requires that any regulatory frameworks impose procedures to ensure that the development, deployment, and operation of AI systems and models adhere to the fundamental principles necessary for ethical governance. Foremost among these principles are transparency, ensuring freedom from bias, human oversight, and the protection of vulnerable groups.

For the political decision-making process to be democratic, it must express the popular will by ensuring the broadest possible citizen participation. However, this will faces a dual threat from certain uses of artificial intelligence. On one hand, popular will may be overridden through the monopolization of the decision-making process by AI systems and models, either by completely excluding human intervention or by confining this intervention to a limited number of actors who own these models or can afford their cost. On the other hand, the freedom of this will itself is threatened through the use of AI to deliberately steer citizens’ choices, which strips popular participation of its true democratic essence.

There are several ways to guard against both of these threats. One is to use regulatory frameworks governing AI development to prohibit certain practices and types of AI models. For instance, AI models could be barred from going beyond merely proposing alternatives for decisions or policies on public affairs, so that the final choice always remains in human hands.

Other practices can likewise be prohibited, such as using personal data to model individuals’ psychological characteristics and exploiting those profiles to directly influence their free will. Additionally, these frameworks can impose special restrictions and oversight requirements on the use of AI models for election campaigning.

In parallel with what regulatory frameworks can offer, the most effective way to counter the threats of AI systems and models may be to use these very systems and models to protect and empower the popular will: deploying them to detect misinformation and fabricated content, and to provide citizens with accurate and reliable information.

This task may be best suited to independent entities: media outlets, or non-profit organizations such as civil society organizations and activist groups concerned with defending democracy and human rights, as well as those seeking to free digital technology more broadly from the control of major tech companies and governments.

A Vision for a Better Future

A previous paper discussing the impact of social media platforms on the future of democracy argued that the profit-driven business model of these platforms makes eliminating the threats they pose to democracy nearly impossible. The same applies to the threats that the development of AI models poses to the democratic decision-making process.

The core issue lies in the dominance of major technology companies over AI development processes and the fierce competition among them to achieve the highest possible return from marketing this technology’s applications. This is further exacerbated by the linking of nations’ national interests to the ability of their companies to compete in a market upon which their security and destinies now depend.

This situation weakens the regulatory frameworks that states can put in place for AI governance. Leading states will remain keen not to deprive their companies of any competitive advantage. Countries that are not leaders in this field, for their part, are careful not to impose legislation so strict that it might push major companies out of their markets, especially if such restrictions would cut profits below what those companies achieve elsewhere; otherwise, these countries would deprive themselves of the benefits of advanced AI applications.

While repressive regimes are unlikely to impose regulatory frameworks that criminalize, without exception, the exploitation of AI systems for purposes such as surveillance, censorship, and hacking information systems, democratic countries may criminalize these uses within their borders and against their own citizens. They will not, however, criminalize their companies’ development and marketing of these same technologies.

The implication is that the current circumstances of AI development subject the fate of the democratic decision-making process to the outcomes of this development. This occurs at a time when the momentous decisions for developing this technology are made by entities that are not democratically managed and are not practically subject to institutions elected through democratic means.

The only alternative to this contradiction is its reversal: subordinating AI development to the democratic decision-making process. Ensuring this is achieved genuinely, and not just superficially, requires two things:

1. Considering AI a strategic resource owned by the people, with their participation in its governance through the tools of democratic practice.

2. Utilizing AI itself to improve democratic decision-making processes and ensure broader popular participation. Relying on elections as the sole means for people to manage their affairs in a representative democracy makes it easy to strip that democracy of its democratic character in the age of AI: only those able to exploit AI’s unprecedented capabilities to control the course and outcomes of electoral processes would be represented.

Conclusion

With the rapid development of artificial intelligence, the line between the present and the future has become less clear; technological advancements once presumed to belong to tomorrow are now a reality today. In light of the fundamental threats this technology poses to the democratic decision-making process, caution becomes a logical necessity, and a rapid response to confront these threats becomes urgent.

Although these risks have elicited widespread reactions, from researchers in their studies, parliaments in their debates, and traditional and social media in their various forms, serious and concrete action to address them remains slow and hesitant.