Negative Effects of Artificial Intelligence

It is no secret that technology advances exponentially over time: new tools with multiple uses appear every day, facilitating countless tasks for humanity.

Among them is a technology that has developed enormously in recent years. Although a few decades ago it was little more than a distant dream of science fiction, today it is an increasingly unsettling reality: artificial intelligence (AI).

John McCarthy, widely regarded as the father of AI, defined it in a 2004 paper as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

Today, AI allows us to process large amounts of data quickly. These calculations help us understand market trends, public opinion, and atmospheric changes. The application of AI to robotics has driven great advances in medicine, telecommunications, and home automation. These applications have changed our daily lives, our homes, and our interactions with others; however, the AI that exists today is known as narrow or weak AI, because each system can handle only a limited range of tasks and only a small fraction of the technology's potential is being exploited.

This technology is still under development, and although bringing it to its full potential would be one of the greatest achievements in human history, it would not necessarily be for our benefit: AI could be used to end wars or eradicate disease, but it could also power autonomous killing machines, increase unemployment, or facilitate terrorist attacks.

This paper sheds light on the biggest dangers and negative effects surrounding AI, which many fear may become an imminent reality. These negative effects include unemployment, bias, terrorism, risks to privacy, and threats to freedom of expression, each of which the paper discusses in detail.

1. Artificial Intelligence and Unemployment: 

According to an OECD report, even with AI still at an early stage of development, 14% of jobs worldwide could be affected by its emergence.

The OECD explains that some occupations are more likely to be affected by AI than others, and classifies them on a scale from “least exposed” to “most exposed”. Jobs in the “most exposed” category will not necessarily be replaced by AI, but they will feel its impact most strongly.

The report also highlights the fields that fall under the most exposed category. These are mostly highly skilled technical occupations, such as clinical laboratory technicians, optometrists, and chemical engineers.

However, the report also notes a positive side of AI's arrival in these jobs. Workers in the most exposed occupations have seen considerable changes in the way they work, changes that have mostly eased manual tasks and been accompanied by higher wages and more training. The OECD has also found that, at present, AI is nowhere near replacing the workforce; rather, it is adding to workers' productivity. The studies presented by the OECD thus remain divided on the impact AI will have on employment and wages.

In conclusion, great uncertainty remains about the impact that a more developed AI will have on the supply and demand of jobs. For some workers it may mean higher productivity and higher salaries; for others the danger is tangible, since automating certain processes limits the need for human intervention and may ultimately eliminate their jobs.

2. Artificial Intelligence and Bias:

In general terms, a bias is a discriminatory attitude or prejudice held by a person. Since AI is created by humans, it is not free from bias: while programming, developers may embed their own personal biases in an algorithm. Whether they do so consciously or unconsciously, some degree of that bias can surface in the algorithm's behavior, making it unfair.

Bias can find its way into AI systems through several channels. AI systems learn to make decisions from training data, and that data may encode biased human decisions or reflect historical and social inequalities. Bias can also enter directly through the design choices of the people who build the system, based on their own prejudices.
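To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python; the dataset, group labels, and numbers are all invented for the example. A model trained on historical hiring decisions that favored one group reproduces that preference in its own predictions, which can be detected by comparing selection rates across groups:

```python
# Hypothetical sketch: a model trained on biased historical decisions
# reproduces that bias in its own predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)            # genuinely job-relevant feature

# Historical labels: hiring decisions that favored group 1 regardless
# of skill -- the "biased human decisions" embedded in the data.
hired = ((skill + 0.8 * group + rng.normal(0, 1, n)) > 0.5).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Demographic parity check: compare selection rates per group.
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# The gap between the two rates shows the model has learned the
# historical preference for group 1, not just skill.
```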

Similarly, bias can arise from lack of representation. When certain groups are absent from the teams that build a system or from the data it is trained on, the AI will not reflect their experiences; it will serve only part of the population and fail to be inclusive.

This is not to deny that an AI system could, in principle, be built to make impartial decisions based on data. But such a system will only be as good as the quality of its input data, so AI cannot be expected to reach complete impartiality any time soon.

Different measures can be implemented to reduce bias in AI systems, such as:

  • Selecting training datasets carefully.
  • Building diverse teams.
  • Being more inclusive, which leads to more inclusive algorithms and reduces exclusion bias.
  • Creating awareness of AI, how it works, and how it may unconsciously follow a biased pattern; once this is noticed, the pattern can be corrected.
  • Requiring institutions that create algorithms to be more transparent about the methods they use to collect the data on which the programming is based, so that the cause of a bias can be identified.

Another practice that can be adopted, in addition to the above, is the well-known “blind taste test”: attributes the programmers already know to be a source of bias are withheld from the system, allowing the AI to form a judgment of its own. In addition, a number of technological tools have been designed over time to reduce bias in learning models and algorithms.
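As a companion to the earlier sketch, the following hypothetical example illustrates the blinding idea: the protected attribute is simply withheld from training. Again, all data here is invented for illustration:

```python
# Hypothetical "blinding" sketch: train on job-relevant features only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)               # protected attribute
skill = rng.normal(0, 1, n)                 # job-relevant feature
hired = ((skill + 0.8 * group + rng.normal(0, 1, n)) > 0.5).astype(int)

# Blind model: the protected attribute never enters training.
blind = LogisticRegression().fit(skill.reshape(-1, 1), hired)
pred = blind.predict(skill.reshape(-1, 1))

for g in (0, 1):
    print(f"group {g}: blinded hire rate = {pred[group == g].mean():.2f}")
# Rates come out (near) equal because the model can only see skill.
# Caveat: in real data, other features often act as proxies for the
# protected attribute, so blinding alone does not guarantee fairness.
```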

3. Artificial Intelligence and Terrorism:

Terrorism, in particular, has evolved with the internet and social networks, which have become indispensable to terrorist groups, whether for recruiting followers, buying weapons, spreading messages of hate, or distributing tutorials. These technologies have also been used to create weapons that identify and attack specific groups.

Terrorist groups have turned to AI because it increases their reach and speed: it widens the scope of their activities and has also proved helpful for recruitment.

The United Nations Office of Counter-Terrorism has released a report on the use of AI by terrorist groups. The report notes that terrorists have chosen to be early adopters of new technologies, especially those that emerge with little regulation or governance. Thus, while new technological tools are developed to help humanity evolve, advance, and preserve its existence as a species, terrorists also obtain new tools to manipulate and weaponize in order to spread terror, and AI is no exception.

The list of potential malicious uses of AI is extensive; among them are:

  • Enhancing cyber capabilities for denial-of-service attacks.
  • Supporting the work of malware developers.
  • Integrating AI into ransomware.
  • Making password guessing easier.
  • Breaking CAPTCHAs.
  • Using autonomous vehicles in terrorist attacks.
  • Equipping drones with facial recognition.
  • Developing genetically targeted bio-weapons.

On the other hand, it is equally undeniable that AI can be used responsibly to combat terrorism and extremism. It has proved especially helpful in counter-terrorism operations because of its ability to predict terrorist movements: the same technology terrorist groups use to carry out attacks is also being used to track them and, to a certain extent, prevent their planned operations.

So it can be said that AI has, and will continue to have, great relevance to terrorism, whether positive or negative. Everything will depend on how it is developed and applied, who can access these systems, and how effectively and efficiently governments regulate its use, among other things.

4. Artificial Intelligence and Privacy:

The wider the range of tools and applications that AI offers, the larger the number of problems and risks it can cause. One of the biggest concerns with AI is privacy. Our dependence on AI has made it impossible to stay away from it; across all geographies, the reach of technology has brought us into contact with AI.

AI has increased surveillance; it has taken on the role of “Big Brother”, always watching us and tracking the data we consume. For example, when we search for shoes on an e-commerce website and then switch to a social media application, we immediately see shoe advertisements on that platform. AI thus also plays a role in shaping our decisions. It also invades our private spaces: smart speakers such as Alexa and Google Home work on voice commands and learn what one does every day, and phones use iris detection and other biometric data. AI therefore has access to a great many of our personal details.
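The retargeting loop described above can be pictured with a small, entirely hypothetical sketch; the site names, events, and matching rule are all invented. A tracker aggregates browsing queries into an interest profile, and an ad platform then serves the inventory item that best matches it:

```python
# Hypothetical retargeting sketch: browsing events -> interest
# profile -> matched advertisement.
from collections import Counter

# Events a tracker might collect across sites for one user ID.
events = [
    {"user": "u42", "site": "shop.example", "query": "running shoes"},
    {"user": "u42", "site": "shop.example", "query": "trail shoes"},
    {"user": "u42", "site": "news.example", "query": None},
]

def build_profile(events):
    """Aggregate per-user interests from tracked search queries."""
    interests = Counter()
    for e in events:
        if e["query"]:
            for word in e["query"].split():
                interests[word] += 1
    return interests

def pick_ad(interests, inventory):
    """Serve the ad whose keyword best matches the interest profile."""
    return max(inventory, key=lambda ad: interests[ad["keyword"]])

inventory = [{"keyword": "shoes", "ad": "50% off sneakers"},
             {"keyword": "laptops", "ad": "new ultrabook"}]

profile = build_profile(events)
print(pick_ad(profile, inventory))   # the shoe ad follows the user
```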

Given all the potential and danger that AI implies, it must be subject to the strictest standards of regulation and scrutiny, always prioritizing, in its application, the protection of human rights, among which is the right to privacy.

In terms of hacking, AI is also a double-edged sword: it can offer solutions to cybersecurity problems, improving antivirus tools, facilitating the identification of attacks, and automating the analysis of networks and systems as well as electronic scanning, but it can also become a very useful tool for hackers.

How?

AI is helping hackers become smarter about the way they carry out criminal activities. For example, AI is being used to conceal malicious code: once an application is downloaded, the malware does not attack instantly but waits a certain period of time, or until the application has been downloaded by a certain number of people. Until then, the malware remains dormant, shielded by AI.

Not only does AI allow malware to stay hidden and undetected; it can also be used to create malware that imitates an already trusted source and is capable of self-propagation. It can likewise be used to create fake identities for people; Instagram bots, for example, are a creation of AI.

In conclusion, AI will be another element in the technological race between hackers and the programmers of cybersecurity systems, taking that race to a much higher level than the current one and offering both sides endless possibilities for developing new tools.

5. Artificial Intelligence and Freedom of Speech and Expression:

Today, we receive most of our information online or on social media. Social media platforms and intermediaries can control what we consume, so at times we cannot make an informed choice because a biased narrative is presented to us through these platforms, impacting our freedom of speech and expression. Although at first glance these issues seem unrelated, a closer look reveals the close relationship between AI and freedom of expression.

Currently, technological tools shape the way people interact, access information, and exercise their freedom of expression. AI can affect how people carry out these activities, through search engines or social networks, for example.

Likewise, in terms of access to information, on which people base their own ideas, AI and algorithms strongly influence the supply of news, shaping to some degree the opinions and decisions of entire communities according to the wishes of their programmers.

That is why there is a latent danger that a small group of companies will be the ones manipulating, through AI systems, the information disseminated through news sites, emails, social networks, and so on.

This is especially worrying in the case of opaque AI systems, with little or no transparency, which can selectively exclude or emphasize critical information, thereby manipulating at the root the decision-making of a community, one that could be as large as a city, an entire country, or a continent.
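To illustrate how an opaque ranking objective can emphasize or bury information without deleting anything, consider this toy sketch; the stories, scores, and weighting scheme are all hypothetical:

```python
# Hypothetical feed ranker: one hidden weight decides what a whole
# community sees first.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    relevance: float    # editorial/topical quality, 0..1
    outrage: float      # predicted emotional engagement, 0..1

stories = [
    Story("Measured policy analysis", relevance=0.9, outrage=0.1),
    Story("Inflammatory rumor",       relevance=0.2, outrage=0.9),
    Story("Local community news",     relevance=0.7, outrage=0.3),
]

def rank(stories, engagement_weight):
    """Order the feed by a blend of relevance and predicted engagement."""
    return sorted(
        stories,
        key=lambda s: (1 - engagement_weight) * s.relevance
                      + engagement_weight * s.outrage,
        reverse=True,
    )

# The same inventory under two different hidden weights:
for w in (0.2, 0.8):
    top = rank(stories, engagement_weight=w)[0]
    print(f"weight={w}: top story -> {top.title!r}")
# With a high engagement weight the inflammatory item rises to the
# top -- no story was removed, only the opaque objective changed.
```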

Case Study: Social Media Platforms and AI

As has been emphasized throughout this paper, the utility AI ends up having depends on the intentions and methods of its developers and users, who could be anyone from a terrorist group to a non-governmental organization defending human rights.

The same applies to the way social networks handle hate speech and misinformation spread through them by AI systems. A recurring target of complaints is the social network Facebook, operated by the company now renamed Meta, whose spokespersons have repeatedly defended its algorithms on the grounds that they help fight these problems.

Contrary to this, The London Story Foundation has published findings on how Facebook's business model is enabling the COVID-19 “infodemic” in the Netherlands, to give just one example among the many denunciations that have been published.

According to the study, Facebook, far from combating disinformation and hate messages, contributes to these problems, at least passively. In the wake of COVID-19, Facebook introduced a policy to curb disinformation. However, it was unable to enforce the policy and thus allowed disinformation to spread widely on the platform. This affected not only people's lives; it has also been reported that election campaigns in the Netherlands were heavily impacted, with fake news and disinformation among the factors that helped far-right politicians garner votes.

The report also highlights how COVID-19 misinformation generated distrust in the measures adopted by the Dutch government, fueling anti-vaccine and anti-mask campaigns that dismissed COVID-19 as a hoax.

According to the study's results, the problem is not a failure or poor development of the AI with which Meta identifies the messages it has expressly committed to removing and reducing, but rather the company's interest in publicizing such messages, which, besides manipulating public opinion, generates large revenues, as it does for most social media companies these days.

At least, that is the intention that can be read from its algorithms, which amplify polarized narratives and do not suppress disinformation. In Facebook's case, then, the AI “fails” only to the extent that the company lacks a real commitment to fighting hate speech and disinformation; it is Meta who, in programming the system, sets limits so loose that these “failures” in content moderation follow, especially when the content is not in English.

Conclusion

We have not yet discovered the exact risks associated with AI or how it will influence our lives. Much remains to be done with respect to its regulation and its use by governments, private organizations, and individuals. So far, public opinion and regulatory responses have been relatively subdued.

While innovation is good, it has also proved capable of harm. AI does not exist by itself: a human has to program it, train it, and use it, and it bends whichever way we make it; we can control it to a certain extent. It can be a positive force in dealing with economic crises, climate change, and pandemics. On the other hand, if that software is poorly designed, trained, or used, it can be very dangerous.

AI has both advantages and disadvantages, which makes it a sometimes controversial issue requiring deep analysis from both a utilitarian and an ethical perspective.

Currently, it seems evident that the advantages of using AI are greater than its disadvantages. However, it is also evident that engineers and designers, as well as society as a whole, must take into account the aforementioned disadvantages in order to face them with the depth and solidity that the subject requires. AI is here to stay and, little by little, it permeates more and more spaces of our daily life.

Investigating the ethical limitations of intelligent system design is one step. But it’s not enough. If we are already surrounded by a “society of artificial agents”, we must also think about how social relations between intelligent systems and us are regulated ethically.

An ethical choice is not easy to make globally, in a system made up of thousands or millions of pieces of intelligent software. The problem that technology replicates here is one of the classic dilemmas of human coexistence in society.

To mitigate the ethical risks of AI, we need to take a more active role in its development. Responding to these challenges is possible through a technological humanism that puts people at the core of our efforts. Paradoxical as it sounds, if we want to keep the human being at the center of concerns about AI in our lives, we cannot fail to design a policy for robots that guides their development in a competitive, rigorous, and at the same time holistic way, one that minimizes the gaps not only between humans and machines but also between person and person.