Artificial Intelligence: Balancing Innovation and Human Rights

Introduction

Artificial intelligence (AI) is a rapidly developing technology with a profound impact on society. It is used in a wide range of applications, from healthcare to transportation to warfare, and companies and governments already deploy it to support decisions that can significantly affect societies and individuals. AI offers numerous benefits for human development, but it is also a source of potential risks. As AI becomes more powerful, it is important to consider its potential impact on human rights.

Human rights sit at the core of what it means to be human. Ensuring that AI reinforces rather than undermines them is among the major factors that will shape the world we live in. This is because AI-driven technology affects every person’s life, from social media apps to smart home appliances, and public authorities increasingly use it to allocate resources, evaluate people’s skills or personalities, and make decisions with serious, real repercussions for people’s human rights.

Thus, it is important to find the right balance between AI development and human rights protection. Understanding AI’s negative impacts on human rights is a critical starting point for devising measures that let society leverage AI’s potential while addressing the harms it poses to people’s rights.

AI, and especially its subfields of machine learning and deep learning, is neutral only in appearance. Beneath the surface, it can be deeply personal. Basing decisions on mathematical calculations can be transformative in many sectors, but over-reliance on AI can also harm users, entrench injustices, and constrain people’s rights. AI, its processes, and its systems can fundamentally change the human experience. Despite these risks, many AI governance principles fail to mention human rights, an omission that needs urgent attention.

Loss of Privacy

AI can be used to collect and analyze vast amounts of data about people, including their personal information, online activities, and physical movements. This data can then be used to track people, target them with advertising, or even discriminate against them, leading to a loss of privacy that can significantly affect people’s sense of security and well-being.

Businesses seek ways to fulfill their strategic objectives, and AI systems, while still nascent, are already suitable for some of their use cases. Yet firms have little incentive to embed privacy protections into these systems. Although major privacy breaches have generated breathless headlines in recent years, there has been little fallout for the organizations responsible. The development of AI has not prioritized privacy, even though the processing of personal data poses a high risk to people’s rights and freedoms.

Some of AI’s privacy challenges are:

  • Data spillovers: Data gathered on people who are not the targets of the data collection process
  • Data persistence: Data that outlives the human subjects who created it, encouraged by the low cost of storing data
  • Data repurposing: Data used beyond its originally intended purpose (see the sketch after this list)
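
To make the persistence and repurposing items above concrete, the following is a minimal sketch in Python of the kind of check a data controller might run; the record fields, purposes, and retention periods are hypothetical and invented for illustration rather than drawn from any real system.

```python
# Hypothetical sketch: flag records kept past the expiry of the purpose they
# were collected for ("data persistence") or requested for a purpose other
# than the one declared at collection time ("data repurposing").
# All names, purposes, and retention periods below are invented.
from __future__ import annotations
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Record:
    subject_id: str
    collected_on: date
    purpose: str          # purpose declared to the data subject at collection
    retention_days: int   # how long that purpose justifies keeping the data


def flag_persistence(records: list[Record], today: date) -> list[Record]:
    """Return records held longer than their declared purpose allows."""
    return [r for r in records
            if today - r.collected_on > timedelta(days=r.retention_days)]


def is_repurposing(record: Record, requested_purpose: str) -> bool:
    """True if a requested use does not match the declared purpose."""
    return requested_purpose != record.purpose


if __name__ == "__main__":
    records = [
        Record("u1", date(2020, 1, 15), "fraud_detection", retention_days=365),
        Record("u2", date(2024, 6, 1), "order_fulfilment", retention_days=730),
    ]
    print(flag_persistence(records, date(2025, 1, 1)))         # u1 kept too long
    print(is_repurposing(records[1], "targeted_advertising"))  # True: repurposing
```

Even this toy check shows that respecting these principles requires recording a purpose and a retention period at collection time, which many AI data pipelines simply do not do.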

AI-driven data collection raises privacy issues, such as informed consent, voluntary participation, and the ability to opt out freely. An example is the violation of Canadian privacy laws by the US company Clearview AI. The company collected photographs of Canadian adults and children for facial recognition and mass surveillance, without consent and for commercial sale, in violation of people’s privacy. An investigation by Canada’s Office of the Privacy Commissioner (OPC) found that this scraping of information violated the terms of service of, for instance, Venmo, YouTube, Facebook, and Instagram (section 6.ii.). And although Clearview AI claimed that the information could be freely found on the internet and that consent was therefore not required, the OPC established that express consent is indeed needed when sensitive biometric information is involved or when its disclosure or use falls outside the individual’s reasonable expectations.

Data is often described as AI’s lifeblood, and some of the most sensitive data is protected health information (PHI) and personally identifiable information (PII). Thus, it is important to determine the extent to which AI uses PHI and PII, including biometrics. Retaining and repurposing PII becomes a privacy issue once the original purpose for collecting and processing the data has expired.

Furthermore, invasive surveillance can promote unauthorized data collection and erode individual autonomy, compromising sensitive PII and leaving people susceptible to cyberattacks. Such issues are aggravated by the influence of Big Tech companies, which hold large quantities of data and exert substantial control over how data is collected, analyzed, and used.

An example of a violation of people’s privacy is Google’s location tracking. The company’s location-tracking practices have come under intense scrutiny in recent years. Google has tracked users’ locations even when they did not give explicit permission to share them. This came to light in 2018, when an Associated Press investigation established that Google services continued storing location data even after users had turned off location tracking. The practice clearly breached user privacy and trust, exposing the company to significant backlash from users and privacy advocates. A major concern with these practices is that they pave the way for potential misuse of personal data.

Data Protection

AI systems are often trained on large datasets, which can be vulnerable to hacking and other forms of data breaches. This can lead to the unauthorized disclosure of personal information, with a significant impact on people’s right to confidentiality and integrity. AI systems also violate many of the principles of the EU’s General Data Protection Regulation (GDPR). The GDPR is premised on seven principles that guide data protection: lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.

A key challenge of AI technology is the potential for bad actors to misuse it. AI can be used to create convincing fake videos and images for spreading misinformation or manipulating public opinion, and to craft highly sophisticated phishing attacks that trick people into disclosing sensitive information or clicking on malicious links. Creating and disseminating fake images and videos can seriously impinge on privacy and data protection, because fabricated media usually features real people who may not have consented to the use of their images in this way. This violates the principle of lawful, fair, and transparent use of data, which requires intended data use to be disclosed clearly and explicitly. The intention of this principle is to create transparency in data sharing so that stakeholders are aware of how their data is processed.

Cambridge Analytica’s psychographic profiling of Facebook users during the 2016 US presidential election, and its privacy implications, illustrate how people’s confidence in organizations has been eroded: they feel that their privacy is often violated and their data is not protected. The case also shows that the companies involved violated data protection principles, particularly purpose limitation. The purpose limitation principle states that data should be collected for explicit, legitimate, and specified purposes, yet Cambridge Analytica processed data in a way that was incompatible with the purposes for which it was collected. The principle emphasizes that data should not be stored and repurposed for ends other than those initially disclosed to the data subjects. The case also violated the data minimization principle, which limits data collection and use to what is essential.

The Cambridge Analytica scandal also demonstrated how personal data collection by AI can infringe on confidentiality. The data was used to manipulate elections, meaning that it was not protected against unlawful or unauthorized processing or accidental loss. Confidentiality means maintaining a customer’s privacy and handling their data and information discreetly and respectfully, but the scandal failed to observe this principle of data protection.

The right to data protection against unintended use is also violated through surveillance. AI can be used to automate surveillance, allowing governments and corporations to monitor people’s activities on a massive scale, which can be used to suppress dissent or target people for harassment or persecution. Persistent surveillance, the continuous observation of a location or an individual, is popular among law enforcement agencies seeking information about a suspect or adversary. However, the constant monitoring and collection of vast amounts of personal information can lead to privacy breaches and violations of individual rights. Automated surveillance systems often operate without clear guidelines and oversight, creating a risk of indiscriminate data gathering and misuse. Furthermore, the widespread adoption of automated surveillance can foster a culture of constant monitoring, eroding trust and privacy in society.

Freedom of Expression

AI can be used to censor content and block access to websites and apps, which can serve to suppress dissent, control the flow of information, or even promote hate speech. For example, AI could be used to block access to news websites, social media platforms, or even educational resources. The Council of Europe’s Algorithms and Human Rights report noted that YouTube and Facebook had adopted filtering mechanisms for detecting violent extremist content. However, no information is available on the criteria or process used to determine which videos show “clearly illegal content.”

While the initiative could be considered a good step toward halting the dissemination of such material, the lack of transparency around content moderation raises concerns, since it can be used to restrict legitimate free speech and encroach on people’s capacity to express themselves. Similar concerns have been raised about the automated filtering, at the point of upload, of user-generated content that allegedly violates intellectual property rights (IPRs). In certain circumstances, using AI to control the dissemination of content can significantly affect freedom of expression. The tension between human rights and technology also manifests in the field of facial recognition.

While facial recognition can be an influential tool for helping law enforcement find suspected criminals, it can also become a weapon for controlling people. It has become increasingly easy for governments to watch people permanently and limit their rights to privacy, press freedom, and freedom of assembly. Search engines, social networking sites, and other websites increasingly use AI systems to control the information users encounter online and to make recommendations based on their preferences, from music selection to online shopping.

Furthermore, AI can affect freedom of expression through content moderation and self-censorship. Content moderation practices driven by AI algorithms can create an environment where individuals and groups feel compelled to engage in self-censorship because they perceive themselves to be under constant surveillance. The fear of repercussions or punishment for expressing certain ideas or opinions can chill free speech, limiting the diversity of voices and ideas within society. This self-censorship phenomenon can undermine the principles of open discourse and robust public debate that are crucial for a healthy society. Also, AI systems that are not trained on minority groups’ languages or related slang may censor legitimate speech.
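
As a minimal illustration of how context-blind moderation over-blocks, the short Python sketch below shows a simple word-list moderator removing a reclaimed, in-group use of a term just as readily as an abusive one; the blocklist, the placeholder term, and the posts are all invented for illustration and stand in for the far more complex filters platforms actually deploy.

```python
# Hypothetical sketch: a word-list content filter has no notion of context,
# so an in-group, reclaimed use of a term is flagged exactly like an attack.
# "flagged_term" stands in for a real slur; posts and blocklist are invented.
BLOCKLIST = {"flagged_term"}


def moderate(post: str) -> str:
    # Normalize tokens by stripping trailing punctuation and lowercasing.
    tokens = {t.strip(".,!?").lower() for t in post.split()}
    return "REMOVED" if tokens & BLOCKLIST else "ALLOWED"


posts = [
    "You are a flagged_term and should leave.",                       # abusive use
    "Proud to be a flagged_term marching with my community today!",   # reclaimed use
]
for p in posts:
    print(moderate(p), "->", p)   # both posts are removed
```

Both posts are removed, so the speaker from the targeted community is silenced alongside the abuser, which is the self-censorship dynamic described above.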

Hate Speech, Propaganda, and Disinformation

Online hate speech is broadly acknowledged as a societal issue, but defining what qualifies as hate speech is difficult. There is a lack of clear legal criteria for distinguishing hate speech from speech that is hurtful or offensive but protected under freedom of expression. To date, identifying and removing hate speech online has been a burdensome task for human content moderators.

AI can be used to amplify hate speech and spread misinformation, which can contribute to social unrest and violence. For example, AI could be used to create fake news articles, spread propaganda, or even incite violence. A study released in January 2023 by OpenAI and the Center for Security and Emerging Technology observed that AI’s disruptive potential has led some to describe such systems as weapons of mass disruption.

During the last two decades, technological innovation has enabled extremist organizations to access new tools, including sophisticated cameras, editing software, and microphones to generate Hollywood-style propaganda. Terrorist organizations have also been using newer digital media forms for their propaganda that enable targets to interact directly with the content.

Regarding disinformation, certain chatbots, including companion bots, are often developed to generate empathetic responses, which makes them appear quite believable. According to OpenAI’s CEO, Sam Altman, there are concerns that these models could be used for large-scale disinformation.

Moreover, AI can be used to create deepfakes, which are videos or audio recordings that have been manipulated to make it look or sound like someone is saying or doing something they never actually said or did. This could be used to spread misinformation, damage someone’s reputation, or even interfere in elections. For example, a deepfake could be used to make it look like a politician is saying something they never actually said.

In sum, AI can play a role in taming hate speech, but it is increasingly being used for disinformation, propaganda, misinformation, and the unfair targeting of individuals.

Right to Equality and Non-Discrimination

AI systems are trained on data created by humans, and this data can reflect human biases. This can lead to AI systems that make discriminatory decisions, such as denying people loans or jobs based on their race or gender. For example, an AI system trained on data from a biased hiring process is more likely to make discriminatory decisions about whom to hire. The use of AI in recruitment and hiring has recently become common, as organizations turn to AI-powered tools for screening and selecting job candidates because of perceived benefits such as increased objectivity and efficiency. Yet these tools raise serious concerns about bias and fairness. For example, Amazon’s AI-powered recruitment tool was found to discriminate against women because the system had been trained on resumes from predominantly male candidates. The case highlighted AI’s potential to perpetuate existing discrimination and biases, and the need to test such systems carefully to ensure that they do not. Similar systems can also disenfranchise minority groups in other domains, such as access to credit.
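
To illustrate the mechanism rather than Amazon’s actual system, the following is a minimal synthetic sketch in Python, assuming numpy and scikit-learn are available; the data, the gender proxy feature, and the model are invented for illustration. A classifier trained on historical decisions that disfavored women learns to reproduce that disadvantage for equally qualified candidates.

```python
# Hypothetical sketch: a screening model trained on biased historical hiring
# decisions learns the bias. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)           # true qualification signal
is_woman = rng.integers(0, 2, n)      # proxy feature visible to the model

# Historical decisions: driven by skill, but women were hired less often
# at the same skill level. This is the bias hidden in the training labels.
logit = 1.5 * skill - 1.2 * is_woman
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([skill, is_woman]), hired)
print("learned coefficients [skill, is_woman]:", model.coef_[0])

# Two equally qualified candidates who differ only in the proxy feature:
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print("predicted hire probability (man, woman):",
      model.predict_proba(candidates)[:, 1])
```

The learned coefficient on the proxy feature is negative, so the model assigns the woman a lower hire probability despite identical qualifications: the historical bias is simply reproduced, and auditing for exactly this kind of effect is what careful testing means in practice.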

Furthermore, AI can be used to create social credit systems that track and monitor people’s online behavior and spending habits and reward or punish them accordingly. This could lead to a loss of freedom and autonomy, as people are constantly monitored and judged by AI systems. For example, China’s social credit system seeks to rank citizens, organizations, and government entities by trustworthiness. The accumulated score can affect an individual’s opportunities in life and provides benefits to those who score highly.

In addition, banks and other lenders are turning to AI to develop sophisticated credit risk scoring models. While credit-scoring firms are legally forbidden from considering factors such as ethnicity or race, critics argue that the models carry hidden biases against minority and disadvantaged communities, limiting their ability to access credit. For example, one study established that minority borrowers and lower-income families are disadvantaged by AI credit-scoring models, finding that these predictive tools were 5% to 10% less accurate for such groups than for dominant and higher-income groups. The scoring algorithms themselves are not necessarily biased against these borrowers; rather, the underlying data is less able to predict their creditworthiness, mainly because they have limited credit histories. Notably, a “thin” credit history lowers a person’s score, since lenders prefer more data. AI therefore means that banks are working with flawed data for all kinds of historical reasons. For a person with only a single credit card and no mortgage history, there is much less information to help lenders predict whether they will default, and a single default some years ago may say little about their future behavior.
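
The thin-file point can also be illustrated with a minimal synthetic sketch, again assuming numpy and scikit-learn and using invented data rather than the cited study’s: a single scoring model with no access to group membership is still less accurate for borrowers whose observable histories carry less signal.

```python
# Hypothetical sketch: the "thin file" problem. One model, no group feature,
# yet accuracy is lower for borrowers with noisier (thinner) credit histories.
# All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 20000
thin_file = rng.random(n) < 0.3          # 30% of borrowers have thin histories
true_risk = rng.normal(0, 1, n)          # latent creditworthiness
default = rng.random(n) < 1 / (1 + np.exp(-true_risk))

# The observable history score is a noisy view of true risk; thin files are noisier.
noise_scale = np.where(thin_file, 2.0, 0.5)
history_score = true_risk + rng.normal(0, noise_scale)

model = LogisticRegression().fit(history_score.reshape(-1, 1), default)
pred = model.predict(history_score.reshape(-1, 1))

print("accuracy, thick-file borrowers:",
      accuracy_score(default[~thin_file], pred[~thin_file]))
print("accuracy, thin-file borrowers: ",
      accuracy_score(default[thin_file], pred[thin_file]))
```

Because the history signal is noisier for the thin-file group, the same model misclassifies them more often, mirroring the accuracy gap the study describes even though the algorithm never sees group membership.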

Moreover, AI can be used to create “black boxes,” systems so complex that it is impossible for humans to understand how they work. This can lead to a loss of transparency and accountability, as people are unable to understand how AI systems make decisions. Today, people often put a lot of faith in these faceless algorithms, yet ultimately surrender more of their data to entities that are less enlightened than once believed. For instance, in 2015, software engineer Jacky Alcine highlighted that image recognition algorithms in Google Photos were categorizing his black friends as “gorillas.” In 2021, Google claimed that it had fixed the racist algorithm, but only by removing gorillas from its technology entirely, a workaround that can be attributed to its inability to understand why the problem occurred. The example demonstrates how algorithms can reinvigorate historical discrimination and reinforce it in society. That Google, a forerunner in AI, was unable to overcome such an issue shows the deep complexities of algorithms and machine learning. As a saying in the coding world goes: “If you input garbage, you will get garbage. If you input bias, even if unconsciously, you will output bias at the other end.”

While AI systems are claimed to evaluate all persons in the same way and thereby avoid discrimination, Frank Pasquale argues that human values and prejudices are embedded into every phase of development. The concept of the black box also suggests that it is nearly impossible to ascertain whether an algorithm is fair, since such systems are simply too complex to understand. AI systems are usually treated as proprietary information, with laws designed to protect their owners from sharing their programs’ intricacies. For example, Eric Loomis’ request to review the inner workings of COMPAS, a risk assessment algorithm used in criminal justice, was denied by Wisconsin’s highest court in 2016. The algorithm had deemed Loomis high-risk, and the court sentenced him to six years in prison. Loomis contested that the judge relied on an opaque algorithm, violating his right to due process. In addition, when two law professors probed how such scoring is used in state criminal justice systems, they established that the information is hidden behind nondisclosure agreements. This shows that black box algorithms can produce damning sentences and discrimination, because they are unaccountable and lack transparency.

Relatedly, AI’s use in law enforcement exacerbates existing forms of discrimination. An example is the use of predictive policing software to predict likely crimes and criminals. While this is a promising technology, it has been increasingly scrutinized for reinforcing existing prejudices and perpetuating biases. For instance, some predictive policing systems such as PredPol unfairly target minority communities, resulting in discrimination and racial profiling.

In addition, facial recognition technology uses algorithms to match images of individuals’ faces against a database of known persons, enabling law enforcement not only to identify but also to track people in real time. While facial recognition can potentially help law enforcement solve complex crimes, it raises issues regarding civil liberties and privacy, and in some instances these systems misidentify people, resulting in false accusations and wrongful arrests.

Overall, as law enforcement agencies adopt AI technologies, there is a potential for them to aggravate and perpetuate existing societal injustices and biases. AI’s use in law enforcement also raises questions about accountability and transparency: it can be difficult to comprehend how such systems operate and make decisions, making it important to develop oversight mechanisms and regulations that ensure their use is ethical and transparent and respects individual freedoms and rights.

Right to Work and Economic Inequalities

AI can automate numerous tasks currently done by humans, resulting in widespread job displacement. This can have a substantial impact on people’s livelihoods and the economy; the potential for economic disruption and job loss is a key challenge posed by AI technology.

As artificial intelligence systems become increasingly advanced, they can perform tasks previously done only by humans. This can result in job displacement, economic disruption in certain sectors, and the need for people to retrain for new roles. The economic disruption attributed to AI technology can increase workers’ financial insecurity and, consequently, create situations where people are compelled to sacrifice their privacy to make ends meet.

AI further poses a serious challenge to an already threatened segment of the workforce: uneducated and low-skilled workers. Based on historical trends and existing AI capabilities, the rise of AI may displace low-skill and entry-level jobs, widening the divide between unspecialized and specialized workers in modern society. While some new employment opportunities will be created, as has happened in the past, they may not be enough for everyone; many office workers, for example, could be prone to downward mobility. The use of AI in the hiring process also raises major concerns. For instance, some organizations use AI algorithms to screen job applicants, analyzing their online behavior or social media activity to decide their suitability for a role. This raises concerns about privacy and about the accuracy of the information used, since job applicants may be unaware that such data is being collected and used in this way.

Furthermore, the benefits of AI are likely to be concentrated in the hands of a few wealthy individuals and corporations, which could increase economic inequality and exacerbate social unrest and conflict. AI has produced various products and services for humanity and created brilliant careers for data scientists and big data architects, giving these people ample opportunities to join the ranks of the wealthy. Capitalists, too, gain opportunities for future development through investment and capital operations in AI. The process illustrates a phenomenon whereby the rich become richer while the poor become poorer: AI makes wealth more concentrated. Only large firms have the economic might to invest in AI, making capital the economic foundation of AI’s development, and the patent rights these large companies obtain will bring them further long-term benefits.

Loss of Humanity

As AI becomes more powerful, it is feared that it could eventually surpass human intelligence. This could lead to a loss of humanity, as AI systems make decisions that are in their own best interests, rather than in the best interests of humanity.

Optimists believe that AI is the panacea for many fundamental issues afflicting society, from inequality to corruption to crime. Pessimists, in contrast, worry that AI will overtake human intelligence and proclaim itself king of the world. Underlying both views is the assumption that AI is smarter than humanity and will eventually substitute for humans in decision-making. However, AI is not smarter than humans. According to critics, AI’s true threat to humanity lies not in its power, but in the way people are already starting to use it to destroy our humanity. Where AI outperforms humans, studies have found that this is mainly on narrow, low-level tasks; today’s AI is still constrained to specialized tasks such as generating sentences, classifying images, and recognizing patterns. And while AI helps humans, it also reduces the opportunities humans have to exercise their own intelligence.

Right to Life and Liberty

AI can adversely affect human life, security, and liberty. High-risk AI systems include those employed in critical infrastructure, such as transportation, which have the potential to compromise people’s health and safety. They also include safety components of products, such as AI applications used in robot-assisted surgery.

Studies have raised life-threatening concerns about the deployment of robotic systems and robot-assisted procedures in surgery, robot accidents in manufacturing, and security vulnerabilities in smart home hubs and self-driving (autonomous) vehicles.

AI can be used to create autonomous weapons that kill without human intervention, raising the risk of mass casualties and the proliferation of conflict. For example, AI-powered drones could be used to carry out targeted assassinations or to wage war on a large scale, and autonomous weapons and AI-armed drone swarms could lead to mass deaths.

Various cases demonstrate how AI adversely affects the right to life, security of persons, and liberty.

  • Case 1: Tesla Fatal Crash

A Tesla was the first widely publicized self-driving car involved in a fatal crash, in May 2016. The collision between the car and a tractor-trailer resulted in the instant death of the Tesla driver, while the truck driver was not injured. Tesla stated that the car’s sensor system failed to detect the trailer of the 18-wheel truck crossing the highway. The Florida Highway Patrol examined the crash and concluded that the Tesla driver did not take evasive action and that the truck driver did not give the right of way while turning left. The driver had put the car in Tesla’s Autopilot mode, an advanced driver assistance system intended to improve convenience and safety behind the wheel. Although Tesla clarified that the software was designed to prompt consumers to keep their hands on the wheel and pay attention, this did not happen in this case, and the crash was fatal. Other accidents involving automated driving technology include an Uber self-driving test vehicle hitting and killing a pedestrian in 2018 and the death of two men in Texas in 2021 after the Tesla they were in veered off the road and hit a tree.

While studies suggest that self-driving cars are safer than human-driven ones, this case and others illustrate the safety challenges they pose. Other standard issues raised about self-driving cars relate to security and responsibility.

  • Case 2: Security Vulnerabilities of Smart Home Hubs 

A smart home hub enables the user to engage with the hub remotely using a smartphone. However, smart home hubs are exposed to security vulnerabilities and attacks, and unauthorized access can threaten human life and health. For instance, a breach or loss of control of a smoke detector, smart lock, or thermostat can endanger human life. Exposing smart homes to such threats and vulnerabilities can facilitate criminal intrusion, posing a danger to people’s lives, property, and security.

Conclusion

The negative impacts of AI on people’s lives are increasingly being documented. While the focus has mainly been on jobs, employment, health, and education, it is important to also examine the negative impacts of AI on human rights.

AI continues to redefine the meaning of ‘humanness.’ However, human rights have been largely disregarded in AI’s governance.

The human rights to data protection and privacy, freedom of expression and information, non-discrimination, work, life, and liberty are important for ensuring that AI’s governance benefits all persons. Human rights treaties and principles impose duties on governments to protect these rights and on organizations to comply with them.

Overall, international organizations, civil society, investors, companies, and governments should work to ensure that human rights form the basis of AI development and governance. Efforts could include championing human rights, establishing processes and standards for implementing human rights law, and fostering inclusive discussion. Remedies should also be put in place in case of breaches.