How Can AI Applications Be Biased


Artificial intelligence (AI) has contributed to enormous advances across many fields. However, one of the most significant problems to arise in recent years is discrimination by some AI applications on the basis of gender or race, to the extent that such applications violate individuals' rights and shape the course of their lives. Masaar – Technology and Law Community has published two episodes of its CONNECT podcast discussing bias and racism in AI applications and algorithms.

How does AI work?

AI technologies allow computers to imitate human intelligence, drawing on theories of human and rational thinking and behavior. Generally, AI works by collecting large quantities of data and feeding them into a system, where smart algorithms process them, allowing the program to learn the patterns and characteristics of the data and the information extracted from them. The machine later uses this learning to improve its performance on the task it is designed to accomplish. For example, a chat application based on AI can be fed samples of textual chats, so it learns to produce human-like responses to the people chatting with it. Similarly, a photo recognition tool can learn to recognize people or objects in photos and describe them. Some of the most widespread AI fields and methodologies are:

  • Machine Learning, or learning by experience, provides computer systems with the ability to learn and improve on their own. This is done by developing algorithms capable of analyzing data and making predictions based on it. Machine learning is used on a large scale in many fields, from health care and pharmacology to entertainment applications like Netflix's recommendations for its users.
  • Deep Learning, or self-learning, where AI applications rely on artificial neural networks. Algorithms are developed so the machine can teach itself by imitating human neurons in processing data and making decisions, so its behavior becomes similar to human action. Siri and Alexa are two applications that use deep learning.
  • Neural Networks: as mentioned above, these are used in deep learning. They are computing systems modeled on the neural networks of the human brain. These systems learn by analyzing huge quantities of data many times over to find and identify patterns and connections not previously determined, enabling the machine to draw a single conclusion from several inputs. For example, a machine can be trained to recognize a specific object in photos by inputting a large number of photos containing that object, after which it can recognize the object in new photos.
  • Cognitive Computing: seeks to reconstruct and imitate the process of human thinking in a computer model, such as understanding human language or the meaning of photos. Cognitive computing and AI work together to give machines human-like behavior and human-like capabilities for processing information, enhancing the interaction between humans and machines.
  • Natural Language Processing: where a machine can process, interpret, and use natural human languages, enabling it to interact with humans while understanding context and producing reasonable responses. An example is Skype's translator, which interprets speech in several languages in real time to facilitate communication.
  • Computer Vision: a technology that uses deep learning to identify, process, and interpret visual data, including photos, charts, tables, and PDF documents, as well as text and videos.
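The pattern-learning idea behind machine learning can be sketched with a toy example. The following is a minimal illustration, not any production system: a 1-nearest-neighbor classifier in pure Python that "learns" from labeled examples and classifies a new sample by finding its closest training example. All names and numbers are invented for illustration.

```python
import math

# Toy training data: (height_cm, weight_kg) labeled "cat" or "dog".
# The numbers are made up purely for illustration.
training_data = [
    ((25, 4), "cat"),
    ((30, 5), "cat"),
    ((55, 20), "dog"),
    ((60, 25), "dog"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(sample):
    """1-nearest-neighbor: label a sample like its closest training example."""
    closest = min(training_data, key=lambda item: distance(item[0], sample))
    return closest[1]

print(predict((28, 4.5)))  # closest to the cat examples
print(predict((58, 22)))   # closest to the dog examples
```

The system has no built-in notion of "cat" or "dog"; everything it "knows" comes from the examples it was given, which is exactly why the quality and balance of training data matter so much.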

How can AI be biased?

AI bias is caused primarily by the biased data the system is trained on. The presence of racist assumptions, and the reflection of society's racism in AI algorithms, also make such systems behave in ways that mirror the intolerance and bias existing in society.
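How biased training data produces a biased model can be shown with a deliberately simple sketch. The records, group names, and decision rule below are all made up for illustration: a model that "learns" only each group's historical approval rate will faithfully reproduce whatever skew the history contains.

```python
# Hypothetical historical records: each pairs a group label with whether
# a past (possibly biased) human decision approved the person.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(group):
    """'Learn' by computing each group's approval rate in the history."""
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

def predict(group, threshold=0.5):
    """Approve anyone from a group whose past approvals exceed the threshold."""
    return approval_rate(group) >= threshold

print(predict("group_a"))  # True  - the model inherited the historical skew
print(predict("group_b"))  # False - rejected regardless of individual merit
```

Nothing in the algorithm is explicitly discriminatory; the bias enters entirely through the data, which is the mechanism behind the real-world cases below.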

Amazon's employee recruitment crisis

One of the most famous examples of AI bias was the AI system Amazon built in 2014 to help with recruitment, only for the company to discover that the new system discriminated against women.

Amazon formed a team of programmers to work on automating the recruitment process. The team fed the AI system a large set of CVs submitted to Amazon over 10 years. Based on that data, the system was supposed to recommend and choose the best candidates for new jobs.

A year later, the engineers noticed that the system discriminated against women. After reviewing the CVs used to train it, the company found that most of them came from men. Accordingly, the AI system assigned low scores to CVs containing female names or other details associated with women, such as participation in a women's sport.
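The mechanism at work in the Amazon case can be sketched with a toy word-scoring model. The CVs, words, and scoring rule below are invented and loosely inspired by the case; none of this is Amazon's actual data or method. A scorer trained on male-dominated hiring history ends up penalizing words that merely correlate with women.

```python
from collections import Counter

# Made-up historical CVs: hiring outcomes skewed toward male applicants.
hired = [
    "software engineer java mens chess club",
    "developer python mens soccer team",
    "engineer cpp linux",
]
rejected = [
    "software engineer java womens chess club",
    "developer python womens soccer team",
]

def word_scores(hired_docs, rejected_docs):
    """Score each word: (count in hired CVs) minus (count in rejected CVs)."""
    scores = Counter()
    for doc in hired_docs:
        scores.update(doc.split())
    for doc in rejected_docs:
        scores.subtract(doc.split())
    return scores

def cv_score(cv, scores):
    """Rank a CV by summing the learned scores of its words."""
    return sum(scores[w] for w in cv.split())

scores = word_scores(hired, rejected)
print(scores["womens"])  # negative: the word itself became a penalty
print(cv_score("software engineer java womens chess club", scores))
print(cv_score("software engineer java mens chess club", scores))
```

Two CVs identical in every qualification are ranked differently solely because of a gendered word, mirroring how the real system downgraded CVs on irrelevant grounds.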

Criminal risk assessment

Another example of AI bias occurred in the same year (2014), when an American girl named Brisha Borden and her friend found an unlocked bicycle and scooter in the street and rode them. The 18-year-olds soon realized the bicycle and scooter were too small for them, so they left them in the street and walked away. At that moment a woman came running after them, shouting, "Those are my kid's things!" A neighbor who witnessed the incident had already called the police. Brisha and her friend were arrested and charged with burglary and theft of items worth $80. Brisha had a criminal record of misdemeanors committed as a juvenile.

The year before this incident, police had arrested a 41-year-old man named Vernon Prater for shoplifting $86.35 worth of tools. This was not Prater's first crime: he had been convicted of attempted armed robbery and had spent five years in jail.

During their trials, an AI system was used to perform what is called a risk assessment on both of them, predicting the probability that each would commit crimes in the future; such assessments are also used to decide on releasing defendants at one stage of the criminal justice process. Brisha, who is Black, was categorized by the system as high risk, while Vernon, who is white, was categorized as low risk. Two years after the trials, Brisha had not been charged with any new crimes, while Vernon was serving an eight-year sentence for breaking into a store and stealing electronic devices worth thousands of dollars.

The AI system's bias against Brisha and other Black people resulted from the biased data fed into it, which indicated that they are more likely to commit crimes; as a result, Black defendants were categorized as high risk at roughly twice the rate of white defendants, who were more often considered low risk. AI bias occurs when the data itself is biased. Such biases are dangerous because they deepen discrimination based on gender and reinforce sexist and racial stereotypes, which can be reflected in the labor market, education, internet ads, social media, taxation, and the justice system. There is therefore a need to test and assess AI systems, and to train them on balanced data, in order to reduce bias and improve their accuracy.
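The kind of disparity described above is commonly measured by comparing false positive rates between groups: how often people who did not reoffend were nonetheless labeled high risk. The confusion counts below are invented for illustration and are not the figures from any real study, but the calculation itself is the standard one.

```python
# Made-up counts per group (not real study data):
# "false_pos" = labeled high risk but did not reoffend;
# "true_neg"  = labeled low risk and did not reoffend.
outcomes = {
    "black_defendants": {"false_pos": 45, "true_neg": 55},
    "white_defendants": {"false_pos": 23, "true_neg": 77},
}

def false_positive_rate(group):
    """FPR = FP / (FP + TN): share of non-reoffenders wrongly flagged high risk."""
    c = outcomes[group]
    return c["false_pos"] / (c["false_pos"] + c["true_neg"])

for group, counts in outcomes.items():
    print(group, round(false_positive_rate(group), 2))
```

A large gap between the two groups' rates is one common signal that a scoring system is biased, which is why auditing deployed systems against metrics like this is part of the testing the paragraph above calls for.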
