Non-discrimination

The importance of non-discrimination in AI systems

AI systems should be developed and used in a way that does not discriminate against individuals or groups based on their race, ethnicity, gender, religion, or other characteristics. Fair and non-discriminatory AI helps ensure that AI systems are equitable and just, treating all persons fairly and without bias. This is particularly pertinent in domains where AI systems make decisions with significant effects on people's lives, such as hiring and promotion or the criminal justice system. Non-discrimination in AI systems is also important for promoting trust and increasing the adoption of these systems: when people believe that AI systems are biased or discriminatory, they are less likely to trust them and more likely to resist their adoption. Finally, it matters for moral and ethical reasons. AI systems should not amplify or perpetuate existing discrimination or biases but rather strive to foster justice and equality.

How AI systems can perpetuate discrimination

Discrimination in AI systems can occur intentionally or unintentionally. Intentional discrimination occurs when conditions or rules with a discriminatory outcome are deliberately embedded into an algorithm, for example a rule that automatically rejects women’s loan applications. In many cases of discriminatory AI systems, however, the developers do not set out with that intention; instead, discrimination inadvertently results from the development process. This can happen in two major ways. The first is imbalanced training data, where a certain group of individuals is underrepresented in the training data set. This can lead to discrimination because the algorithm lacks adequate data about that group to make accurate decisions: when one section of society is over-represented, a machine learning model pays more attention to the statistical relationships that predict outcomes for that group and less to the patterns that predict outcomes for under-represented groups. Secondly, if the training data reflects past discrimination, the algorithm will learn to discriminate in the same way. This is especially problematic in domains where discrimination has historically been a problem, such as policing or recruitment.
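To make the effect of imbalanced training data concrete, here is a minimal sketch (assuming scikit-learn and NumPy, with entirely synthetic data and a hypothetical group attribute) that trains a single classifier on a dataset in which one group is heavily over-represented and then compares accuracy per group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, group_id):
    """Synthetic individuals: each group's outcome depends on a different feature."""
    X = rng.normal(size=(n, 2))
    # Group 0's outcome is driven by feature 0, group 1's by feature 1.
    y = (X[:, group_id] > 0).astype(int)
    g = np.full(n, group_id)
    return X, y, g

# Group 0 is heavily over-represented in the training data (95% vs 5%).
X0, y0, g0 = make_group(9500, 0)
X1, y1, g1 = make_group(500, 1)
X = np.vstack([X0, X1])
y = np.concatenate([y0, y1])
g = np.concatenate([g0, g1])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)

# The single model fits the majority group's pattern and under-serves the minority group.
for group_id in (0, 1):
    mask = g_te == group_id
    print(f"group {group_id}: accuracy = {model.score(X_te[mask], y_te[mask]):.2f}")
```

Because the minority group's outcomes follow a different pattern in this toy setup, the model fitted mostly to the majority group performs markedly worse for the minority group, mirroring the under-representation problem described above.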

Examples of discriminatory AI systems

Discrimination against women by Amazon’s algorithm: Amazon’s automated recruitment system was designed to evaluate applicants’ suitability for a range of roles. The system learned what made a candidate suitable by examining resumes from previous applicants. Unfortunately, the process became biased against women. Because women had previously been underrepresented in technical roles, the system inferred that male applicants were preferable, and female applicants’ resumes were penalized with lower ratings.

COMPAS Race Bias: AI systems can also reflect racial bias. For example, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool used AI to predict which US defendants were most likely to reoffend. A 2016 investigation by ProPublica found that the COMPAS system was more likely to rate Black defendants as being at high risk of reoffending than their white counterparts.

US Healthcare Algorithm: A US healthcare algorithm underestimated the needs of Black patients, demonstrating how AI can also reflect racial bias in healthcare. Applied to more than 200 million individuals, the algorithm was designed to predict which patients required extra medical attention. Assuming that cost indicated a person’s healthcare needs, it analyzed their healthcare cost history. However, this assumption failed to account for the different ways in which white and Black patients pay for healthcare. According to a 2019 paper in Science, Black patients are more likely to pay for active interventions, such as emergency hospital visits, even when showing signs of uncontrolled illness. As a result, Black patients received lower risk scores, were treated as comparable in cost to healthier white patients, and did not qualify for extra care to the same extent as white patients with similar needs.

Recommendations for preventing discrimination in AI systems

Conducting an audit or assessment of AI systems is among the best ways of combating discrimination, and impact assessments can play a pivotal role here. Besides Data Protection Impact Assessments (DPIAs) and Algorithm Impact Assessments (AIAs), bias audits can be a valuable technique for assessing AI systems that are already in use. Rather than examining the algorithm itself, they compare the data fed into the system with its outputs. Also called ‘black box testing,’ these audits can be done in three ways: a scraping audit, a sock puppet audit, and a collaborative/crowdsourced audit. A scraping audit entails an auditor writing a program that makes repeated requests to a website and observes the results. A sock puppet audit involves creating fake user accounts to observe how the system behaves. Finally, a collaborative or crowdsourced audit works like a sock puppet audit but recruits real people to create the accounts. An example is Who Targets Me, a free browser extension that allows users to track political ads on Facebook using a crowdsourced global database of political adverts placed on social media.
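To illustrate the core black-box idea of comparing inputs with outputs, the following sketch is a hypothetical sock-puppet-style check. It assumes the system under audit is reachable through a `predict` function (in practice this would be a live website or API), submits matched profiles that differ only in a protected attribute, and compares outcomes across groups.

```python
import pandas as pd

def predict(profile: dict) -> bool:
    """Stand-in for the system under audit (normally a remote API or live website).
    This toy rule is deliberately biased so the audit has something to find."""
    return profile["salary"] > 30000 and profile["gender"] != "female"

# Matched "sock puppet" profiles: identical except for the protected attribute.
base_profiles = [
    {"salary": 45000, "years_employed": 5},
    {"salary": 32000, "years_employed": 2},
    {"salary": 60000, "years_employed": 10},
]

records = []
for base in base_profiles:
    for gender in ("female", "male"):
        profile = {**base, "gender": gender}
        records.append({**profile, "approved": predict(profile)})

results = pd.DataFrame(records)

# Compare outcomes across groups: a large gap in approval rates flags potential discrimination.
print(results.groupby("gender")["approved"].mean())
```

The scraping, sock puppet, and crowdsourced variants described above differ mainly in how the inputs and outputs are collected; the comparison step is essentially the same.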

Having clear procedures and policies for data testing and the procurement of high-quality training data is also crucial to safeguarding against discrimination by an AI system. Organizations should satisfy themselves that the collected or procured data represents the population the system will operate on. In addition, the system’s life cycle should include checks and key performance metrics to ensure that it continues to yield fair results.
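One way such life-cycle checks might be operationalized is to compute per-group key performance metrics on each batch of logged decisions and flag the batch when the gap between groups becomes too large. The sketch below uses selection-rate disparity (the ‘four-fifths’ heuristic) as an example metric; the data, group labels, and threshold are all illustrative assumptions.

```python
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive decisions (e.g. 'hire', 'approve') per group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

def disparate_impact_check(predictions, groups, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the highest group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(np.asarray(predictions), np.asarray(groups))
    highest = max(rates.values())
    flagged = {g: rate / highest for g, rate in rates.items() if rate / highest < threshold}
    return rates, flagged

# Example batch of logged decisions (hypothetical data).
preds = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
grps  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
rates, flagged = disparate_impact_check(preds, grps)
print("selection rates:", rates)
print("groups below threshold:", flagged or "none")
```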

Furthermore, it is important to use techniques that improve the transparency of AI systems. This can be achieved by explaining AI system decisions and by making the underlying algorithms more understandable and interpretable. Finally, stakeholders from diverse communities should be involved in developing and deploying AI systems. This helps ensure that the needs and perspectives of different groups are considered and that AI systems are more likely to be fair and non-discriminatory.
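Returning to the transparency point above, one simple way to make individual decisions more interpretable is to report which features contributed most to a given prediction. The sketch below (assuming scikit-learn, with synthetic data and hypothetical feature names) does this for a logistic regression by multiplying each feature value by its learned coefficient; more sophisticated explanation methods exist, but the idea is the same.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic, purely illustrative data with hypothetical feature names.
feature_names = ["years_experience", "test_score", "referrals", "gap_in_cv"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

model = LogisticRegression().fit(X, y)

# Explain a single decision: per-feature contribution = coefficient * feature value.
applicant = X[0]
contributions = model.coef_[0] * applicant
decision = model.predict(applicant.reshape(1, -1))[0]

print(f"decision: {decision}")
for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {value:+.2f}")
```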