Data Quality and Bias

The importance of using high-quality and unbiased data in AI systems

AI systems should be designed to use high-quality and unbiased data in order to prevent biased and discriminatory outcomes. High-quality, complete data is representative and reduces the chances of bias. An AI system is only as good as the data it is fed. Cleaning training datasets of conscious and unconscious assumptions about gender, race, or other characteristics therefore enables an organization to develop an AI system that makes unbiased, data-driven decisions. In addition, unbiased data ensures that AI applications operate fairly and objectively, which results in inclusive applications that leave no one behind. For example, AI systems should not favor one user group over another and should avoid making decisions that cannot be adequately justified by humans.

How biased data can lead to discriminatory outcomes

Bias can be attributed to a plethora of factors, including a lack of representative data or the repurposing of an AI system for an application context different from the one in which it was originally trained. It is worth noting that human intelligence is prone to various forms of bias, including placebo bias and choice-supportive bias, and AI systems are no different in this respect: when trained in non-representative or biased data contexts, they are likely to produce subjective decisions and choices. A key concern with AI bias is that it is predominantly unintended, which implies that many AI experts and data scientists develop biased systems without realizing the problem or the repercussions of their use. Biased systems fall into two broad categories: data-biased and societally biased systems.

Data-biased systems arise when AI algorithms become biased because they are trained on non-representative data, which leads to discriminatory and incorrect decision-making. Societally biased AI systems, on the other hand, embed existing societal biases in their decision-making, essentially because their development is premised on legacy systems that were already biased. Biased systems are thus created unintentionally in various ways:

  • Historical bias: large historical datasets comprising biased decisions can, when used to develop AI systems, result in historical bias. For example, training a recruitment algorithm for senior managers in tech firms on past hiring data can yield a gender-biased machine learning system, because the historical hiring data reflects a preference for male over female candidates.
  • Representation bias: training AI systems on data that disregards whole population segments leads to representation bias. For example, using data from social media and city apps to train smart city systems for providing services to citizens yields algorithms that do not consider the needs of low-income citizens: these groups are less likely to be active users of Internet apps and are therefore underrepresented in the datasets collected by AI systems (a minimal check for this kind of bias is sketched after this list).
  • Aggregation bias: training AI systems on data aggregated from various population groups and sources can lead to aggregation bias. For instance, AI algorithms for disease prognosis and diagnosis may be trained on datasets combined from Asian, European, and US citizen databases, a common practice because deep neural networks require large training sets. The developed AI system is then used for prognosis or diagnosis on any group, but its outcomes will be biased towards the majority group within the aggregated dataset.
  • Purpose/deployment bias: a system developed and trained for one purpose but used for another can exhibit deployment bias. An example is using an AI system trained to predict a prisoner’s future behavior to evaluate, a few years later, whether it is appropriate to reduce their sentence. This leads to purpose or deployment bias because the system’s design and development did not account for this later use.
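
To make the representation-bias item concrete, here is a minimal sketch that compares group shares in a training dataset against known population shares and flags large gaps. The column name and the reference shares are hypothetical, purely for illustration.

```python
# Minimal representation-bias check: compare group shares in the training
# data against known population shares and flag large gaps.
# The column "district_income_band" and the reference shares are hypothetical.
import pandas as pd

def representation_gaps(df: pd.DataFrame, group_col: str,
                        population_shares: dict, tolerance: float = 0.05):
    """Return groups whose share in the data deviates from the population
    share by more than `tolerance` (absolute difference)."""
    data_shares = df[group_col].value_counts(normalize=True)
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = data_shares.get(group, 0.0)
        if abs(data_share - pop_share) > tolerance:
            gaps[group] = {"population": pop_share, "data": round(data_share, 3)}
    return gaps

# Example: low-income residents make up 30% of the city but only 8% of the
# app-collected data, so they are flagged as underrepresented (and the
# high-income group as overrepresented).
df = pd.DataFrame({"district_income_band": ["high"] * 60 + ["middle"] * 32 + ["low"] * 8})
print(representation_gaps(df, "district_income_band",
                          {"high": 0.35, "middle": 0.35, "low": 0.30}))
```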

Examples of biased data leading to discriminatory outcomes

Predictive policing (PredPol) algorithm: This is an example of biased data resulting in discriminatory outcomes against minorities. The algorithm aims to predict where crimes will happen in the future based on crime data gathered by the police, such as the number of police calls in a location and arrest counts. US police departments, including some in Maryland and California, have used the algorithm in an effort to minimize human bias by leaving crime prediction to AI. However, US researchers discovered an inherent bias in PredPol: it repeatedly sent police officers to neighborhoods with many racial minorities, irrespective of how many crimes actually occurred there. This was caused by a feedback loop, in which the algorithm predicted more crime in areas with more police reports. Yet the high number of police reports in those areas may itself have resulted from the concentration of police officers there, possibly reflecting existing human bias.
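
The feedback loop described above can be illustrated with a toy simulation (entirely hypothetical numbers, not PredPol’s actual model): two neighborhoods have the same true crime rate, but the one that starts with more recorded reports keeps receiving more patrols, so the gap in recorded crime keeps widening.

```python
# Toy illustration of a predictive-policing feedback loop (hypothetical
# numbers). Both neighborhoods have identical true crime rates, but "A"
# starts with more recorded reports, so it keeps getting more patrols.
import random

random.seed(0)
true_crime_rate = {"A": 0.10, "B": 0.10}   # identical underlying rates
reports = {"A": 60, "B": 40}               # A starts with more reports
total_patrols = 100

for step in range(10):
    total = reports["A"] + reports["B"]
    patrol_alloc = {h: round(total_patrols * reports[h] / total) for h in reports}
    for hood, patrols in patrol_alloc.items():
        # Each patrol records a crime with probability equal to the true rate,
        # so more patrols mean more recorded crimes, regardless of that rate.
        reports[hood] += sum(random.random() < true_crime_rate[hood] for _ in range(patrols))

# The initial 60/40 split persists and the absolute gap in recorded crime
# widens, even though the true crime rates are identical.
print(reports)
```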

IDEMIA’s facial recognition algorithm: IDEMIA is an organization that develops facial recognition algorithms used by law enforcement in France, the USA, and Australia. Its facial recognition system is used to search approximately 30 million mugshots in the US to check whether a person is a criminal or a danger to society. The algorithm was evaluated by the National Institute of Standards and Technology (NIST), which found that it made substantially more identification errors for Black women than for white women: IDEMIA’s algorithms produced ten times more false matches for African-American women. Facial recognition algorithms are generally considered acceptable if their false match rate is about one in 10,000, but the false match rate found for African-American women was higher than this.
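
To put these figures in perspective, the sketch below computes false match rates per demographic group and compares them with the roughly one-in-10,000 rate mentioned above as generally acceptable. The counts are hypothetical; only the reference rate and the “ten times more” relationship come from the text.

```python
# Compare per-group false match rates (FMR) against a reference threshold.
# The counts below are hypothetical illustrations.
ACCEPTABLE_FMR = 1 / 10_000

# (false matches, impostor comparisons) per group
results = {
    "white_women": (12, 120_000),
    "black_women": (120, 120_000),   # ~10x more false matches
}

for group, (false_matches, comparisons) in results.items():
    fmr = false_matches / comparisons
    ratio = fmr / ACCEPTABLE_FMR
    print(f"{group}: FMR={fmr:.5f} ({ratio:.0f}x the 1-in-10,000 reference)")
```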

Recommendations for ensuring the use of high-quality and unbiased data in AI systems

Deploy Processes for Detecting Biases: Organizations should deploy systematic bias-detection processes during the system design and development phases.
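
As one concrete, hypothetical instance of such a process, the sketch below computes a common fairness metric, the demographic parity difference (the gap in positive-outcome rates between groups), on a model’s predictions. It is only one of many possible checks, and the column names and threshold are assumptions for illustration.

```python
# One possible automated bias check: demographic parity difference, i.e.
# the gap in positive-prediction rates between groups.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  prediction_col: str,
                                  group_col: str) -> float:
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Example: predictions from a hypothetical screening model
df = pd.DataFrame({
    "selected": [1, 0, 1, 1, 0, 0, 1, 0, 0, 0],
    "group":    ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"],
})
gap = demographic_parity_difference(df, "selected", "group")
if gap > 0.2:   # illustrative threshold; acceptable limits are context-specific
    print(f"Potential bias detected: selection-rate gap of {gap:.0%}")
```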

Mitigation and Removal of Biases: Once a bias has been detected, enterprises should specify the steps to be taken to mitigate or eliminate it. A clear and well-thought-out mitigation process should therefore be specified and executed. Examples include collecting and integrating more data, removing problematic data aggregations, or improving the frequency and quality of the measurements taken by an instrument.
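
Another common mitigation, not specific to this text and sketched here only under hypothetical assumptions, is to reweight training samples so that underrepresented groups are not drowned out during model fitting; many learning libraries (for example, scikit-learn estimators) accept per-sample weights.

```python
# Sketch of one mitigation step: inverse-frequency sample weights so that
# an underrepresented group contributes proportionally during training.
import pandas as pd

def inverse_frequency_weights(groups: pd.Series) -> pd.Series:
    """Weight each sample by 1 / (share of its group in the data)."""
    shares = groups.value_counts(normalize=True)
    return groups.map(lambda g: 1.0 / shares[g])

groups = pd.Series(["majority"] * 90 + ["minority"] * 10)
weights = inverse_frequency_weights(groups)
print(weights.groupby(groups).mean())  # minority samples weighted ~9x higher
# These weights could then be passed to a model, e.g.
# model.fit(X, y, sample_weight=weights) in scikit-learn.
```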

Regulatory Compliance: Companies should monitor regulations that aim to ensure AI systems operate in a reliable, human-centric, and trustworthy way. This is the case in Europe, where an AI Act has been proposed in the European Parliament.

AI Audits: It should be possible to conduct external audits of AI systems’ security, reliability, and trustworthiness. Such audits are especially important in high-risk environments where human lives or significant financial assets are at stake. In addition, external audits can reveal possible biases and recommend techniques for mitigating or addressing them.