The importance of fairness in AI systems
AI systems should be designed and used in ways that promote fairness and prevent bias and discrimination. Fairness in AI helps to ensure that models do not discriminate when making decisions, especially with respect to attributes such as country of origin, race, or gender, and thus that the system does not contribute to unfair outcomes.
Fairness scores give machine learning researchers a concrete way to measure how fairly a system behaves and to track improvement as they optimize it. Respecting fairness also matters from both a legal and an ethical perspective: the potential harm it addresses is discrimination against specific groups of people on the basis of sensitive characteristics.
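As an illustration, one simple and widely used fairness score is the demographic parity difference: the gap in positive-prediction rates between two groups. The following is a minimal Python sketch using hypothetical model outputs, not a reference to any particular library.

# A minimal sketch of one common fairness score: the demographic parity
# difference, i.e. the gap in positive-prediction rates between groups.
# The predictions and group labels below are hypothetical.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates; 0.0 means parity."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical model outputs for two demographic groups.
group_a = [1, 0, 1, 1, 0, 1]   # positive rate ~0.67
group_b = [0, 0, 1, 0, 0, 1]   # positive rate ~0.33

score = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {score:.2f}")  # ~0.33, far from parity

A score near zero indicates the model grants positive outcomes at similar rates across groups; a large gap is a signal to investigate before deployment.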
How biases can be introduced into AI systems
Biased data can contribute to unfairness without anyone intending it. Machine learning systems are trained on datasets that may be skewed for a number of reasons, including how the data was collected. For example, a recent study established that mortality-risk prediction models deployed in clinical care at one hospital or region did not generalize to other populations. The study also found that the models performed differently for people from different racial groups, and that this difference was traceable to the data used to train them.
This implies that if the data is biased or noisy, little can be done downstream to make an AI system trained on it fair. It is an instance of the ‘garbage in, garbage out’ principle familiar from machine learning. Worse, modern AI algorithms tend to preserve and even amplify the unfairness that exists in their training data, which is why per-group audits like the one sketched below matter.
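A per-group audit is a standard first step for surfacing this kind of problem. The sketch below uses entirely hypothetical labels and predictions; the point is simply that comparing error rates across groups makes the disparity visible.

# A sketch of the kind of per-group audit described above: the same model
# can have very different error rates across groups, for instance when one
# group is under-represented in the training data.
# All labels and predictions here are hypothetical.

def error_rate(y_true, y_pred):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return wrong / len(y_true)

# Hypothetical hold-out results, split by demographic group.
results = {
    "group_a": ([1, 0, 1, 1, 0], [1, 0, 1, 1, 0]),  # well represented
    "group_b": ([1, 0, 1, 0, 1], [0, 0, 1, 1, 1]),  # under-represented
}

for group, (y_true, y_pred) in results.items():
    print(f"{group}: error rate = {error_rate(y_true, y_pred):.2f}")

# A large gap between the two rates is the signal that the training
# data, not just the model, needs scrutiny.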
Examples of biased AI systems
Tay Chatbot: In 2016, Microsoft launched a chatbot called Tay on Twitter, intending it to learn from playful, casual conversations with other users. Microsoft initially emphasized that relevant public data would be cleaned, modeled, and filtered. Nevertheless, within 24 hours the chatbot was sharing transphobic and racist tweets. It had learned discriminatory behavior from its engagement with users, a large number of whom were feeding it inflammatory messages.
Facebook’s Ad Algorithm: Facebook was found to have violated US anti-discrimination law by allowing advertisers to deliberately target adverts according to religion, gender, and race, classes that are protected under the US legal system. Job advertisements for secretarial and nursing roles were suggested mainly to women, while ads for taxi-driver and janitorial jobs were shown disproportionately to men from minority backgrounds. Researchers concluded that the algorithm had learned that real estate ads achieved better engagement when shown to white people, so they were shown less often to people from minority groups. This example shows how bias can emerge in an AI system even when no one programs it in explicitly.
Recommendations for promoting fairness in AI systems
Focusing on the data is critical. For instance, examining data provenance and reproducibility can reveal where bias enters the pipeline. Ensuring that models are transparent and explainable also helps make the variables involved in decision-making clearer, and therefore easier to audit for fairness, as the sketch below illustrates.
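To make the transparency point concrete, here is a minimal sketch of taking a decision apart with an interpretable (linear) model. The feature names and weights are hypothetical; the point is that each variable’s contribution to a decision can be inspected directly.

# A sketch of transparency via an interpretable linear model: each input's
# contribution to the decision score can be read off directly.
# The feature names, weights, and applicant values are hypothetical.

weights = {
    "years_experience": 0.8,
    "credit_score":     0.5,
    "zip_code":         0.6,   # a possible proxy for race or income
}

applicant = {"years_experience": 3.0, "credit_score": 0.7, "zip_code": 1.0}

# Per-feature contributions to the decision score.
for feature, w in weights.items():
    print(f"{feature}: contributes {w * applicant[feature]:+.2f}")

# A large contribution from a proxy variable like zip_code is exactly
# the kind of signal a fairness review should flag.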
Fairness should also be treated as a cooperative act, so organizations should invest in developing their managers’ ethical judgment. The Harvard Business Review argues that managers need to be able to challenge algorithmic decisions and to develop an intuitive feel, and common sense, for what is right and wrong.
Furthermore, AI fairness should be regarded as a negotiation between humanity and utility. To strike this balance, leaders should be clear about the values the company wants to pursue and the moral norms they would like to see at work; that is, they should clarify why and how they want to do business.
Finally, an organization’s data scientists should understand and commit to the moral norms and values leadership has established. In many organizations, there is a gap between what data scientists build and the business outcomes and values organizational leaders want to achieve. Ideally, the two groups should collaborate to agree on the values that cannot be sacrificed when algorithms are deployed.