The importance of inclusivity in AI systems
AI should be developed and used in a way that is inclusive and accessible to all, including marginalized and disadvantaged communities. Inclusivity matters because it shapes the quality of the decisions AI systems make: systems built with diverse input are more likely to produce accurate and equitable outcomes. Adhering to inclusive practices in the development, deployment, and use of AI systems helps counter digital redlining, algorithmic oppression, and inequality, and makes AI systems more likely to be perceived as fair and trustworthy.
How AI systems can exclude marginalized and disadvantaged communities
In a world dominated by bias, achieving true neutrality is nearly impossible, and AI is no exception, even though it has no inherent concept of bias. Because we live in a world where systemic inequality and racism are prevalent, the AI-powered machines humans create become ingrained with the same biases that perpetuate discriminatory and racist ways of thinking. In How Artificial Intelligence Can Deepen Racial and Economic Inequities, Olga Akselrod argues that instead of helping to eliminate discriminatory practices, AI has aggravated them, impeding marginalized groups’ economic security. For example, in A Move for Algorithmic Reparation Calls for Racial Justice in AI, Khari Johnson shows how algorithms used to screen mortgage applicants and apartment renters disproportionately exclude disadvantaged communities because historical segregation patterns have poisoned the data on which many of these algorithms are trained. This bias is also evident in healthcare, where technological advances meant to benefit all patients may have aggravated disparities for people of color, suggesting that the racial and economic divide will only deepen. Every sector that deploys AI is affected by this bias, and each should do the work to understand how it impacts communities of color and other disadvantaged communities.
Examples of exclusionary AI systems
An example is the Google Photos algorithm that mislabeled Black people. Google Photos has a labeling feature that attaches a label to a photo corresponding to what is shown in it. This is done by a convolutional neural network (CNN) trained on a large set of labeled images (supervised learning), which then performs image recognition to tag new photos. However, a Black software developer and his friend found the algorithm to be racist when it labeled photos of them as gorillas. The company acknowledged the mistake and promised to mitigate it in the future. Yet two years later, all the company had done was remove gorillas and other primates from the CNN's vocabulary so that it could never label a photo that way. This was only a temporary workaround, since it did not solve the underlying issue.
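To make concrete why deleting labels is only a workaround, here is a minimal Python sketch of such a tagging pipeline. This is not Google's actual system: the pretrained ResNet-50 classifier and the BLOCKED_LABELS set are illustrative assumptions. The blocklist merely censors the model's output; whatever misclassifications it was meant to hide remain in the model underneath.

```python
# Minimal sketch of a photo-tagging pipeline with an output blocklist.
# NOT Google's system; the model choice and blocklist are illustrative.
import torch
from PIL import Image
from torchvision import models

# Hypothetical blocklist mirroring the reported "fix": the classifier can
# still confuse these classes internally; we simply never emit the labels.
BLOCKED_LABELS = {"gorilla", "chimpanzee", "orangutan", "baboon"}

weights = models.ResNet50_Weights.DEFAULT         # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                 # matching preprocessing
classes = weights.meta["categories"]              # ImageNet class names

def tag_photo(path: str) -> str | None:
    """Return the highest-confidence label that is not blocklisted."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    for idx in probs.argsort(descending=True):
        label = classes[int(idx)]
        if label.lower() not in BLOCKED_LABELS:
            return label  # suppression, not correction: the bias is untouched
    return None
```

A genuine fix would instead involve retraining on more representative data and evaluating error rates per demographic group, which a blocklist sidesteps entirely.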
Recommendations for ensuring the inclusivity of AI systems
Adopting transformative justice principles can ensure that impacted communities are included in conversations about how AI models are designed and developed. Experts advocate for inclusion that has a tangible, positive impact on the algorithms themselves. Makers and users of AI technology should also begin purposefully and actively asking marginalized individuals to share their experiences and expertise at every level.
According to Andrew Burt, a potential path to inclusivity is to first look at the various statistical and legal precedents for ensuring algorithmic inclusivity. This means examining laws in areas such as health care, employment, housing, and civil rights to understand how these sectors have tried to address discrimination. Accordingly, he argues that businesses and industries should carefully monitor and document all their efforts to minimize algorithmic exclusion, and generate clear rationales for using the models they eventually deploy.
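As one concrete illustration of borrowing from those legal precedents: US employment law's "four-fifths rule" flags possible disparate impact when one group's selection rate falls below 80% of the highest group's rate. The sketch below applies that test to a model's decisions; the data, group names, and threshold application are illustrative assumptions, not a compliance tool.

```python
# Sketch: auditing model decisions with the "four-fifths rule," a
# disparate-impact test borrowed from US employment law. The decision
# data below is made up for illustration only.
from collections import defaultdict

decisions = [  # (applicant_group, approved)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}  # selection rate per group
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    status = "possible disparate impact" if ratio < 0.8 else "within four-fifths rule"
    print(f"{group}: rate={rate:.2f} ratio={ratio:.2f} -> {status}")
```

Running and documenting audits like this over time is exactly the kind of monitoring and record-keeping Burt recommends.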
Ultimately, the best solution is inviting impacted communities into the discourse around developing and designing equitable artificial intelligence models.
A more comprehensive guide comes from Safaa Khan's (2022) article "How can AI support diversity, equity and inclusion?", which offers four ways to ensure that AI is inclusive:
- Diversity is needed across the whole AI lifecycle, from ideation to design, development, deployment, and post-launch monitoring. Crucially, ensuring the inclusivity of AI systems requires a full mindset shift in the development process.
- Transparency belongs in the design/ideation stage and in decisions about which projects receive capital and investment. For any new technology, openness about what is being designed, and more crucially for whom and with what effects, is essential.
- Awareness creation, capacity building & education: Equipping underrepresented communities with the skills and tools to understand and work in the AI space is important.
- Advocacy: It is important to support and follow the work of individuals and organizations in the space, such as Black in AI, which has been instrumental in removing barriers that Black people face globally in the AI field.