The importance of protecting vulnerable groups in AI systems
Special protections should be put in place for vulnerable groups, including children, persons with disabilities, and elderly individuals, to ensure that AI systems do not harm their rights and interests. Such protections address security concerns about the exposure of these groups' personal information. They also help vulnerable individuals track how their personal data is used and make better, more informed choices aligned with their own objectives.
How AI systems can harm the rights and interests of vulnerable groups
While definitions of vulnerability remain fragmented, vulnerability mainly refers to the state or quality of being exposed to the likelihood of being harmed or attacked, either emotionally or physically. According to Mannan et al. (2012), there are 12 types of vulnerable groups, including impoverished people, people at heightened risk of morbidity, women-headed households, children (especially those with special needs), the elderly, youth, people living far from health services, ethnic minorities, displaced populations (such as refugees and migrants), those suffering from chronic illness, and people with disabilities.
Relying on AI to collect the personal data of these groups can result in their exclusion from protection, because such groups may lack the capacity to understand how their data is being used, to anticipate how this might affect them, and to protect themselves against undesired consequences. For example, LGBTQ+ individuals may be adversely affected by systems that enable or perpetuate discrimination or profiling against them.
Vulnerability is determined by various factors, including physical/technical, social, political, regulatory, and economic ones. AI facial recognition technology, for instance, can misgender certain groups and discriminate against non-binary persons: many gender recognition algorithms are founded on a male-female gender binary and use physical traits alone to determine a person's gender. Such systems can therefore misgender trans people, while non-binary individuals are forced into a binary classification that undermines their gender identity. A minimal sketch of this structural limitation follows.
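The toy classifier below is a hypothetical sketch, not any vendor's real model; the weights and the input embedding are invented for illustration. It shows the structural point above: when a model's output space contains only two labels, the argmax must return "male" or "female" for every input, regardless of who is in front of the camera.

```python
from typing import List

LABELS = ["male", "female"]  # the model's entire output space

def classify_gender(face_embedding: List[float]) -> str:
    """Toy stand-in for a facial-analysis model: it can only score two classes."""
    # Invented weights; real systems use deep networks, but the structural
    # problem is identical: only two rows of weights exist, one per label.
    weights = [[0.9, -0.3, 0.1], [-0.2, 0.8, 0.4]]
    scores = [sum(w * x for w, x in zip(row, face_embedding)) for row in weights]
    return LABELS[scores.index(max(scores))]  # argmax forces a binary decision

# Every possible input is mapped onto the binary, including faces of trans
# and non-binary people whose identities the output space cannot represent.
print(classify_gender([0.2, 0.7, 0.5]))  # -> "female" for this invented input
```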
Examples of harmful AI systems targeting vulnerable groups
Allegheny Family Screening Tool (AFST): This is a predictive model deployed to forecast child abuse and neglect. According to the UNESCO COMEST Preliminary Study on the Ethics of Artificial Intelligence, the AFST aggravates existing structural discrimination against the poor and disproportionately affects vulnerable communities by oversampling the poor and using proxy variables to understand and predict child abuse in a way that disenfranchises poor working families. The algorithms underlying the AFST have come under increasing scrutiny for their opaqueness, given the longstanding gender, class, and racial biases of predictive AI tools. A hypothetical sketch of the proxy problem follows.
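To make the proxy criticism concrete, here is a hypothetical sketch; it is not the actual AFST model, and the feature names and weights are invented. It shows how features recording contact with public systems act as proxies for poverty: families who rely on public services accumulate risk points that families using private services never can.

```python
# Hypothetical risk scorer illustrating proxy bias; not the real AFST.
def risk_score(family: dict) -> float:
    # These features measure visibility to public systems, not maltreatment.
    # Records of benefit use exist mostly for poor families, so the features
    # double as a proxy for poverty.
    weights = {
        "public_benefit_records": 0.35,   # count of welfare-office interactions
        "prior_hotline_referrals": 0.40,  # referrals, not substantiated abuse
        "housing_instability_flags": 0.25,
    }
    return sum(weights[k] * family.get(k, 0) for k in weights)

# Two families with identical parenting but different visibility to public systems:
poor_family = {"public_benefit_records": 6, "prior_hotline_referrals": 2,
               "housing_instability_flags": 1}
affluent_family = {}  # private healthcare and childcare leave no public records

print(risk_score(poor_family))      # 3.15 -- driven entirely by proxy features
print(risk_score(affluent_family))  # 0.0
```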
Gaggle Student Surveillance Tool: Gaggle is a digital monitoring company whose tools are marketed as supporting student well-being and safety. The company's keyword lists put LGBTQ+ students at a higher risk of scrutiny by school officials, contributing to bias against them. A hypothetical sketch of this keyword-matching mechanism follows.
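The sketch below is hypothetical; Gaggle's actual keyword lists and matching logic are proprietary, and the terms here are invented for illustration. It shows how placing identity terms on a flag list subjects LGBTQ+ students to scrutiny for ordinary, non-risky messages.

```python
import re

# Invented example list; placing identity terms alongside genuine risk terms
# is what exposes LGBTQ+ students to disproportionate scrutiny.
FLAG_LIST = {"gay", "lesbian", "suicide", "weapon"}

def flag_message(text: str) -> set:
    """Return the flagged terms found in a student's message, if any."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return tokens & FLAG_LIST

# A message about identity, not risk, still triggers a report to officials:
print(flag_message("I came out as gay to my friends today"))    # {'gay'}
print(flag_message("Meet you at soccer practice after class"))  # set()
```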
Recommendations for ensuring the protection of vulnerable groups in AI systems
Three further actions can be implemented to protect the 12 identified vulnerable groups:
- Minimize the adverse effects of AI through continuous risk identification, prediction, and preparation, conducted in liaison with affected stakeholders and with adequate representation of vulnerable groups. It is important to do this during the early research, design, and development stages of AI. This action targets all actors in the AI ecosystem, including research funders, policy-makers, researchers, developers, deployers, and users.
- Build the capacity of vulnerable communities to enhance their resilience. This action is addressed to public policy-makers at the national and international levels.
- Address the root causes of vulnerability, including through stronger regulatory and policy stances on the discrimination, injustice, inequality, and harm fueled by such AI technologies. This action targets regulators at all levels.