Human Rights Impact Assessments

The importance of human rights impact assessments in the development and use of AI systems

AI developers and users should conduct human rights impact assessments (HRIAs) to identify and mitigate potential risks and negative impacts. To be effective, HRIAs of AI systems require meaningful engagement with the individuals who are most impacted. To address this need, the European Center for Not-for-profit Law has developed a framework for meaningful engagement built on three elements: shared purpose, trustworthy process, and visible impact.

The framework emerged out of the need for practical tools, a clear process, and standard-setting to ensure that engagement is genuinely meaningful. Its aim is for developers and those engaged to feel that they have collaboratively created concrete results. The assessment should draw on diverse lived experiences, voices, and disciplines from external stakeholders, and the framework emphasizes engaging those most vulnerable to AI harm.

Globally, tech firms and governments alike present AI and automated decision-making systems as convenient, cheap, and fast fixes for various social challenges, including detecting fraud, moderating illegal social media content, and even tracking down tax evaders. Nevertheless, scandals over the misuse and abuse of AI systems continue to pile up.

Despite being promoted as a silver-bullet solution to the complex problem of illegal content online, automated content moderation systems have proven limited, flawed, and prone to error. Nor are these dangers confined to digital spaces. In the Netherlands, for example, the tax authorities deployed an algorithm to detect benefits fraud; it falsely accused and penalized many people, a large share of them from low-income or minority backgrounds.

Assessments carried out using conventional methods will miss the mark for algorithmic and AI systems, as the failures of Facebook's HRIA in Myanmar illustrate. The company commissioned an HRIA only after UN investigators had established that genocide had occurred in Myanmar, and even then the assessment failed to adequately examine the most salient human rights impacts of the company's presence and products in the country.

Thus, assessments should be conducted throughout the entire AI development and deployment process. With more jurisdictions mandating HRIAs for AI systems, the following recommendations should guide the process.

Recommendations for conducting human rights impact assessments in the development and use of AI systems

Civil Society and Impacted Groups: Input from civil society and those directly impacted should be prioritized when conducting HRIAs. Affected groups and civil society need more meaningful, well-resourced participation both within the organizations empowered to conduct audits and assessments and in standardization bodies, along with meaningful public disclosure of audit and assessment results.

Oversight Mechanisms: Oversight mechanisms are needed for cases where self-assessments fail to safeguard people. These mechanisms should trigger independent assessments and audits and provide clear avenues for individuals impacted by AI systems to flag harms.

Collaboratively Develop a Human Rights-Based AI Risk Assessment: Liaising with all relevant stakeholders, authorities should create a model risk assessment methodology that explicitly addresses the human rights concerns that AI systems raise.