Redress and Remedy

The importance of redress and remedy in AI systems

Mechanisms should be in place to ensure that individuals and communities have access to effective redress and remedy for harms caused by AI systems. Giving people a private right of redress against companies supports effective enforcement of the rules by empowering those affected to challenge and monitor negative or harmful conduct. Redress opportunities matter all the more because AI technologies can make wrong decisions even when they function as designed, and they typically operate at a far larger scale than human decision-making systems.

In addition, redress mechanisms promote accountability by allowing consumers and consumer organizations to submit challenges and complaints about AI systems. Such mechanisms can also be critical for enabling regulators to collate complaints and identify trends in the negative impacts of AI systems.

Such information can raise awareness of common issues among AI system users and offer learning opportunities for continued AI development. With this in mind, AI regulatory frameworks should establish redress mechanisms that address harms emanating from AI systems in a consumer-centric and effective way. Failing to do so risks aggravating exclusion and inequality for certain individuals and demographic groups.

How a lack of redress and remedy can harm individuals and communities

Without remedy and redress, AI systems can impose negative and harmful effects on vulnerable or marginalized individuals, communities, and groups with no recourse. Where no private right of action exists, people cannot seek recourse against the corporations that developed or deployed the AI systems that harmed them. This undermines the right to compensation, which guarantees redress to every person who has suffered material or non-material damage as a result of an infringement.

A lack of remedy also allows AI systems to make decisions that are partly or entirely unexplainable, and unexplainable AI systems prevent effective human monitoring and oversight. Even when such decisions are accurate, they may be neither understandable nor traceable, and people who cannot understand how a system reached a decision may be unable to identify the party responsible for the harm. Finally, the absence of redress and remedy mechanisms can exacerbate existing systemic inequities.
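To make the traceability point concrete, the sketch below shows one way an automated decision could be logged as a self-contained record, pairing the outcome with the model version, inputs, and per-feature contributions so that an affected person, or an ombudsman acting on their behalf, can later reconstruct and contest the decision. It is a minimal, hypothetical illustration: the names (DecisionRecord, score_applicant, the weight values) and the simple linear scoring model are assumptions for this example, not drawn from any system discussed in this section.

```python
# Hypothetical sketch: a traceable decision record for a simple linear
# scoring model. All names and values here are illustrative assumptions.
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative model: a fixed linear score over named features.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}
MODEL_VERSION = "scoring-model-v1.3"  # assumed versioning scheme
THRESHOLD = 0.0  # decisions at or above this score are approved

@dataclass
class DecisionRecord:
    """Everything needed to reconstruct and contest one automated decision."""
    decision_id: str
    timestamp: str
    model_version: str
    inputs: dict
    contributions: dict  # per-feature weight * value, for explanation
    score: float
    outcome: str

def score_applicant(inputs: dict) -> DecisionRecord:
    # Per-feature contributions make the decision traceable: their sum
    # is exactly the score, so nothing about the outcome is hidden.
    contributions = {f: WEIGHTS[f] * inputs[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return DecisionRecord(
        decision_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=MODEL_VERSION,
        inputs=inputs,
        contributions=contributions,
        score=score,
        outcome="approved" if score >= THRESHOLD else "denied",
    )

record = score_applicant({"income": 1.2, "debt_ratio": 1.5, "years_employed": 0.5})
print(json.dumps(asdict(record), indent=2))  # log entry an auditor can replay
```

A record like this does not make the underlying model fair, but it provides the audit trail that complaint handling, oversight, and redress depend on.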

Examples of harmful outcomes resulting from lack of redress and remedy in AI systems

Los Angeles Police Department’s LASER: The LASER program used an algorithm to identify areas where gun violence was most likely to occur, but it incorporated no redress or remedy measures that could have surfaced and corrected bias. LASER was shut down in 2019 after the LAPD’s inspector general released an internal audit documenting substantial problems with the program, including inconsistencies in the criteria used to select individuals and to keep them in the system.

Chicago Police Department’s Heat List, or Strategic Subjects List: The program produced a list of the individuals it considered most likely to commit gun violence or become victims of it. Developed by researchers at the Illinois Institute of Technology, the algorithm rested on the belief that epidemiological models used to trace the spread of disease could also be used to understand gun violence. While Chicago police often touted the list as key to their strategy against violent crime, a RAND Corporation analysis of an early version of the program found it ineffective, and the lack of redress measures meant that individuals on the list had no way to challenge or correct their inclusion. Civil rights groups also argued that the program targeted communities of color, and a report released by Chicago’s Office of the Inspector General found that it relied on arrest records to assess risk even in cases where there were no further arrests. The program was shelved in January 2020.

Recommendations for ensuring access to effective redress and remedy for harms caused by AI systems

Regulators:

  • Ensure that persons harmed by AI systems can file regulatory complaints or bring legal action in the courts.
  • Empower civil society organizations to represent consumers in seeking redress against companies that use harmful AI systems.
  • Establish an AI ombudsman service to investigate and resolve complaints impartially and independently.
  • Empower communities or groups of people who have experienced widespread or systemic harm from the development and/or deployment of AI to collectively seek redress for such harms.

Companies:

  • Engage with external stakeholders, including consumer advocacy groups and academic researchers, to identify and tackle issues of unfairness, bias, and discrimination that may be present in AI models.
  • Create internal ombudsman services for receiving and reviewing stakeholder complaints.

Civil Society:

  • Ensure that findings from research, community engagement, and audits are made publicly available.
  • Engage with marginalized or underserved communities and individuals to identify harmful effects and support them in seeking redress.