Proportionality

The importance of proportionality in the development and use of AI systems

The proportionality principle in international law stipulates that an action's legality is assessed by weighing its objective against the methods and means employed and against the action's consequences. Proportionality in AI systems likewise involves a set of conditions that must be satisfied in order to justify the use of AI.

Proportionality helps to safeguard data privacy by ensuring that information processing is adequate, necessary, suitable, relevant, and does not go beyond a specified and declared purpose. Applied to AI systems, this means that the collection and use of personal data must remain limited to what the declared purpose genuinely requires. In other words, the development and use of AI should be proportionate to its intended purpose and should not result in unnecessary harm or intrusion. A simple purpose-limitation check is sketched below.
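To make the idea concrete, here is a minimal sketch of a purpose-limitation check in Python. The purposes, field names, and record are hypothetical; a real system would tie them to its declared privacy documentation.

```python
# Hypothetical mapping from declared purposes to the data fields they justify.
DECLARED_PURPOSES = {
    "loan_eligibility": {"income", "employment_status", "credit_history"},
    "service_improvement": {"usage_frequency", "feature_clicks"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields that the declared purpose justifies."""
    allowed = DECLARED_PURPOSES.get(purpose)
    if allowed is None:
        raise ValueError(f"Processing refused: undeclared purpose {purpose!r}")
    excess = set(record) - allowed
    if excess:
        print(f"Dropping fields beyond purpose {purpose!r}: {sorted(excess)}")
    return {key: value for key, value in record.items() if key in allowed}

applicant = {
    "income": 52000,
    "employment_status": "full-time",
    "credit_history": "good",
    "ethnicity": "recorded but irrelevant to the declared purpose",
}
print(minimize(applicant, "loan_eligibility"))
```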

Description of how disproportionate AI systems can harm individuals and communities

Structural, institutional, and systemic racism is prominent across numerous spheres of public life in both international and national settings, including access to justice, public services, and the enjoyment of social and political rights. AI and facial recognition technologies play a detrimental role in sustaining such forms of racism.

AI algorithms in particular have been criticized for their discriminatory effects, racist implications, and racist digital profiling. In practice, these technologies treat certain communities and individuals as having lower status within society.

Example of harmful outcomes resulting from disproportionate AI systems

Optum: This example highlights how a disproportionate AI system in public health can have detrimental implications for people of color. Healthcare providers in the US often try to limit their exposure to high healthcare costs by embracing 'complex care management' programs that channel extra resources to "high-cost beneficiaries." Owned by UnitedHealth Group, Optum is an algorithmic service that seeks to streamline the process of identifying those beneficiaries. Evidence suggests that Optum systematically referred people of color to these support programs less often than comparably ill white patients. The reason for this failure is that the algorithm was trained to predict spending rather than hospitalizations; because people of color are less inclined to seek medical care when ill, their lower recorded spending makes them appear healthier than they actually are, disenfranchising them in the referral process. The sketch below illustrates this label-choice mechanism.
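The following simulation is an illustration of the label-choice problem, not a reconstruction of Optum's actual model. It assumes a latent health need that is distributed identically across two hypothetical groups, but gives one group reduced access to care, so that recorded spending understates its need.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)         # latent health need (unobserved)
group_b = rng.random(n) < 0.5                          # hypothetical group membership
access = np.where(group_b, 0.6, 1.0)                   # group B faces barriers to care
spending = need * access * rng.lognormal(0.0, 0.2, n)  # observed cost, used as the label

# Refer the top 10% by spending (the proxy label) to care management.
referred = spending >= np.quantile(spending, 0.90)

# Among the genuinely sickest patients, group B is referred far less often.
high_need = need >= np.quantile(need, 0.90)
for label, mask in [("group A", ~group_b), ("group B", group_b)]:
    rate = referred[high_need & mask].mean()
    print(f"{label}: referral rate among highest-need patients = {rate:.2%}")
```

Because the label (spending) is a biased proxy for the quantity of interest (health need), the disparity appears even though group membership is never given to the model.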

Recommendations for ensuring proportionality in the development and use of AI systems

Appropriate Data Governance: AI system providers should establish appropriate data management and governance practices and should train and evaluate their systems on representative, relevant, and complete datasets. Compliance with the proportionality rules should also be demonstrated through technical documentation and conformity assessments covering the AI system's general description, its major elements (such as data analysis and validation), and information about its operation, such as its accuracy metrics (see the sketch below).
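As one illustration of such documentation, the sketch below computes accuracy disaggregated by a demographic group, together with each group's share of the evaluation data. The column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical evaluation records: true labels, model predictions, group attribute.
records = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "B", "B", "A", "B", "B", "A"],
})

# Representativeness: each group's share of the evaluation dataset.
print(records["group"].value_counts(normalize=True))

# Accuracy per group, suitable for inclusion in technical documentation.
per_group_accuracy = (
    records.assign(correct=records["y_true"] == records["y_pred"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)
```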

Post-Market Monitoring System: After the sale or deployment of a high-risk system, providers should operate a proportionate post-market monitoring system whose core mandate is to collect data on the operation of the system, verify that it continues to comply with the regulation, and apply corrections where required. Systems that continue learning after deployment require a new conformity assessment whenever that learning results in a substantial modification. A minimal drift check of this kind is sketched below.
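One simple form such monitoring could take is a statistical check that the distribution of the system's output scores in operation still matches the distribution recorded at conformity assessment. The sketch below uses a two-sample Kolmogorov-Smirnov test; the data and the significance threshold are illustrative, not regulatory values.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
reference_scores = rng.beta(2.0, 5.0, size=5_000)  # scores logged at assessment time
live_scores = rng.beta(2.6, 5.0, size=5_000)       # scores collected post-market

stat, p_value = ks_2samp(reference_scores, live_scores)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.4f}")

if p_value < 0.01:
    # A large behavioural shift may amount to a substantial modification
    # and therefore trigger a new conformity assessment.
    print("Drift detected: flag the system for review and re-assessment.")
```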

Oversight Mechanisms: High-risk AI systems should be designed so that human users can oversee them in order to deter or reduce potential harms and risks. Design features should help human users avoid over-relying on system outputs (automation bias) and should allow designated persons to monitor the system, override its outputs, and activate a stop button when necessary, as in the sketch below.
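A minimal sketch of such an oversight wrapper follows, assuming the underlying model returns an output together with a confidence score. All names and the review threshold are illustrative, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    output: str
    confidence: float
    decided_by: str  # "model" or "human"

class OverseenSystem:
    def __init__(self, model, review_threshold: float = 0.8):
        self.model = model
        self.review_threshold = review_threshold
        self.stopped = False  # state of the "stop button"

    def stop(self) -> None:
        """Emergency stop activated by a designated overseer."""
        self.stopped = True

    def decide(self, case, human_review) -> Decision:
        if self.stopped:
            raise RuntimeError("System halted by designated overseer.")
        output, confidence = self.model(case)
        # Route uncertain outputs to a human reviewer instead of acting
        # automatically, discouraging over-reliance on the model.
        if confidence < self.review_threshold:
            return Decision(human_review(case, output), confidence, "human")
        return Decision(output, confidence, "model")
```

Routing low-confidence cases to a human reviewer and exposing an explicit stop() method are design choices that keep the final decision, and the power to halt the system, with people rather than with the model.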