The importance of accountability in AI systems
Those responsible for the development and deployment of AI should be accountable for its impact on society, including any harm caused to individuals or communities, and mechanisms should be in place to hold them to account. Accountability is important for ensuring that developers, deployers, and designers comply with legislation and standards so that AI systems function appropriately throughout their lifecycles.
Accountability is a cornerstone of AI governance. Current AI policies, particularly in Europe, acknowledge that before decisions are supported by or delegated to AI, it must be ensured that these systems affect people's lives fairly, that they respect values that should not be compromised, and that they can act accordingly. Accountability is also important for ensuring compliance with legal and ethical standards. In addition, accountability is critical for oversight: examining information, obtaining evidence, and evaluating the conduct of AI systems. Finally, it supports enforcement, since it determines the consequences an AI developer should bear, including prohibitions and authorizations, in line with the evidence collected during oversight and reporting.
Description of how lack of accountability can lead to harmful outcomes
A lack of accountability can result in brand damage, lost customers, and regulatory fines. The risk is highest where a system error has serious repercussions for individuals: credit determination, medical diagnosis, and criminal sentencing are all areas where AI errors can have severe consequences. A lack of accountability also leaves AI vulnerable to bias, since AI systems learn from data generated by humans.
It is also worth noting that traditional software testing processes are not sufficient on their own, because AI models learn patterns and behaviors from vast amounts of data, making it difficult to anticipate and test for every possible scenario.
Without accountability, it is difficult for companies to make their systems predictable and accurate. Finally, a lack of accountability reinforces biases, creates 'echo chambers,' and infringes on privacy, all while escaping transparency and public scrutiny.
Examples of harmful outcomes resulting from lack of accountability in AI systems
Northpointe’s COMPAS is a web-based tool that uses an algorithm to assess the likelihood that a defendant will reoffend. Northpointe’s computer scientists and researchers believed that their system satisfied an accepted statistical definition of fairness. However, in 2016, ProPublica analyzed the use of the system in Broward County, Florida, and found that although the system predicted recidivism with comparable overall accuracy for Black and white defendants, it made systematically different kinds of errors for the two populations. In particular, it was more likely to mistakenly flag African-American defendants as high-risk, and more likely to mistakenly label white defendants as low-risk.
In practice, this meant that Black defendants who would never reoffend were treated more harshly by the courts, while white defendants who would go on to reoffend were treated more leniently. ProPublica therefore concluded that this clearly demonstrated algorithmic bias. Northpointe never satisfactorily accounted for or responded to the issue, and its system continues to be used in the courts without change.
The conflict stems from two main issues: the lack of a standard definition of algorithmic bias and the lack of a mechanism for holding stakeholders accountable. Northpointe was not accountable to any specific set of values, and the courts that continue to use its system are not accountable either.
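The disagreement becomes concrete once the two notions of bias are written down. The following sketch (illustrative Python with invented records, not Northpointe's code or ProPublica's dataset) contrasts a calibration-style precision metric, of the kind Northpointe pointed to, with the group error rates ProPublica examined; a model can look fair under the first while diverging sharply on the second.

```python
# Illustrative sketch only: invented records, not COMPAS data or Northpointe code.
from collections import namedtuple

Record = namedtuple("Record", ["group", "predicted_high_risk", "reoffended"])

def fairness_metrics_by_group(records):
    """Contrast two notions of 'unbiased': calibration-style precision vs. error rates."""
    metrics = {}
    for group in {r.group for r in records}:
        rs = [r for r in records if r.group == group]
        tp = sum(1 for r in rs if r.predicted_high_risk and r.reoffended)
        fp = sum(1 for r in rs if r.predicted_high_risk and not r.reoffended)
        fn = sum(1 for r in rs if not r.predicted_high_risk and r.reoffended)
        tn = sum(1 for r in rs if not r.predicted_high_risk and not r.reoffended)
        metrics[group] = {
            # Calibration-style view: of those flagged high-risk, how many reoffended?
            # Roughly equal values across groups is the kind of fairness Northpointe cited.
            "precision": tp / (tp + fp) if (tp + fp) else None,
            # ProPublica's view: how often were people who did NOT reoffend wrongly flagged?
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else None,
            # ...and how often were people who DID reoffend wrongly labeled low-risk?
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else None,
        }
    return metrics
```

Both kinds of metric are easy to compute; the accountability problem is that no one agreed in advance which of them the system would be held to, or by whom.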
Recommendations for ensuring accountability in AI systems
AI governance strategies can help address these accountability issues. For example, the recently proposed EU AI Act suggests categorizing AI systems into risk levels based on their field of application, with obligations that scale with the risk.
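As a rough illustration of that risk-based approach, the sketch below maps example application fields to the Act's broad tiers. The tier names follow the Act's general structure, but the specific field-to-tier mapping and defaults here are illustrative assumptions, not a reading of the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring of citizens
    HIGH = "strict obligations"           # e.g. credit, sentencing, medical uses
    LIMITED = "transparency obligations"  # e.g. chatbots must disclose they are AI
    MINIMAL = "no new obligations"        # e.g. spam filters

# Illustrative mapping from application field to tier -- not the legal text.
FIELD_TO_TIER = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_determination": RiskTier.HIGH,
    "criminal_sentencing": RiskTier.HIGH,
    "medical_diagnosis": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def tier_for(field: str) -> RiskTier:
    # Unknown fields default to HIGH to force a manual review -- a choice made
    # for this sketch, not a requirement of the Act.
    return FIELD_TO_TIER.get(field, RiskTier.HIGH)
```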
Many stakeholders should participate in designing, evaluating, and deploying AI systems in order to surface new risks, to ensure that benefits actually reach the major stakeholders, and to shape the accountability framework. Participatory methods and techniques should encourage meaningful stakeholder participation in selecting the AI system's overall objective, developing the model, evaluating the AI's behavior, and assessing whether the AI is achieving its intended outcomes.
Industry best practices should also be developed, new incentives for compliance established, and regulations clearly articulated and enforced.
While many algorithms are proprietary, skilled journalists can use reverse-engineering techniques to probe what is inside the black box, and can collaborate with whistleblowers and academics, especially when investigating personalization algorithms.
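A minimal sketch of such black-box probing is shown below, assuming a hypothetical score_fn that stands in for whatever opaque scoring or personalization system is being queried: hold a profile fixed, vary one attribute at a time, and compare the average scores that come back.

```python
import random

def probe_attribute(score_fn, base_profile, attribute, values, trials=100):
    """Vary one attribute of an otherwise fixed profile and record the average
    score the opaque system returns for each value of that attribute."""
    averages = {}
    for value in values:
        scores = []
        for _ in range(trials):
            profile = dict(base_profile)
            profile[attribute] = value
            # Jitter a non-sensitive field so repeated queries are not identical.
            profile["session_id"] = random.randrange(10**6)
            scores.append(score_fn(profile))
        averages[value] = sum(scores) / len(scores)
    return averages
```

Comparing the averages across, say, ZIP codes can suggest whether the score shifts with a proxy for a protected attribute, even when the model itself is never disclosed.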
Companies should also ensure that their accountability policies are clear and accessible to development and design teams from day one, so that no one is confused about who is accountable for what. At the same time, companies should be aware of the limits of their responsibility for the software they develop: they may not control how a tool or dataset is used by a client, another external party, or end users. Detailed records of design processes and decision-making should also be kept, with a record-keeping strategy settled early in development. Such records encourage iteration and best practices, support accountability, and make it possible to track the development process, identify potential problems, and investigate incidents when something goes wrong.
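A lightweight way to keep such records is an append-only decision log. The sketch below shows one possible shape for it, with hypothetical field names rather than any particular standard; each design decision is written as a timestamped JSON line that can later be audited or replayed.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in a design/decision log; field names are illustrative."""
    timestamp: str
    author: str
    decision: str
    rationale: str
    alternatives_considered: list
    data_sources: list

def log_decision(path, author, decision, rationale, alternatives=(), data_sources=()):
    """Append a decision as one JSON line, so the log is easy to diff and audit."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        author=author,
        decision=decision,
        rationale=rationale,
        alternatives_considered=list(alternatives),
        data_sources=list(data_sources),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

An append-only log of this kind is deliberately simple: it imposes little overhead during development, yet gives investigators a chronological trail of who decided what, why, and on the basis of which data.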