The importance of ensuring the ethical use of AI in law enforcement
The development and use of AI in law enforcement should be guided by ethical principles, including respect for the presumption of innocence, the right to due process, and the avoidance of discriminatory or biased outcomes. While the use of AI in law enforcement can help improve public safety, there are concerns about bias, accuracy, and privacy.
The data used to train AI systems can be inaccurate or incomplete. This can happen when law enforcement officers make mistakes while entering data into the system, or when they omit important data. Criminal data is often unreliable, which can make the problem worse.
In addition, the data can be biased, with some populations and areas over-represented. Over-representation can also stem from periods when police engaged in discriminatory practices against certain communities, which leads to those areas being classified as high risk.
Such implicit biases in historical data sets have damaging consequences for the targeted communities. AI-driven predictive policing can therefore amplify biased analyses and in some cases has been linked to racial profiling.
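The amplification works like a feedback loop: where patrols go determines where incidents are recorded, and where incidents are recorded determines where patrols go next. The toy simulation below is a minimal sketch of that loop; the district names, incident rate, and patrol counts are invented assumptions for illustration and are not drawn from any real policing data.

```python
import random

random.seed(0)

# Toy model of a predictive-policing feedback loop (illustrative only; all
# numbers are assumptions, not empirical crime or policing data).
# Two districts have the SAME true incident rate, but district A starts with
# more recorded incidents because it was historically patrolled more heavily.
TRUE_RATE = 0.05               # identical underlying incident rate per visit
recorded = {"A": 30, "B": 10}  # biased historical records
patrols_per_day = 20

for day in range(200):
    total = sum(recorded.values())
    for district in ("A", "B"):
        # Patrols are allocated in proportion to recorded incidents,
        # mimicking a naive "predict where crime was recorded" policy.
        share = recorded[district] / total
        visits = round(patrols_per_day * share)
        # Incidents are only recorded where officers are present to observe them.
        recorded[district] += sum(
            1 for _ in range(visits) if random.random() < TRUE_RATE
        )

print(recorded)  # district A ends up with far more recorded incidents,
                 # even though both districts have identical true rates
```

In this sketch the initially over-represented district attracts ever more patrols and therefore ever more recorded incidents, even though both districts generate incidents at exactly the same underlying rate.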
A specific example is the cases of Vernon Prater (a 41-year-old white man previously found guilty of armed robbery and imprisoned for five years) and Brisha Borden (an 18-year-old Black woman without any previous convictions). Both were charged with stealing items worth around $80 in 2014. However, COMPAS, the risk-assessment algorithm used by courts and corrections agencies, rated Borden as being at high risk of future offending, whereas Mr. Prater was assigned a low risk, demonstrating that the algorithm was biased against Ms. Borden.
Thus, efforts should be made to ensure that law enforcement’s use of AI is ethical and respects human rights. This is critical for eliminating selection bias that often manifests in the over-policing of certain individuals or neighborhoods.
Recommendations for ensuring the ethical use of AI in law enforcement
Inclusive Approach: An inclusive approach should be embraced to ensure that the benefits of AI technology are accessible and available to all, taking into account the specific needs of different age groups, language groups and cultural systems, as well as disadvantaged, vulnerable, and marginalized persons. Governments should strive to address digital divides and to ensure inclusive access to, and participation in, the development of AI systems.
Human Oversight and Determination: States should ensure that legal and ethical responsibility for any stage of an AI system’s life cycle, as well as in cases of remedy related to AI systems, can always be attributed to physical persons or to existing legal entities, and that individuals retain the right to challenge decisions made by AI.
Develop Ethical Guidelines: These guidelines should be developed in consultation with a wide range of stakeholders, including law enforcement agencies, civil society organizations, and academics. The guidelines should be based on human rights principles and should address issues such as transparency, accountability, fairness, and non-discrimination.
Provide Training to Law Enforcement Officers: This training should cover the potential benefits and risks of AI, as well as the ethical guidelines that have been developed. Officers should also be trained on how to identify and address bias in AI systems; a simple illustration of such a bias check is sketched after these recommendations.
Make AI Systems Transparent and Explainable: This will allow law enforcement officers and the public to understand how AI systems make decisions and to identify and challenge any biases that may be present.
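To make the last two recommendations concrete, the sketch below shows one simple way a trained officer or analyst could audit a risk-assessment tool’s output: comparing false positive rates across demographic groups, the kind of disparity highlighted in analyses of COMPAS. The records, group labels, and numbers are invented for illustration, and this is only one of several possible fairness checks, not a complete audit.

```python
from collections import defaultdict

# Minimal fairness-audit sketch over hypothetical risk-assessment output.
# Each record is (group, predicted_high_risk, actually_reoffended); the
# records below are invented for illustration, not real case data.
records = [
    ("group_1", True,  False), ("group_1", True,  True),
    ("group_1", False, False), ("group_1", True,  False),
    ("group_2", False, False), ("group_2", True,  True),
    ("group_2", False, False), ("group_2", False, True),
]

# False positive rate per group: how often people who did NOT reoffend were
# nevertheless flagged as high risk. Large gaps between groups are the kind
# of disparity that should be identified and challenged.
flagged_no_reoffense = defaultdict(int)
no_reoffense = defaultdict(int)
for group, predicted_high, reoffended in records:
    if not reoffended:
        no_reoffense[group] += 1
        if predicted_high:
            flagged_no_reoffense[group] += 1

for group in sorted(no_reoffense):
    fpr = flagged_no_reoffense[group] / no_reoffense[group]
    print(f"{group}: false positive rate = {fpr:.2f}")
```

A check like this is only possible when agencies can see the system’s predictions and the outcomes they relate to, which is precisely why transparency and explainability are prerequisites for accountability.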