The importance of prohibiting mass surveillance in AI systems
AI systems should not be used for mass surveillance, meaning the collection and analysis of personal data without prior consent or a legitimate legal basis. Prohibiting mass surveillance in AI systems mitigates the serious risks that accompany the procurement and development of surveillance technologies. For example, it helps deter the inaccurate and unfair profiling of communities of color, and it protects fundamental human rights, especially those threatened by biometric mass surveillance.
Description of how mass surveillance can harm individuals and communities
Data collection and surveillance have disproportionately harmed communities of color, both historically and today. Surveillance patterns typically reflect existing societal biases and reinforce harmful cycles. Facial recognition and other mass surveillance technologies also enable more targeted discrimination, particularly as law enforcement agencies continue to make predictive, misinformed decisions about arrest and detainment that disproportionately affect marginalized populations. Mass surveillance also entails geolocation tracking, which is harmful because it enables the physical pursuit of individuals and allows entities to infer sensitive details such as religion, health status, sexual orientation, or personal relationships. Finally, biometric mass surveillance chills freedom of expression and open public participation: the feeling of being watched leads people to change their behavior and self-censor.
Examples of harmful outcomes resulting from mass surveillance in AI systems
China Initiative: Launched by the US Department of Justice (DOJ) in 2018, the China Initiative was a mass government surveillance effort aimed at preventing intellectual property theft and espionage. While the initiative sought to address national security threats posed by the government of China, it fostered widespread distrust and racial profiling of Chinese American academics, including US citizens with no ties to the Chinese Communist Party, and it resulted in a number of wrongful arrests of academics. The DOJ formally ended the initiative in February 2022 after it proved ineffective and created a climate of fear among Asian Americans, many of whom felt they were being unfairly targeted and treated as potential spies.
Recommendations for prohibiting mass surveillance in AI systems
Improvement of current laws: Current privacy laws contain numerous gaps that leave data collection and sharing largely unchallenged. Privacy legislation is therefore needed to prohibit the use of AI for mass surveillance, to require private sector firms to embed fairness in their technical development processes, and to reduce both their data collection and their third-party sharing.
Equity Assessments: The role of equity assessments should be extended to appraise the appropriateness of facial recognition and its privacy implications for marginalized communities. Algorithmic impact assessments can also help companies and government agencies evaluate potential community harms, discrimination, or risk of bias, as illustrated in the sketch below.
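One quantitative check an algorithmic impact assessment might include is a comparison of how often a system takes adverse action against members of different groups. The Python sketch below is a minimal, hypothetical illustration of measuring a demographic parity gap; the function name, group labels, sample data, and the 0.1 review threshold are assumptions for this example, not part of any specific assessment framework.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Compute the largest gap in adverse-action rates across groups.

    `decisions` is a list of (group_label, flagged) pairs, where `flagged`
    is True when the system took an adverse action (e.g., flagged a person
    for review). A large gap suggests the system's impact falls unevenly
    across communities and warrants deeper investigation.
    """
    totals = defaultdict(int)
    flags = defaultdict(int)
    for group, flagged in decisions:
        totals[group] += 1
        if flagged:
            flags[group] += 1
    rates = {group: flags[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: (demographic group, was the person flagged?)
audit_sample = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

gap, rates = demographic_parity_gap(audit_sample)
print(f"Per-group flag rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # e.g., investigate if gap > 0.1
```

Demographic parity is only one of several fairness measures; a full impact assessment would also examine disparities in error rates, the context in which the system is deployed, and input from the affected communities.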