The importance of freedom of expression in AI systems
AI systems should be developed and used in ways that respect individuals’ right to freedom of expression, including the right to access and share information and opinions without interference. Because everyone holds this right, AI systems should not restrict people’s opinions or their ability to seek and disseminate ideas and information through any medium of expression. Embedding this right in AI systems is also important for ensuring that governments and other entities cannot curtail it through any form of censorship, and it provides a basis for holding perpetrators of violations civilly, criminally, and disciplinarily liable.
Description of how lack of freedom of expression can harm individuals and communities
Where freedom of expression is not protected, AI can be used to censor and block access to apps and websites, suppress dissent, and control the flow of information. This harms individuals and minority communities in particular, as such systems can be deployed to limit legitimate free speech and encroach on their ability to express themselves. AI systems that have not been trained on the languages or slang of minority groups can also censor legitimate speech by misclassifying it.
Examples of harmful outcomes resulting from lack of freedom of expression
China’s AI-generated censorship: China’s censorship of religious, social, and political topics will most likely carry over into AI-generated content. For example, if an ML tool draws its training data mainly from behind the Great Firewall of China, its outputs could reflect the biases and suppressions of that propaganda-infused, heavily censored information landscape. A specific example is ERNIE-ViLG, the AI-powered text-to-image model released by Baidu to compete with DALL-E and Midjourney. What makes it unique is that it understands prompts in Chinese, generates anime art, and captures Chinese culture better than other tools. Nevertheless, users found that ERNIE-ViLG blocks certain politically sensitive terms, including “Tiananmen Square,” “climb walls,” “revolution,” and the names of prominent political leaders. The MIT Technology Review found that while words such as “government” and “democracy” are permitted, prompts combining them with other words, such as “British government” or “democracy Middle East,” are blocked. The review also noted that Tiananmen Square, the Beijing landmark, cannot be generated at all, likely because of its association with the Tiananmen Massacre, a heavily censored topic within China.
Google’s Perspective: This example shows that bias in AI systems is not confined to overt racial prejudice. Perspective, an ML tool developed by Google, attempted to rate tweets on a scale from “toxic” to “healthy.” Trained on offensive social media comments and then asked to evaluate Twitter posts, it showed a tendency to flag tweets written in African-American Vernacular English as toxic, compromising the neutrality of AI and its ability to promote freedom of expression.
Recommendations for ensuring freedom of expression in AI systems
Rights-oriented approaches: When governments depend on AI systems to execute core public functions, they should ensure that the systems’ design and operation conform to international human rights standards, verified through audits, public consultations, and due diligence. As part of their mandate to promote freedom of expression, governments should also ensure that the private sector adopts rights-oriented approaches to AI, including measures to maintain a competitive, pluralistic, and diverse information environment.
Companies should likewise ensure that AI and other technologies deployed on their platforms or sold to third parties comply with human rights standards. Beyond ethics-based frameworks, the design and deployment of AI to generate, gather, and analyze information about end users should treat international standards on freedom of opinion and expression as their authoritative reference point.
Accountability and Transparency: State, technical, and corporate actors should allow meaningful multi-stakeholder participation, including by civil society, in setting regulations, technology policy, and industry guidelines for AI systems, to ensure the transparency and legitimacy of outcomes. In particular, non-binding frameworks should be accompanied by robust oversight and accountability measures.