Multi-stakeholder Collaboration

The importance of multi-stakeholder collaboration in the development and use of AI systems

AI development should be informed by and involve public participation, particularly from the groups most affected by its use. It should also involve collaboration among different stakeholders, including governments, civil society organizations, academia, and the private sector, as well as input from experts in fields such as ethics, law, and social science.

Stakeholders are the people and organizations with a vested interest in, or influence over, AI product development. There is therefore a need to identify and prioritize the main stakeholders along with their roles, concerns, and expectations. The value proposition, vision, and roadmap of the AI product should be communicated to stakeholders, and their requirements and feedback should be solicited and embedded into the acceptance criteria and product backlog.
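One way a team might keep this traceable is a lightweight stakeholder register that links each stakeholder's concerns to acceptance criteria and backlog items. The sketch below is a minimal, hypothetical illustration of that idea; the stakeholder names, concerns, and backlog entries are assumptions, not prescribed content.

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    """A party with a vested interest in, or influence over, the AI product."""
    name: str
    role: str            # e.g. regulator, end user, domain expert
    concerns: list[str]  # what they care about
    priority: int        # 1 = highest; used to order engagement

@dataclass
class BacklogItem:
    """A requirement traced back to the stakeholder concerns it addresses."""
    title: str
    acceptance_criteria: list[str]
    raised_by: list[str] = field(default_factory=list)

# Hypothetical register: names and concerns are illustrative assumptions.
stakeholders = [
    Stakeholder("Data protection regulator", "regulator",
                ["lawful processing", "data minimisation"], priority=1),
    Stakeholder("Loan applicants", "end user",
                ["fair decisions", "clear explanations"], priority=1),
    Stakeholder("Credit risk team", "domain expert",
                ["prediction accuracy", "model monitoring"], priority=2),
]

backlog = [
    BacklogItem(
        title="Provide a plain-language explanation for each decision",
        acceptance_criteria=["Every rejection lists its top contributing factors"],
        raised_by=["Loan applicants", "Data protection regulator"],
    ),
]

# Engage the highest-priority stakeholders first.
for s in sorted(stakeholders, key=lambda s: s.priority):
    print(f"{s.name} ({s.role}): concerns = {', '.join(s.concerns)}")
```

Keeping the register next to the backlog makes it easy to show, for any requirement, which stakeholder concern it traces back to.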

AI products should be demonstrated and showcased to stakeholders throughout development, and their feedback should be collected. A multi-stakeholder approach is important because of the power of AI to amplify unfair biases. There is a particular risk that developers and data scientists commit “causation mistakes,” in which a correlation is inappropriately interpreted as evidence of cause and effect.

Such a lack of understanding results in designs premised on incorrect, oversimplified causal assumptions that disregard important societal factors, and it can produce harmful and unintended outcomes. To mitigate this risk, societal context should be embedded into the development of AI systems.
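As a minimal, self-contained illustration of such a causation mistake, the sketch below generates synthetic data in which a hidden confounder drives both an observed feature and the outcome: the feature correlates strongly with the outcome even though intervening on it would change nothing. The variable names and numbers are purely illustrative assumptions.

```python
import random
import statistics

random.seed(0)

# Hidden confounder: neighbourhood income drives both ice-cream spending
# (the observed feature) and loan repayment (the outcome). Spending has no
# causal effect on repayment, yet the two are strongly correlated.
income = [random.gauss(50_000, 15_000) for _ in range(5_000)]
spending = [0.001 * x + random.gauss(0, 5) for x in income]       # driven by income
repayment = [0.00002 * x + random.gauss(0, 0.3) for x in income]  # also driven by income

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Strong correlation, but a design built on it would encode a false causal story:
# encouraging people to spend more would not improve repayment rates.
print(f"corr(spending, repayment) = {pearson(spending, repayment):.2f}")
```

A model trained on the spending feature might predict repayment reasonably well, but any product decision built on the implied causal story would fail, which is exactly the kind of blind spot that input from stakeholders with societal context can surface.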

Even so, no algorithm or individual can see the complexity of society in its entirety or fully comprehend it. To address these inevitable blind spots and ensure responsible innovation, technologists should collaborate with stakeholders to develop a shared hypothesis of how the relevant parts of society function. Such collaborative efforts lead to more inclusive and socially responsible AI.

Recommendations for ensuring multi-stakeholder collaboration in the development and use of AI systems

Users: Users should be involved in the AI development process from ideation through evaluation. To achieve this, user research should be conducted to understand user needs, expectations, and pain points; user scenarios, journeys, and personas should be co-created to represent different user contexts and segments; and AI products should be prototyped and tested with real users. Most importantly, observed user behavior and feedback should form the basis for ongoing improvement of AI products.

Acceptance Criteria: These are the conditions an AI product must meet before users and stakeholders accept it, and they help align the goals and expectations of stakeholders, users, and the product team. For AI products, it is critical that acceptance criteria go beyond functional and non-functional requirements and address the solution’s legal, ethical, and social implications. Examples of such criteria include the explainability, reliability, and accuracy of the AI solution; respect for the security and privacy of users and their data; conformance to relevant laws and regulations; and avoidance of harm, discrimination, and bias toward users and society.
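One way to make the measurable criteria concrete is to encode them as automated release checks. The sketch below is a hypothetical example of that idea; the thresholds, metric names, and group labels are assumptions chosen for illustration, and real criteria would be negotiated with stakeholders.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """Results of an offline evaluation run (values here are illustrative)."""
    accuracy: float
    accuracy_by_group: dict[str, float]    # e.g. per demographic group
    pct_decisions_with_explanation: float

def failed_acceptance_criteria(ev: Evaluation) -> list[str]:
    """Return the list of failed criteria; an empty list means the candidate passes."""
    failures = []
    # Accuracy criterion (threshold is an assumed example, not a standard).
    if ev.accuracy < 0.90:
        failures.append(f"overall accuracy {ev.accuracy:.2f} below 0.90")
    # Bias criterion: no group may lag the best-served group by more than 5 points.
    best = max(ev.accuracy_by_group.values())
    for group, acc in ev.accuracy_by_group.items():
        if best - acc > 0.05:
            failures.append(f"accuracy gap for group '{group}' is {best - acc:.2f}")
    # Explainability criterion: every decision must come with an explanation.
    if ev.pct_decisions_with_explanation < 1.0:
        failures.append("not all decisions include an explanation")
    return failures

# Hypothetical evaluation results for a release candidate.
result = Evaluation(
    accuracy=0.93,
    accuracy_by_group={"group_a": 0.94, "group_b": 0.86},
    pct_decisions_with_explanation=1.0,
)
for failure in failed_acceptance_criteria(result):
    print("FAILED:", failure)
```

Criteria that cannot be automated in this way, such as legal conformance, still belong in the acceptance criteria but are verified through review with the relevant stakeholders rather than through code.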

Map the Stakeholders Responsible for AI Governance: Developing standards is often treated as a purely technical task, but it is important to involve a wide range of actors, including researchers, governments, and businesses, in creating, disseminating, and enforcing standards. Doing so builds trust and helps ensure that the standards are widely accepted and adopted.