Sectoral Governance of AI as an Alternative to a Comprehensive Law in Egypt

Brief Context

In recent years, Egypt has rapidly expanded its presence in the field of artificial intelligence. This has included ministerial partnerships with local and international companies, as well as hosting major regional events such as the AI Everything Middle East & Africa Summit, scheduled for February 2026.

More recently, official and media discussions have highlighted the imminent issuance of a comprehensive AI law in Egypt, which is being promoted as a proactive step to keep pace with global transformations. While this momentum demonstrates a genuine ambition for modernization and digital transformation, it also carries the risk of hasty legislation shaped more by political and economic considerations than by robust standards for protecting rights and freedoms.

Masaar has presented a research paper on the regulation of AI, proposing a set of principles and standards for ethical governance. These include security, transparency, inclusiveness, fairness, human oversight, the prohibition of mass surveillance, impact assessment, and the ban on autonomous weapons.

Masaar believes that these principles constitute a rights-based framework that can serve as a practical alternative to comprehensive legislation. Such a framework could be translated into sectoral policies and flexible regulatory measures that keep pace with the rapid and complex evolution of technology, while also ensuring the protection of citizens’ fundamental rights.

Introduction

Enacting a comprehensive law on artificial intelligence in Egypt at this stage entails a significant risk. It may result in a rigid text incapable of keeping pace with the rapid advancement of technology, or in an overly broad legislative instrument that undermines human rights safeguards, particularly in relation to privacy and equality. It could also introduce stifling bureaucratic constraints that hinder innovation and creativity.

Masaar believes that the optimal alternative lies in adopting a set of gradual, sector-specific regulatory tools that respond to the particularities of each field, especially sensitive sectors such as healthcare. These tools should be grounded in human rights principles that explicitly prohibit the most harmful practices, such as violation of the right to privacy or discrimination based on gender, religion, race, or sex.

Furthermore, they should focus on activating existing pillars, such as the unenforced Personal Data Protection Law, and on reviewing other relevant legislation, including the Anti-Cyber and Information Technology Crimes Law, the Press and Media Regulation Law, and the Telecommunications Regulation Law, before considering the introduction of any additional legislative layers.

In this context, the paper offers an alternative vision for addressing the challenges of artificial intelligence. Its goal is to protect citizens’ fundamental rights from the risks of exploitation and discrimination, while encouraging responsible local innovation without excessive legislative burdens. It also seeks to bridge institutional gaps through flexible regulatory tools that can be continuously updated, ensuring a practical response to rapid technological change while respecting human rights.

Why We Reject an Overarching AI Law Now

Masaar maintains that enacting a comprehensive AI law in Egypt at present would not constitute an effective tool for protection and regulation. Instead, it would produce a series of negative consequences that harm rights and innovation more than they serve them.

First: applying a single unified law to highly diverse sectors such as health, education, finance, and media creates a fundamental regulatory imbalance. Each sector has its own particular nature, as the objectives and strategies for AI governance in the health sector differ significantly from those required in media or finance, for example. Combining these sectors under one law would either impose disproportionate burdens that stifle innovation or introduce broad exceptions that strip the text of substance and weaken the principle of equality before the law, resulting in weaker protection of human rights.

Second: the accelerating nature of AI development renders any law based on fixed definitions or standards susceptible to obsolescence within a short period. Technologies such as large language models or multimodal generative AI are evolving at an unprecedented pace, which means that rigid provisions quickly become constraints that lag behind reality. This legal inadequacy opens the door to the exploitation of loopholes and the justification of harmful practices, and it transforms the law into a tool for restricting research or curtailing rights under the pretext of compliance with outdated definitions.

Third: the current political context in Egypt further heightens the risks. The official push toward international partnerships and high-profile summits creates pressure to enact hasty legislation that serves as a modernization showcase abroad more than it reflects a genuine response to safeguarding rights domestically. Such laws are usually passed without meaningful public consultation or human rights impact assessments, rendering them formalistic texts—closer to political declarations than enforceable rights guarantees at best—or, at worst, instruments that contribute to the further erosion of rights and public freedoms.

Fourth: Egypt’s previous legislative experience reveals the risks of bureaucracy and over-regulation. Technology-related laws have either remained unenforced—such as the Personal Data Protection Law issued in 2020, which remains suspended pending the issuance of its executive regulations—or have been used to restrict rights, as seen with the Anti-Cyber and Information Technology Crimes Law and the Press and Media Regulation Law. These legislative precedents demonstrate that overarching laws often end up weakening privacy, freedom of expression, and equality rather than protecting them, while leaving individuals without any effective enforcement mechanisms.

Fifth: the attempt to codify general regulatory rules for AI within a single law creates conflicts with existing legislation, such as the Telecommunications Regulation Law, the Consumer Protection Law, the Cybercrime Law, the Press and Media Regulation Law, the Intellectual Property Protection Law, and the Personal Data Protection Law. It also creates legislative gaps in areas intersecting with AI, paving the way for direct violations of fundamental rights and hindering individuals’ ability to challenge such violations or obtain judicial redress when infringements occur.

Sixth: in a political environment characterized by weak transparency and accountability, the existence of a unified AI law could lead to grave risks, such as legitimizing mass surveillance, large-scale facial recognition, or the targeting of activists and journalists under the guise of legal compliance. Such practices violate individual privacy, threaten freedom of assembly and movement, and erode public trust in state institutions.

Seventh: A comprehensive AI law may impose heavy compliance burdens on start-ups, universities, and research centers. From prior licensing requirements to extensive documentation obligations and costly periodic reviews, such measures would overburden smaller entities and limit their capacity for innovation, while favoring larger corporations that can absorb the costs. This could weaken local innovation and accelerate the brain drain, thereby deepening technological dependency instead of strengthening national capacities.

Accordingly, a comprehensive AI law in Egypt, under the current circumstances, would neither provide genuine rights protection nor establish regulatory certainty. Instead, it would replicate past failures and exacerbate risks to privacy, equality, freedom of expression, and innovation. Masaar maintains that the more realistic and effective alternative lies in developing flexible, sectoral governance that is risk-based and translates human rights principles into practical obligations that are enforceable and subject to accountability.

Diversity of Risks and Sectors

A “one-size-fits-all” approach to enacting a unified AI law is not suitable. Artificial intelligence is not a standalone sector but a cross-cutting technology that runs through value chains in very different domains. In healthcare, AI is directly linked to patient safety and the protection of highly sensitive data. In the media, it reshapes the flow of information and influences the balance of public discourse. In education, it impacts assessment methods and concerns students’ privacy. In the workplace, it redraws power relations through what is known as “algorithmic management.”

This disparity extends beyond differences in application domains and encompasses four interrelated regulatory layers. The first concerns the degree of risk and the potential harm a system may cause. The second relates to the nature of the data used to train models, including whether such data are sensitive or lawfully obtained. The third involves the characteristics of the affected parties and the avenues of redress available to them—whether patients, workers, technologists, students, journalists, digital artists, or the broader public receiving content. The fourth concerns enforcement environments and the competence of regulatory bodies, which significantly influence the effectiveness of oversight and accountability.

Attempting to fuse all these cases into a single legal framework, in practice, leads to two contradictory yet harmful outcomes:

The first is the imposition of uniform, rigid requirements that stifle experimentation and innovation while restricting access to knowledge and freedom of expression. The second is the granting of broad exemptions from the general rules, whether to specific sectors, government entities, or large corporations.

These exemptions create disparities in protection levels, undermine the principle of equality before the law, and reduce transparency to a mere formality devoid of meaningful oversight. This underscores the need for a sector-specific approach that establishes tiered obligations based on risk and links duties to auditable operational guidelines, not general slogans.

The following examples illustrate that regulating AI through a single overarching law is impossible without either harming rights or stifling innovation. They demonstrate that the practical solution lies in specialized regulatory frameworks that take into account the specific characteristics of each sector.

  • Intellectual Property: AI in this area reveals the danger of converting publicly funded or collectively produced research and data into private property by training models on them. This approach undermines the principle of open science and perpetuates the monopoly of knowledge rather than making it available as a public good.
  • Labor Rights: Algorithms are reshaping power dynamics in the workplace, controlling wages and performance evaluations without transparency or avenues for appeal. This calls for mandatory disclosure of performance metrics, a guaranteed right to compulsory human review, and strict limits on excessive surveillance, along with a requirement for collective bargaining with unions before introducing systems that alter working conditions.
  • Education: AI tools risk transforming schools into disciplinary surveillance spaces, exploiting student data for commercial purposes, or excluding students from poor and under-resourced communities. This creates an urgent need for educational policies that ensure equitable access to open-source tools, impose strict limits on data collection and analysis, and recognize students’ rights to challenge automated decisions.
  • Media: Algorithms that distribute content govern access to information and shape public discourse. This creates a need to mandate platform transparency regarding ranking logic, require clear labeling of AI-generated content, provide swift mechanisms for response and correction, and prohibit high-risk behavioral manipulation, particularly in political contexts.
  • Environment: Artificial intelligence consumes vast amounts of energy and generates significant electronic waste. General measures are insufficient; instead, it is necessary to mandate transparent reporting on environmental impact, link government procurement to green standards, and require mandatory lifecycle management plans for hardware.
  • Healthcare: The highest-risk sector, where any algorithmic error may threaten patients’ lives. Consequently, distinct priorities emerge here compared to other sectors, including: mandatory human rights and clinical impact assessments before and after deployment; effective human oversight of medical decisions; strict governance of health data; a ban on practices that violate dignity, such as the use of facial recognition technologies; and the establishment of national mechanisms for reporting medical incidents.

The Alternative: Flexible, Multi-Layered Governance

Criticizing the “one-size-fits-all” approach of a unified AI law does not mean accepting a legislative vacuum. What is needed is a governance framework that treats rights as the foundation and policy as the guiding control. Instead of a comprehensive law acting as a broad but empty umbrella, we propose a multi-layered governance model. In this model, rights are binding conditions for approving or operating any AI system, not just formal clauses written into legislation.

This governance structure rests on three interrelated pillars:

  • A common baseline that sets the non-negotiable human rights floor. This includes effective data protection, auditable transparency, the right to object and appeal, and explicit prohibitions on the most dignity-eroding practices—such as mass surveillance, large-scale facial recognition, and social profiling.
  • Sector-specific units that translate obligations into precise rules aligned with the particularities of each domain. For example: safeguarding patients in healthcare, ensuring editorial transparency in the media, protecting academic freedom in education, and guaranteeing equitable access for all socio-economic groups.
  • Horizontal coordination to prevent each sector from becoming an isolated island, while ensuring continuous updating of the rules so they remain fit for technological developments and emerging human rights risks.

With this structure, we reject the false binary of either rigid, overarching legislation or regulatory void. Instead, we build a rights-based policy framework capable of both protecting people and adapting to change.

Translating Principles into Enforceable Rules

The human rights principle must be translated into a binding legal rule: no system should be approved before its necessity is examined, less rights-intrusive alternatives are considered, and the costs of error are assessed, particularly for vulnerable groups. In this way, privacy, non-discrimination, and freedom of expression become mandatory gateways for approval, not merely abstract principles that can be easily bypassed by administrative decision.

1. Participation as a Core Obligation

The political strength of sectoral governance lies in mandatory participation: civil society, unions, workers, and users are not “consultative partners” but decision-making stakeholders. Their majority presence in rule-drafting committees is a fundamental condition for ensuring balanced regulation, and any rule issued in their absence should be deemed defective. This deliberate bias is a political necessity to guarantee that governance reflects the interests of those directly affected by technological developments, rather than the interests of bureaucracies or large corporations.

2. Equitable Access and Risk Distribution

A rights-based approach to governance rests on practical political commitments aimed at narrowing the gap between the center and the peripheries. This is achieved through the adoption of open-source, low-cost tools in critical sectors such as education and healthcare, and through the mandatory inclusion of women, persons with disabilities, and workers in all consultation processes.

It also entails drafting policies in clear, accessible language that ordinary citizens can understand. Success in this context is measured not by the number of regulations enacted but by their actual capacity to reduce disparities and protect the most vulnerable groups.

3. Risk-Based Tiered Obligations

A blanket licensing regime cannot be accepted, as it stifles innovation and treats vastly different sectors as though they were identical. Instead, sectoral governance is grounded in the principle of tiered obligations that correspond to the scale of impact and risk.

For example, low-impact systems would be limited to requirements of registration and disclosure, while medium-impact systems would be subject to higher levels of transparency and periodic risk assessments. High-impact systems, by contrast, would face stringent controls, including independent review, clearly defined red lines, emergency shutdown mechanisms, and guaranteed compensation for affected individuals.

4. Dismantling Monopolies and Preventing Dominance to Ensure Fair Competition

Developing governance frameworks for AI is essential to mitigate the risk of capture by the state or large corporations. In this context, sectoral governance becomes a tool aimed at preventing such dominance. Achieving this requires alleviating compliance burdens for universities and small enterprises, and leveraging public procurement to hold large companies accountable for transparency and fairness. Through this mechanism, decision-making authority is redistributed from the top down to the grassroots, and from monopolistic companies to local actors.

5. Continuous Development

Sectoral governance views the legislative framework as a structure capable of continuous learning and adaptation. It includes a public incident registry, periodic updates to standards, and performance indicators to measure the state’s capacity for protection—from the speed of reporting violations to the compliance costs borne by smaller actors. These mechanisms ensure that oversight and accountability remain active and meaningful practices, rather than mere formal procedures.

Accordingly, sectoral governance represents a political, rights-based, social, and economic choice that redefines the foundations of regulation. It establishes rights, such as privacy, equality, and freedom, as mandatory prerequisites for the operation of any system, and provides genuine protection against the risks of formalistic, restrictive legislation or unchecked market activity.

Recommendations: Towards a Flexible Sectoral Governance

This paper primarily aims to emphasize the rejection of a comprehensive AI law in Egypt at the present time. Instead, it advocates for adopting flexible sectoral governance grounded in human rights, as well as political, economic, and social principles.

This governance framework shifts the focus from broad, overarching legislation to mandatory participatory procedures, placing civil society, labor unions, workers, and end-users at the forefront in drafting, amending, and monitoring the enforcement of rules. Sector-specific approaches only achieve their purpose if they are participatory and transparent, capable of protecting rights and preventing a slide into symbolic legislation or overregulation.

The current Egyptian context remains constrained by weak transparency and restrictions on civil society. In this environment, the proposed mechanisms function, for now, as normative principles that can be gradually applied through initiatives led by civil society and labor unions. These may take the form of shadow reports or parallel consultations that build cumulative pressure for their formal adoption.

The first priority, therefore, is to establish mechanisms that guarantee the representation of civil society, labor unions, and groups directly affected by AI. This requires selection rules designed to resist institutional capture. Such rules include independent nomination committees, transparent criteria to prevent conflicts of interest, and mandatory public disclosure, as well as fixed-term memberships that cannot be immediately renewed. They also include compulsory representation of marginalized groups such as women, persons with disabilities, platform workers, and residents of peripheral regions.

Second: participation must be transformed into an auditable commitment through three key tools:

  1. A public register that lists every AI system classified as “high-risk,” with clearly defined risk levels and corresponding obligations. High-impact sectors such as health, education, labor, and media are subject to stringent requirements. In contrast, low-risk uses, such as applications that assist with everyday tasks, are treated with greater flexibility. Guidance can be drawn from the “Public Registry” model stipulated by the European Union in its AI Act (an illustrative sketch of such a registry entry follows this list).
  2. A public comment window with standard timeframes that allows for substantive review, followed by a response matrix detailing how each comment was addressed, whether adopted or rejected, with reasons provided. This mechanism is used by the European Commission and could be adapted locally through unions or independent platforms.
  3. Public hearings to which representatives of unions, civil society, and researchers are invited. The minutes of these hearings must be fully documented and published, as implemented in certain legislative experiences in Latin America, such as in Brazil.
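To make the register concrete, the following is a minimal illustrative sketch, written in Python for demonstration only, of what a single registry entry might record. The field names, risk tiers, and example values are assumptions made for this paper’s purposes, not a prescribed or official schema.

```python
# Hypothetical sketch of one entry in a public AI-system registry.
# Field names, risk tiers, and example values are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # registration and disclosure only
    MEDIUM = "medium"  # added transparency and periodic risk assessments
    HIGH = "high"      # independent review, red lines, shutdown and redress


@dataclass
class RegistryEntry:
    system_name: str
    operator: str
    sector: str                      # e.g. health, education, labor, media
    risk_tier: RiskTier
    purpose: str
    obligations: list[str] = field(default_factory=list)
    impact_assessment_published: bool = False
    last_review_date: str | None = None


# Example entry for a hypothetical high-impact system in the health sector.
example = RegistryEntry(
    system_name="triage-assist",
    operator="Example Hospital Network",
    sector="health",
    risk_tier=RiskTier.HIGH,
    purpose="Prioritisation support for emergency admissions",
    obligations=[
        "human rights and clinical impact assessment",
        "independent review and emergency shutdown plan",
        "incident reporting to a national registry",
        "compensation mechanism for affected patients",
    ],
    impact_assessment_published=True,
)
```

Publishing entries of this kind in a machine-readable form would allow unions, researchers, and civil society to query which systems operate in which sectors and to verify that the obligations attached to each risk tier are actually met.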

To ensure practical implementation, an official digital platform must be established. This platform should host the public registry, enable the uploading of drafts and comments, and be obligated to generate a detailed response matrix. Recognizing the limited digital access for some groups, alternative channels should also be provided, such as in-person hearings organized by trade unions or civil society organizations, to broaden participation and include underrepresented communities.

Third: regulation must be linked to the principle of “no impact without representation,” whereby no high-impact system may be deployed without meeting three conditions and ensuring that affected groups are genuinely involved in oversight and accountability. The conditions are:

  • A human rights impact assessment that includes direct testimonies from those affected by the system, ensuring their experiences are fully considered.
  • A multi-criteria impact assessment (rights-based/political/economic/social) that reviews the algorithm’s functioning, outputs, and decisions within their real-world context.
  • An auditable transparency dossier detailing data sources and quality, performance limits, and failure scenarios, enabling researchers, trade unions, and civil society to hold the operator accountable.

Fourth: government procurement must be used as a lever; no contracts should be awarded to any supplier for public utilities unless they commit to providing Model Cards—detailed documents summarizing the system’s purpose, the data it was trained on, its performance, limitations, and associated potential risks.

Additionally, suppliers must submit periodic performance reports, grant civil society and academia the right to independent audits, establish clear plans to address biases, and implement binding complaint mechanisms with defined response timelines. Priority should be given to open-source alternatives and suppliers committed to environmental transparency and data provenance.
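As an illustration of what a procurement-mandated Model Card could contain, the following is a minimal sketch in Python. The structure, field names, and example values are assumptions drawn from the elements listed above (purpose, training data, performance, limitations, and risks), not a standardized or officially required format.

```python
# Hypothetical sketch of the information a supplier-provided Model Card might carry.
# All names and values are illustrative assumptions, not real data.
from dataclasses import dataclass


@dataclass
class ModelCard:
    system_name: str
    supplier: str
    intended_purpose: str
    training_data_sources: list[str]
    performance_summary: dict[str, float]  # headline metrics and their known limits
    known_limitations: list[str]
    potential_risks: list[str]
    bias_mitigation_plan: str
    complaint_channel: str                  # binding complaint mechanism with a response timeline


# Example card for a hypothetical system procured for a public utility.
card = ModelCard(
    system_name="benefits-eligibility-screener",
    supplier="Example Vendor Ltd",
    intended_purpose="Pre-screening of social-benefit applications for human review",
    training_data_sources=["anonymised historical case files (hypothetical)"],
    performance_summary={"accuracy": 0.91, "false_rejection_rate": 0.04},
    known_limitations=["lower accuracy for applicants with incomplete records"],
    potential_risks=["indirect discrimination against informal workers"],
    bias_mitigation_plan="quarterly disparity audit published alongside the public registry entry",
    complaint_channel="dedicated grievance channel with a defined response timeline",
)
```

Requiring such a document as a condition of contract award gives independent auditors and civil society a fixed reference point against which a system’s behaviour in deployment can be checked.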

Fifth: empowering civil society requires providing independent and sustainable funding sources dedicated to supporting field research, strategic litigation, and covering the costs of experts and consultants. In addition, technical language in texts and policies should be simplified to ensure clarity and accessibility for the general public, thereby enabling broader participation.

In the absence of official state funding, initial initiatives can be launched in partnership with international organizations or through independent funding sources to eventually transform these initiatives into permanent institutional obligations. Additionally, a unified and independent channel should be established to receive complaints and grievances, empowered to issue temporarily binding recommendations pending final resolution. This provides civil society with a practical and effective tool for advocacy and accountability.

Sixth: mandatory registers of algorithmic incidents should be established, with risk classifications and clear timelines for reporting, as well as periodic public reviews to assess the quality of responses. Civil society and trade unions should be granted the right to litigate on behalf of affected individuals, with whistleblower protections in place. Even in the absence of formal recognition, these practices can begin as shadow reports, gradually evolving into a continuous tool for advocacy.

Seventh: participation should be scaled according to risk; the greater the impact on rights, the more stringent the participation requirements, including the involvement of stakeholders from the design phase. Participation here serves as both a political and practical safeguard against repeating past legislative experiences—such as the Cybercrime Law or the Press and Media Law—which were drafted in isolation from civil society and resulted in the constriction of freedoms and the restriction of the public sphere.

The sectoral governance approach represents a realistic alternative to a comprehensive law. It creates a flexible, bottom-up framework that curbs arbitrariness and grants impacted groups genuine authority over the lifecycle of systems, transforming human rights principles into enforceable and accountable obligations.

Conclusion

Artificial intelligence does not confront us as a neutral technological domain, but as an intertwined social, legal, political, and economic issue that reshapes our relationship with work, knowledge, and fundamental rights. These challenges cannot be contained through a hastily drafted comprehensive law, which risks reproducing the shortcomings of previous legislation in technology-related fields.

This paper has demonstrated that a “one-size-fits-all” approach is impractical and carries direct risks, such as stifling local innovation, weakening constitutional safeguards, and paving the way for the legitimization of surveillance and discrimination within a political environment with limited participation.

The alternative we propose is the purposeful construction of flexible sectoral governance, which addresses each domain according to its specific risks and characteristics, placing fundamental rights at the core of regulation as a precondition for any rule. This governance framework becomes meaningful only if it is genuinely participatory, granting civil society, trade unions, and affected groups a central role, and linking the legitimacy of any policy or regulation to the transparency and credibility of the participation involved.

It is also dynamic governance, capable of adaptation through continuously updated guidelines, public registries, response matrices, and human-rights impact assessments, transforming rights-based principles into practical, enforceable, and accountable obligations.

What Egypt needs is not a new, rigid legal text, but rather the activation of existing pillars, such as the Data Protection Law, to bridge legislative gaps in the digital environment and create space for tiered regulatory practices that prevent arbitrariness and foster trust. Only this approach can strike a balance between protecting rights and promoting innovation, ensuring that artificial intelligence remains a tool in the service of humanity.