Introduction
Recently, attention has increasingly turned to the possibility that the widespread use of digital technology might cause workers to lose their jobs to automation and artificial intelligence (AI). However, in-depth studies have shown that technology’s impact on job quality, namely wages and working conditions, is an equally significant concern, if not a greater one than the risk of job loss itself.
Although technology has been present in workplaces for quite some time, its significant and rapid advancement has introduced new applications that have not always worked in favor of workers. Employers have increasingly turned to technology to manage various aspects of their relationships with their workers and employees. Today, technology plays a prominent role in recruitment processes, monitoring and surveillance of workers and employees in the workplace, and performance evaluation processes.
The intensive use of technology by employers creates numerous challenges for workers. These include the risk of discrimination and unfairness in access to jobs, as well as violations of privacy through the collection of large amounts of data about workers without their prior consent, both within workplaces and, when working remotely, sometimes within their homes.
Workers are also subjected to increasing psychological pressure and to restrictions on their freedom of expression and other rights, both inside and outside the workplace, as a result of unfair and non-transparent monitoring and surveillance policies. Finally, they face the risk of unwarranted penalties resulting from unclear or non-transparent evaluation mechanisms, which can lead to arbitrary dismissal or termination, being passed over for promotions and skill development opportunities, and errors in the calculation of paid leave.
Workers largely lack the legal protection and the collective bargaining mechanisms needed to confront these challenges. Tackling this situation requires a deeper understanding of the technologies currently used in workplaces, along with recommendations for amending policies and legislation to safeguard workers’ rights in the digital age.
This paper provides an overview of the new realities created by the use of digital technology in workplaces and the challenges and threats this poses to workers’ rights. The paper first addresses three key aspects: employment policies, monitoring and surveillance of workers, and performance appraisal systems.
The paper also highlights the key features of workers’ digital rights, aiming to achieve fairness in employment policies, protect the right to privacy, promote transparency in the use of technology in workplaces, and ensure justice in performance appraisal policies.
How Digital Technology Is Used in the Workplace
A report issued in 2022 revealed that the adoption of AI applications by economic entities worldwide had more than doubled over the five years preceding its release. The same report noted, however, that in the most recent of those years the proportion of adopting entities, relative to the total number of economic entities in different regions of the world, had held steady at around 50% to 60%.
This temporary stagnation was disrupted by the generative AI revolution of 2022, which began to bear fruit in the current year, 2024. Another report, published in May 2024, indicates that 65% of the representatives of economic entities who participated in a global survey confirmed that their workplaces regularly use generative AI applications.
The adoption of generative AI has pushed the overall use of AI applications among economic entities up to 72%. The report’s authors note that this growth is global, with more than two-thirds of survey respondents in every world region confirming that their workplaces use these applications. Across economic sectors, the professional services sector recorded the largest increase in the use of AI applications.
Employment Policies and Candidate Evaluation
In a 2018 LinkedIn report on recruiting trends, respondents to a survey conducted by the report’s authors stated that AI applications had revolutionized recruitment processes. About 67% affirmed that they preferred using AI applications because of the significant amount of time they save.
Interestingly, 43% of respondents stated that AI applications help eliminate human bias. Recruitment officials in several companies believe that using AI applications enhances diversity in terms of gender and race and helps overcome biases related to graduating from the most prestigious universities.
In practice, a Fortune report reviews some of the AI applications used by major companies in their recruitment processes. One of these applications, developed by a startup called Pymetrics, assesses the personality traits of job applicants through computer games.
These games are designed to measure abilities such as the capacity to remember and recall long numbers, or an individual’s tolerance for risk. The system evaluates candidates by comparing their performance across the various games with the results achieved by the top 250 employees at the same company.
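Pymetrics does not disclose how its scoring actually works, so the following is only a minimal sketch, with invented trait names and numbers, of the benchmarking logic described above: each person’s game results are reduced to a vector of trait scores, and candidates are compared against the average profile of the company’s top performers.

```python
import numpy as np

# Illustrative sketch only: Pymetrics does not publish its method. We assume
# each person's game results are summarized as a vector of trait scores
# (e.g., memory span, risk tolerance) and compare candidates against the
# average profile of the company's top performers.

def benchmark_profile(top_employee_scores: np.ndarray) -> np.ndarray:
    """Average trait vector of the top performers (e.g., the top 250)."""
    return top_employee_scores.mean(axis=0)

def candidate_similarity(candidate: np.ndarray, benchmark: np.ndarray) -> float:
    """Cosine similarity between a candidate's traits and the benchmark."""
    return float(np.dot(candidate, benchmark)
                 / (np.linalg.norm(candidate) * np.linalg.norm(benchmark)))

# Rows are top employees; columns are trait scores derived from the games.
top_scores = np.array([[0.90, 0.40, 0.70],
                       [0.80, 0.50, 0.60],
                       [0.85, 0.45, 0.65]])
benchmark = benchmark_profile(top_scores)
print(candidate_similarity(np.array([0.70, 0.60, 0.50]), benchmark))
```

A system of this kind simply rewards resemblance to incumbents, which is precisely why it can reproduce whatever biases shaped the existing workforce.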
The system marketed by Pymetrics is one of many that companies worldwide increasingly use today. What they all have in common is that none is built on a solid scientific foundation; at best, they rest on statistical estimates or on perceptions accumulated through experience. The trust that employers place in these systems stems from digital technology’s ability to collect ever-larger amounts of data and process it using AI applications.
The vast amount of data fed into these systems suggests that they cover as many of the variables relevant to evaluating job applicants as possible. Their processing capabilities, in turn, suggest that all the complex relationships between the different variables can be taken into account.
In fact, both suggestions are inaccurate. The selection of the data fed to AI systems rests on prior judgments about which variables influence a person’s suitability for a job, and those same judgments assign the variables their different weights.
Contrary to the perceptions or hopes of business owners, AI systems have shown a significant tendency to produce biased results regarding gender, race, etc. This is because the data these systems rely on is inherently biased; that is, it reflects the biases of the humans who produce it, whether directly and consciously or indirectly and unconsciously.
Attempts to overcome the bias of AI outputs by withholding some of the data these systems are fed can have some success in mitigating it. However, such data filtering has practical limits: it does not eliminate indirect biases hidden in the data in the form of relationships between data elements.
For example, references to a person’s color can be removed from the data used; however, a combination of other data points may still act as a proxy for it, ultimately producing outcomes that are biased by color all the same.
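A minimal simulation (all numbers invented) makes this proxy effect concrete: even after the protected attribute is removed from the training data, a strongly correlated feature, here a residential district, reproduces the original disparity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)  # protected attribute; removed before "training"
# Residential district correlates strongly (90%) with group membership.
district = np.where(rng.random(n) < 0.9, group, 1 - group)
# Historical hiring decisions were biased against group 1.
hired = (rng.random(n) < np.where(group == 0, 0.6, 0.3)).astype(int)

# A "blind" model sees only the district, never the protected attribute:
# it simply predicts the historical hire rate for each district.
rate_by_district = np.array([hired[district == d].mean() for d in (0, 1)])
pred = rate_by_district[district]

# Yet its predictions still differ sharply by protected group.
print("mean predicted score, group 0:", round(pred[group == 0].mean(), 2))
print("mean predicted score, group 1:", round(pred[group == 1].mean(), 2))
```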
Many AI-driven recruitment systems are criticized for their lack of transparency. Numerous individuals report that, at the end of a job application process, they either receive no response at all or are sent a vague rejection notice without any clear explanation for their disqualification. Employers often do not disclose—or, in some cases, are themselves unaware of—the specific data used by these systems to reach their decisions, how that data is processed, and what criteria were used for evaluation.
Monitoring and Surveillance in the Workplace
Monitoring workers in the workplace dates back to the beginning of the Industrial Revolution and the emergence of wage labor as the dominant form of production relations. Employers sought to exercise strict surveillance over their employees’ behavior throughout working hours, whether to maximize productivity or to protect their property from potential sabotage or theft. Modern studies began focusing on workplace surveillance in the 1960s, indicating that such attention preceded the advent of digital technology in this area.
In 1987, the United States Office of Technology Assessment, affiliated with the U.S. Congress, issued a report titled “The Electronic Supervisor: New Technologies, New Tensions.” This report expressed concerns that information technology might grant employers surveillance powers exceeding what is necessary for managing workplace operations.
This highlights that the use of digital technology to monitor workers in the workplace was among the earliest applications of this technology. It also shows that from the beginning, these practices have concerned policymakers.
One of the primary sources of concern highlighted in the report is that digital technology allows employers to collect a significant amount of data about their employees, potentially violating their right to privacy. These concerns about privacy violations have intensified over time due to the substantial advancements in monitoring and surveillance capabilities granted to employers by digital technology. Furthermore, constant and close monitoring of workers in the workplace represents a source of concern regarding restrictions on freedom of movement and freedom of expression.
A report published in 2007 by the American Management Association in collaboration with the ePolicy Institute revealed that over a quarter of surveyed employers had terminated employees for misuse of email services, while approximately one-third had terminated employees for misuse of the internet.
The report indicated that the detailed reasons for termination included excessive personal use, breaches of confidentiality rules, and viewing, downloading, or uploading inappropriate or offensive content. These reasons show that monitoring employees’ use of email and internet services involved accessing the content of messages and tracking browsing activity, which itself constitutes a violation of privacy.
The report also listed practices, including:
- Blocking access to certain websites.
- Tracking content, keystrokes, and the time spent at the keyboard.
- Storing and reviewing computer files.
- Monitoring blogs to see what employees write about the company.
- Monitoring social media platforms and tracking employee activities on them.
The report quoted legal experts as saying that email messages and other electronically stored information “create written business records that serve as the electronic equivalent of DNA evidence.” It also found that 24% of employers had had employee email subpoenaed by courts and regulators in the course of legal disputes, and that 15% had fought workplace lawsuits triggered by employee email.
The COVID-19 pandemic increased reliance on remote work, and employers have increasingly used digital technology to monitor their employees. According to a Gartner report published in 2022, employers’ use of worker-tracking tools had doubled since the pandemic began, reaching 60%, and was expected to rise to 70% over the following three years. A report by the virtual private network (VPN) service ExpressVPN likewise found that approximately 80% of employers had monitored their remote workers.
According to a BBC report, employers’ use of digital technology to monitor their employees has taken a new turn in recent years. The report cited experts who noted that companies have started using tools to collect more detailed data about workers’ communications, especially as most of these communications now occur through digital channels rather than face-to-face. Many companies have also begun gathering extensive biometric data from their employees, including using webcams to track eye movements as a measure of employee focus.
Performance Evaluation Policies
Performance evaluation systems are the complementary counterpart of monitoring and surveillance systems, though they emerged later. This delay is attributable to their reliance on the availability of substantial amounts of data, as well as on the computing capabilities and software technologies necessary to process it.
The advancement of big data and AI technologies has recently driven a rapid spread of performance appraisal systems in the workplace. Their rise has, in turn, been a key factor in the growing demand for monitoring and surveillance systems and in those systems’ tendency to collect ever more detailed data.
Digital performance evaluation systems use the data collected by electronic monitoring systems on workers within the workplace and, in some cases, outside of it. These systems process the data at various levels depending on the system’s complexity and the type of decisions it is intended to assist with or autonomously make.
The levels of data processing range from organizing it into aggregated statistical formats for comparing an individual worker’s performance over a specific period or between multiple workers to constructing a comprehensive profile of the worker. This profile includes assessments of personal and behavioral traits, predictions regarding their performance and skills evolution, and even their likelihood of leaving the job.
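The sketch below illustrates these two levels of processing, from simple aggregation to speculative inference. The field names and the attrition heuristic are invented for illustration; real systems are proprietary and far more opaque.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class WorkerProfile:
    avg_daily_output: float        # aggregated statistic
    output_trend: float            # change between first and second half
    predicted_attrition_risk: str  # speculative inference

def build_profile(daily_output: list[float]) -> WorkerProfile:
    half = len(daily_output) // 2
    trend = mean(daily_output[half:]) - mean(daily_output[:half])
    # Naive heuristic: a declining trend is read as a flight risk. This is
    # exactly the kind of impressionistic criterion the text warns about.
    risk = "high" if trend < -0.5 else "low"
    return WorkerProfile(mean(daily_output), trend, risk)

print(build_profile([10, 9.5, 9, 8, 7.5, 7]))
```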
The more complex the information a performance appraisal system attempts to infer, the weaker the scientific basis on which it rests, and the more it falls back on impressionistic, often arbitrary, evaluation criteria. Thus, the more heavily digital appraisal systems are relied upon for employment decisions, the less accurate the results they are likely to produce.
The varying weights an evaluation system assigns to different data sets can significantly influence the decisions it makes. In particular, giving too much weight to superficial performance indicators, such as leave rates or punctuality, can unfairly skew evaluation results, often at the expense of qualitative indicators such as loyalty to the workplace, capacity for innovation, and leadership qualities.
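A worked example with invented scores shows how the choice of weights alone can flip an evaluation’s outcome:

```python
# Two workers scored on the same indicators under two weighting schemes
# (all numbers invented for illustration).
indicators = ["punctuality", "innovation", "leadership"]
worker_a = {"punctuality": 0.95, "innovation": 0.40, "leadership": 0.50}
worker_b = {"punctuality": 0.70, "innovation": 0.90, "leadership": 0.85}

def score(worker, weights):
    return sum(worker[k] * weights[k] for k in indicators)

balanced = {"punctuality": 0.34, "innovation": 0.33, "leadership": 0.33}
skewed   = {"punctuality": 0.80, "innovation": 0.10, "leadership": 0.10}

for name, w in [("balanced", balanced), ("skewed", skewed)]:
    print(name, round(score(worker_a, w), 2), round(score(worker_b, w), 2))
# Balanced weights favor worker B; overweighting punctuality flips the
# ranking to worker A despite far weaker innovation and leadership scores.
```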
Shifting the burden of making employment-related decisions from human judgment to digital systems inevitably leads to an increased reliance on factors that can be measured or inferred from quantitative data. This shift often comes at the expense of factors that require personal judgment, which are typically qualitative in nature.
As a result, these systems can produce unfair decisions. Moreover, reliance on digital evaluations reshapes the relationships between supervisors and subordinates and may lead to a decline in the overall quality of social relations in the workplace.
The irresponsible use of workers’ personal data in performance appraisal systems can enable discrimination in employment decisions against socially vulnerable and marginalized groups, such as women and racial, religious, or gender minorities. The lack of transparency in these systems makes it difficult or even impossible to detect the biases inherent in their decisions, thereby hindering efforts to confront and eliminate such biases.
Fair Standards for Using Digital Technology in the Workplace
The pursuit of fair standards for using digital technology in the workplace will not be realistic unless it considers the prevailing trends in labor relations over recent decades. There is a general trend toward withdrawing the protective cover previously provided by labor unions and worker associations.
Most wage workers today do not have the opportunity to join unions that represent their interests. Additionally, the increasing use of technology turns many permanent jobs protected by labor laws into short-term contractual or temporary jobs.
Labor laws in countries around the world therefore need updates that take these factors into account; their absence necessarily undermines the implementation of any of the digital workers’ rights standards discussed in this section.
Labor laws must provide alternatives for workers to engage in collective bargaining and have their voices heard, even when they lack union organization/representation. These laws should also impose responsibilities on employers towards those contracted to perform temporary jobs. The absence of these safeguards in labor laws renders any digital labor rights standards meaningless because they would lack any obligation to enforce them.
Ensuring Fairness and Impartiality in Recruitment Policies
Labor laws should obligate employers to notify job applicants of the outcome of their applications. If an application is rejected, the applicant should be informed of the reasons for the rejection.
Employers should notify job applicants in advance if electronic tools or digital software are used for conducting job interviews or for evaluating and processing data collected during the recruitment process.
In particular, employers should notify job applicants in advance if an AI system will be partially or fully used to decide on their employment application. This notification should include a simplified description of the algorithm used and the data it will rely on.
Employers should not collect any data on job applicants beyond what is necessary to evaluate their job applications. Employers should be obliged to erase any digital data on job applicants once their applications are rejected and as soon as the decision is reached.
It is essential for laws to prohibit employers from sharing or selling the personal data of job applicants to third parties under any circumstances. This prohibition should also extend to any data generated using AI algorithms during the process of evaluating job applications.
Procedures should be mandated to filter the data fed into systems for evaluating job applications to exclude any personally identifiable information that could lead to discrimination, such as gender, race, nationality, religion, and similar attributes. Furthermore, relevant regulatory bodies should be granted the authority to review these data filtering criteria to ensure their adequacy.
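As an illustration, a minimal version of such a filtering step might look like the following sketch (the field names are assumptions). As noted earlier, removing explicit attributes does not eliminate proxies hidden in the remaining data, which is why regulatory review of the filtering criteria remains necessary.

```python
# Minimal sketch of a pre-processing filter; field names are assumptions.
PROTECTED_FIELDS = {"name", "gender", "race", "nationality", "religion", "age"}

def filter_application(application: dict) -> dict:
    """Strip explicitly protected attributes before any automated evaluation."""
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

raw = {"name": "A. Applicant", "gender": "F", "nationality": "EG",
       "years_experience": 6, "degree": "BSc", "skills": ["python", "sql"]}
print(filter_application(raw))
# {'years_experience': 6, 'degree': 'BSc', 'skills': ['python', 'sql']}
```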
Job applicants should be able to appeal against decisions to reject their applications if electronic systems are used during the evaluation process. Employers should be obliged to provide all relevant information about the data collected, how it is used, and the electronic systems used to decide on job applications to the bodies that will consider the appeal.
The Right to Privacy in the Workplace and Limits of Performance Monitoring
Currently, in the absence of specific legal regulations, employers can gather, or purchase from third parties, significant amounts of data about their employees, and can share or sell this data without restriction. As with consumer rights, workers should be entitled to legal protection governing the collection and use of their data by employers, along with the right to full control over their personal information.
Employers should not collect data about their workers and employees unless it is necessary and essential to accomplish the work required. The principle of data minimization applies here. This principle should be applied to personal identity data, biometric and health information, and data related to activities within the workplace, including productivity and algorithmic inferences about performance rates.
The principle of data minimization extends to workers’ online activities and social media. Unrestricted collection of workers’ data without necessity exposes them to risks such as data breaches and employers’ misuse of personal information.
Workers should have the right to access, correct, and download their data. Employers must be responsible for the immediate correction of any inaccurate data.
Workers’ data must be protected and secured against misuse. Employers should not be allowed to sell or share workers’ data with any third party under any circumstances; otherwise, the temptation to violate workers’ privacy by selling their data for financial profit would be immense. In particular, workers’ sensitive, biometric, or health-related data should only be shared if necessary for law enforcement purposes.
Employers should only use electronic surveillance for limited purposes that do not harm workers. This means that surveillance should only be used when it is necessary to enable key business functions, protect workers’ safety and security, or when required by legal obligations.
Monitoring and surveillance must be limited to the smallest possible number of workers. Monitoring systems must collect the minimum amount of data required and employ the least intrusive methods available to achieve their objectives. Productivity monitoring systems, in particular, should undergo the highest level of scrutiny and be reviewed by the regulatory bodies responsible for workplace health and safety, to ensure they are not used to push work rates to hazardous levels.
Transparency in the Use of Digital Technology in the Workplace
One of the most significant barriers to regulating the use of digital technology in the workplace is the lack of transparency in current practices. The use of technologies that depend on collecting and processing data in the workplace is still unfamiliar to workers and policymakers.
The lack of transparency often prevents workers from pursuing their rights in many cases, as they are unaware of when and how these rights were violated, even though they suffer the consequences. Job applicants cannot know why a digital algorithm rejected their application, just as some truck drivers may be unaware that their movements are being tracked via GPS.
Employers must provide their workers with clear and easily accessible notifications regarding data-intensive technologies employed in the workplace. These notifications should include an understandable description of the technology used, the types of data collected, and the rights and protections available to workers. Similarly, employers must provide equivalent notifications to regulatory and executive authorities responsible for implementing relevant laws.
Additional notifications are required when electronic monitoring and surveillance systems are used. These notifications need to include a description of which activities will be monitored, the method of monitoring, the data collected, the times and places where the monitoring will occur, and the purpose and justification for its necessity. The notification should also explain the functional decisions that may be affected by the monitoring.
In addition, it is necessary to notify workers if algorithms are used that affect their jobs or working conditions. These notifications should include a simple description of the algorithm, its purpose, the data it relies on, the type of output it produces, and how the employer will use this output to make decisions.
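Purely as an illustration, such an algorithmic notification could be represented as a structured record along the following lines; the field names are assumptions, not drawn from any existing law or system.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmNotice:
    plain_description: str         # what the algorithm does, in simple terms
    purpose: str                   # why it is used
    input_data: list[str]          # the data it relies on
    output_type: str               # the type of output it produces
    decisions_affected: list[str]  # how the employer uses the output

notice = AlgorithmNotice(
    plain_description="Ranks support tickets handled per hour",
    purpose="Weekly productivity reporting",
    input_data=["ticket timestamps", "ticket categories"],
    output_type="Per-worker weekly productivity score",
    decisions_affected=["bonus allocation", "shift assignment"],
)
print(notice)
```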
Ensuring Fairness in Performance Appraisal
Performance appraisal systems should not use any personal data of a worker, especially those that could lead to discrimination, such as gender, race, and religion. In general, performance appraisal systems should not rely in any way on data whose collection would require a violation of standards relating to protecting the right to privacy and personal data.
Performance appraisal systems should include appropriate human intervention in career decisions, especially those that significantly impact a worker’s career. Any decision to reward or penalize should be taken by a person who can be held responsible and accountable for it.
The worker should receive clear and formal notification of any employment decision, whether rewarding or punishing. This notification should include the reasons for the decision, how it was reached, the data used in the evaluation process that led to it, and a simple description of any algorithm used in this process. An organization’s internal systems must include ways to appeal against job decisions made or contributed to by digital performance appraisal systems.
Competent authorities also need to review the worker performance evaluation systems that technology companies market for general use or develop for specific clients, to ensure they comply with the required standards, and to certify them before use. Regulators should also have the right to audit these systems in operation and to verify that they have not been significantly modified after certification.
Workers in any economic establishment, through their union representatives or another representation mechanism, should have the right to accept or reject the use of any digital performance evaluation system before the employer begins using it in the workplace.
These worker representatives should have the opportunity to review the components of the performance evaluation system thoroughly, how it works, the data it will collect, how it will be processed, and a simplified description of any algorithms used.
Workers, individually or collectively, should be able to appeal the use of specific performance evaluation systems in their workplaces before an impartial body. This body should have the authority to require the employer to provide all information related to the performance evaluation system in question.
Additionally, the body should have the right to mandate actions such as modifying the performance evaluation system to comply with required standards, replacing it with another system, or compensating workers for any harm caused by using the evaluation system.
Conclusion
Workers’ rights increasingly depend on the ability to regulate the use of digital technology applications in the workplace. Many of these applications pose a threat to workers’ rights. Therefore, labor laws must address their current shortcomings to confront the new challenges posed by using digital technology in the workplace.
This paper has sought to provide a brief overview of the issues surrounding the use of digital technology in the workplace and its impact on workers’ rights. In its first section, the paper discussed some of the most critical issues through three main themes: employment policies, monitoring and surveillance policies, and performance evaluation policies.
In its second section, the paper provided recommendations for the basic standards that should be included in labor laws or relevant regulatory frameworks to protect workers’ rights against the misuse of digital technology in the workplace. These recommendations covered areas such as ensuring fairness in recruitment policies, protecting the right to privacy, establishing transparency in using digital technology in the workplace, and ensuring fairness in performance evaluation policies.