Artificial intelligence (AI) is revolutionizing many aspects of human life, from providing rapid medical diagnoses to facilitating interaction between people regardless of their location. While it helps automate processes and save time and resources, the ethical implications of AI in the workplace are worth considering.
AI systems are technologies built on machine learning and data analysis, with significant potential to improve diverse aspects of professional life. The recent emergence of tools such as ChatGPT has sparked great interest in the area and has already led many companies to consider AI-driven tools for areas such as recruiting, customer service, and human resources.
However, the indiscriminate use of these solutions has ethical implications. When performing trend-based data analysis, the suggestions and predictions of these tools can replicate discriminatory biases stemming from stereotypical representations, violate people’s privacy, and even jeopardize their dignity and self-realization.
Therefore, organizations such as UNESCO and the OECD, as well as some leaders in developing technological tools, have worked to recognize and address the human rights risks that AI can pose in the workplace, proposing policies employers should consider when implementing any AI solutions. These are the most relevant considerations.
The ethical implications of AI in the workplace
AI is becoming a popular resource in companies across all industries as it helps improve efficiency, save time, and reduce costs in the processes where it is implemented. However, it also represents an ongoing concern due to the over-reliance on these tools and their impact on the workforce.
From job losses to biased or inaccurate information and threats to employee privacy, these technological solutions raise issues that are best weighed before implementing any system. These are the main ethical implications of AI in the workplace:
Biases based on discriminatory tendencies
Although artificial intelligence can help reduce certain biases related to race, age, and gender during recruitment, it can also multiply and systematize existing human prejudices rooted in discriminatory tendencies.
While these systems are often marketed as objective alternatives to traditional recruitment processes, where human decisions shaped by stereotypes may intervene, they can also be configured in ways that replicate those bad practices.
The biases caused by AI that infringe on ethics in the workplace stem from the choice of specific parameters and variables used to train these systems. For example, when recruiting for an engineering role, a system can encode a gender or age preference that benefits only part of the population rather than all candidates with the relevant knowledge and skills.
Furthermore, according to the OECD working paper "Using Artificial Intelligence in the Workplace: What are the main ethical risks?", automated discrimination is more abstract, subtle, intangible, and difficult to detect, which calls into question the legal protection offered by legislation in this area.
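The mechanism described above can be made concrete with a small sketch. The scoring function and its weights below are purely hypothetical, not taken from any real recruiting system; they only illustrate how one chosen variable (age) can silently penalize equally qualified candidates.

```python
# Hypothetical illustration: a candidate-scoring model whose chosen
# variables encode an age preference. Weights are invented for the
# example and do not represent any real product.

def score(candidate: dict) -> float:
    s = 2.0 * candidate["skills"]              # job-relevant signal
    s += 1.0 * candidate["experience"]         # job-relevant signal
    s -= 0.5 * max(0, candidate["age"] - 35)   # biased term: penalizes older candidates
    return s

young = {"skills": 8, "experience": 5, "age": 30}
older = {"skills": 8, "experience": 5, "age": 50}

print(score(young))  # 21.0
print(score(older))  # 13.5 -- identical qualifications, lower score
```

Note that the bias need not be this explicit: a proxy variable correlated with a protected attribute (for instance, employment gaps) can produce the same effect even when age itself is removed from the data.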
Vulnerability to privacy
Privacy is a human right that helps people establish boundaries to limit who has access to their bodies, places, and objects, as well as being essential for developing personality and protecting human dignity.
Data collection poses challenges in terms of respecting people’s privacy. From agreeing to terms and conditions for connecting to the corporate internet to using work tools, companies have access to a lot of information about their employees that, if not protected, can be exposed to a data breach.
There are also biometric tracking devices, such as fingerprint entry verifiers or facial recognition systems, that collect personal data. Many people use the same biometric data to access personal applications, such as banking apps, or to pay for online services, making a workplace data breach especially damaging.
Also, with the rise of remote work, there is more and more surveillance software, of which many employees are unaware, that activates the cameras and microphones of employees' computers to check whether they are working, or even tracks their private emails, network activity, and location outside working hours.
Inequality of opportunities (massive layoffs)
From the popularization of the steam engine, electric power, and internal combustion engines to the internet boom, social networks, and technology solutions, machines have displaced thousands of workers from their jobs.
While mechanized jobs have been the most threatened, with the arrival of artificial intelligence tools, many professionals in creative sectors fear losing their jobs, as these bots can solve tasks in a matter of seconds, such as writing texts, generating images, and even composing musical pieces.
Human talent cannot be fully replaced, but because these tools deliver in seconds results that take people hours, they have provoked in workers a feeling known as AI anxiety: the fear of losing one's job and being replaced by an artificial intelligence system.
Reducing worker autonomy and agency
When trained with well-designed algorithms, artificial intelligence can assist in workplace decision-making, generate predictions, and simplify future processes. However, relying entirely on the results it yields can negatively affect workers' autonomy and agency.
By relying on an application or software that tells employees how to do their work, whether mechanical or creative, workers can reduce their ability to innovate, undermining their dignity in the workplace.
When an employee is told how to think, their problem-solving ability is challenged, which can lead to feelings of rejection towards the organization they are in, lack of belonging, and, consequently, affect their daily performance and even motivate them to quit their job.
In addition to the reduction of autonomy, there is the lack of recognition of so-called 'ghost workers': students in the United States, workers in Canada, and even Venezuelan immigrants in Colombia who perform digital 'piecework' such as manually tagging photos and videos, transcribing audio, and categorizing text so that the AI can work its magic.
Excessive pressure on workers
AI systems in the workplace can improve employee productivity by becoming tools that make their performance more efficient but also subject them to excessive accountability to meet specific performance evaluation parameters.
In various industries, whether face-to-face or remote, programs are being adopted that evaluate employees’ work in real-time, whether by their immediate bosses, managers, or even customers, increasing pressure and stress on the workforce.
In addition, these evaluation systems usually consider only the quantity of work, not its quality, and rely on metrics and parameters that ignore many variables influencing a worker's performance. This generates a sense of alienation and decreases employees' commitment to the job.
According to the same OECD paper, the lack of transparency and explainability of decisions based on AI systems also contributes to reduced worker agency. For example, when no explanation is given for decisions that affect them, workers are unable to adapt their behavior to improve their performance.
Physical safety in the workplace
Artificial intelligence systems in the workplace could put people’s health and integrity at risk. While AI enables hazard and mental health monitoring of employees, it can also dehumanize workers and make them feel they have little control over their jobs, translating into distrust, anxiety, and physical and mental problems, such as musculoskeletal and cardiovascular disorders.
Although AI has been implemented to prevent and reduce workplace accidents, it only partially eliminates them, and relying solely on these technological solutions to create safe workspaces could introduce more risks rather than minimize them.
In an accident caused by an AI failure, who is legally responsible: the employer or the software developer? Questions like these have led to the implementation of "AI audits" or "algorithmic audits" to evaluate AI systems and ensure they comply with the law and with reliability principles.
While these auditing tools are still in development, they should address the full ethical implications of AI in the workplace: eliminating bias, protecting employee privacy and dignity, and promoting fairness, transparency, and accountability in the implemented systems.
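To give a sense of what one check in such an algorithmic audit might look like, the sketch below computes a disparate-impact ratio between two groups' selection rates, in the spirit of the "four-fifths rule" used in US employment-selection guidance. The data, group labels, and 0.8 threshold are illustrative assumptions, not a complete or authoritative audit.

```python
# Minimal sketch of one audit check: compare selection rates between
# two groups and flag the system if the ratio falls below a chosen
# threshold (here 0.8, echoing the four-fifths rule). Data is invented.

def selection_rate(outcomes: list) -> float:
    # Fraction of candidates in the group who advanced (True).
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list, group_b: list) -> float:
    # Ratio of the lower selection rate to the higher one (1.0 = parity).
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical screening outcomes (True = candidate advanced).
group_a = [True, True, True, False, True]    # selection rate 0.8
group_b = [True, False, False, False, True]  # selection rate 0.4

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))   # 0.5
print(ratio < 0.8)       # True -- below threshold, flag for human review
```

A real audit would go far beyond this single metric, covering data provenance, explainability of individual decisions, and privacy safeguards, but even a simple check like this makes discriminatory outcomes measurable rather than anecdotal.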