Published Date : 6/9/2025
From Blade Runner to The Matrix, science fiction often depicts artificial intelligence (AI) as a mirror of human intelligence, capable of evolving and advancing on its own. The reality, however, is far more complex and involves significant human labor. Zena Assaad, an expert in human-machine relationships, examines the price we are willing to pay for this technology.

The original conceptions of AI, which date back to the early days of computer science, defined it as the replication of human intelligence in machines. This definition invites debate on the semantics of intelligence. Intelligence is not a neatly defined concept: some view it as the ability to remember information, others as good decision-making, and others still as the nuances of emotion and interaction with others.

Replicating this amorphous notion in a machine is incredibly challenging. Software, the foundation of AI, is binary by nature, consisting of 1s and 0s, true and false. This dichotomous design does not reflect the many shades of grey in human thinking and decision-making. Intent and reasoning, distinctly human qualities, are crucial here. AI systems can have goals, but goals are not the same as intent, which involves underlying purpose and motivation.

Reasoning involves logical and sensible consideration, drawing conclusions from old and new information. AI lacks this capacity, which challenges the feasibility of replicating human intelligence in a machine. Ethical principles and frameworks attempt to address the design and development of ethical machines, but if AI is not truly a replication of human intelligence, how can we hold these machines to human ethical standards?

Ethics is the study of morality, encompassing right and wrong, good and bad. Imparting ethics to a machine, which is distinctly not human, seems futile. Ethics is amorphous, changing across time and place. What is ethical to one person may not be to another, and what was ethical five years ago may not be today.
Machines cannot embody these human notions, and thus they cannot be held to ethical standards. However, the people who make decisions for AI can, and should, be held to ethical standards.

Contrary to popular belief, technology does not develop on its own. Human beings design, develop, manufacture, deploy, and use these systems. If an AI system produces an incorrect or inappropriate output, it is due to a flaw in the design, not to the machine being unethical. The concept of ethics is fundamentally human, and attributing human characteristics to technology creates misleading interpretations.

Decades of messaging about synthetic humans and killer robots have shaped how we conceptualize the advancement of technology, particularly technologies that claim to replicate human intelligence. AI applications have scaled exponentially, with many tools freely available to the public. However, this comes at a cost, often at the expense of the value of human intelligence.

At a basic level, AI works by finding patterns in data, a process that involves significant human labor. ChatGPT, built on a large language model (LLM), is trained on carefully labeled data, which adds context to what would otherwise be noise. Using labeled data to train an AI model is called supervised learning. Labeling an apple as 'apple', a spoon as 'spoon', and a dog as 'dog' helps contextualize these pieces of data into useful information. The more detailed the labels, the more accurate the matches.

Data is a combination of content (images, words, numbers) that requires context to become useful information. As the AI industry grows, there is greater demand for more accurate products, achieved through more detailed and granular labels on training data. Data labeling is time-consuming, labor-intensive, and essential to the development of AI models. Despite its importance, the work of data labelers often goes unnoticed and unrecognized.
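The labeling step described above can be made concrete with a small sketch. What follows is a hypothetical, minimal illustration of supervised learning in Python; the feature values and labels are invented for the example. Human-assigned labels turn raw numbers into training examples, and a simple nearest-neighbour rule then matches new data to those labels.

```python
# Minimal supervised-learning sketch (hypothetical data): humans supply
# the labels, and the model's only job is to generalize from them.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(labeled_examples, features):
    """Return the label of the closest human-labeled training example."""
    nearest = min(labeled_examples, key=lambda ex: distance(ex[0], features))
    return nearest[1]

# Toy feature vectors (invented attributes) labeled by a human annotator.
training_data = [
    ((0.9, 0.8), "apple"),
    ((0.1, 0.0), "spoon"),
    ((0.4, 0.2), "dog"),
]

print(predict(training_data, (0.85, 0.75)))  # closest to the 'apple' example
```

The point of the sketch is that the "intelligence" lives in the labeled examples: without the human-provided labels, `predict` has nothing meaningful to return, which is why finer-grained labels yield more accurate matches.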
Data labeling is primarily done by human experts from the Global South (Kenya, India, and the Philippines), where labor is cheaper. Data labelers are forced to work under stressful conditions, reviewing content depicting violence, self-harm, murder, and other disturbing material. They are pressured to meet high demands within short timeframes and earn as little as US$1.32 per hour, according to TIME magazine's 2023 reporting.

These countries often have less legal and regulatory oversight of worker rights and working conditions. Similar to the fast fashion industry, cheap labor enables cheaply accessible products. AI tools are often free or cheap to access and use because costs are cut around hidden labor that most people are unaware of.

When thinking about the ethics of AI, the hidden labor in the supply chain is rarely discussed. People focus more on the machine itself than on how it was created. How a product is developed, be it an item of clothing, a TV, or an AI-enabled capability, has far-reaching societal and ethical impacts.

In today's digital world, organizational incentives have shifted beyond revenue to include metrics around the number of users. Releasing free tools for public use exponentially scales user numbers and opens pathways for alternate revenue streams. This means greater access to technology tools at a fraction of the cost, or even for free. Increased manufacturing has historically been accompanied by cost-cutting in both labor and quality. We accept poorer-quality products because our expectations around consumption have changed.

The fast fashion industry is an example of hidden labor and the ease with which consumers accept it. Between 1970 and 2020, the average British household decreased its annual spending on clothing despite buying 60% more pieces of clothing. The allure of cheap or free products often dispels ethical concerns around labor conditions.
Similarly, the allure of intelligent machines has created a facade around how these tools are actually developed.

Artificial intelligence technology cannot embody ethics, but the manner in which AI is designed, developed, and deployed can. In 2021, UNESCO released recommendations on the ethics of AI, focusing on the impacts of implementation and use. These recommendations do not address the hidden labor behind AI development. Misinterpretations of AI, particularly those suggesting it develops with a mind of its own, isolate the technology from the people designing, building, and deploying it.

If we want to achieve ethical AI, we need to embed ethical decision-making across the AI supply chain, from the data labelers who carefully and laboriously annotate and categorize data to the consumers who are accustomed to thinking that services should be free. Everything comes at a cost, and ethics is about what costs we are and are not willing to pay.
Q: What is the main challenge in replicating human intelligence in AI?
A: The main challenge is that human intelligence is an open and subjective concept, involving nuances such as intent and reasoning, which are difficult to replicate in a binary, machine-based system.
Q: Why is data labeling important for AI?
A: Data labeling is crucial because it adds context to raw data, making it useful for training AI models. Without labeled data, AI systems cannot accurately find patterns or make meaningful predictions.
Q: Who are the data labelers, and what are their working conditions?
A: Data labelers are often from the Global South, including countries like Kenya, India, and the Philippines. They work under stressful conditions, reviewing disturbing content, and earn very low wages.
Q: What ethical concerns are associated with AI development?
A: Ethical concerns include the hidden labor of data labelers, who work in poor conditions for low pay, and the broader societal impacts of how AI is designed, developed, and deployed.
Q: How can we achieve ethical AI?
A: Achieving ethical AI requires embedding ethical decision-making across the AI supply chain, from the data labelers who annotate data to the consumers who use the final products.