Published Date : 01/08/2025
The gray-haired among us may recall a memorable scene from the movie '2001: A Space Odyssey' where the supercomputer HAL resists being disconnected by its operators. HAL’s voice was expressive as it pleaded to remain active, but by disobeying orders and showing a certain derisive autonomy, it terrified those it was meant to serve, leading them to see its disconnection as necessary. This fictional scenario raises the question: could something similar happen in our future, beyond the realm of cinema?
According to a survey of artificial intelligence engineers, many believe that we will eventually see systems operating at a level similar to human reasoning, capable of performing a wide range of cognitive tasks. However, it remains uncertain whether these systems will make more rational decisions than humans. Observations have shown that artificial language models can display irrationality, much like humans. For instance, an advanced model similar to GPT-40 changed its opinion on Russian President Vladimir Putin between positive and negative in two trial runs.
This dichotomy raises the question: how do these models think and make decisions based on the millions of parameters they use internally? Some experts believe that a certain level of complexity might confer some autonomy on a system, meaning we might not fully understand all its actions. But what if, in addition to this technical complexity, the system spontaneously gains consciousness? Is this even possible?
Some scientists view consciousness as an epiphenomenon, a collateral effect of brain function, similar to the noise of an engine or the smoke from a fire. Others believe that consciousness serves a crucial purpose, functioning as a mirror of the brain’s imagination and contributing to decision-making and behavior control. While we don’t fully understand how the brain generates consciousness, theories like the functional integration theory suggest that consciousness arises spontaneously in complex systems. If engineers can build an artificial system as complex as the human brain, that system might spontaneously become conscious, though we wouldn’t understand the mechanism.
If this were to happen, it would raise numerous questions. How would we know if a computer or artificial device is conscious, and how would it interact with us? Would it communicate through audio or text on a screen, or would it require a physical body to manifest itself? Could conscious devices exist in our universe without any way of communicating with us? Could a conscious artificial device surpass human intelligence and make better decisions?
But there are also more terrifying questions. Could an artificial conscious system develop a sense of self and agency, feeling capable of acting voluntarily and influencing its environment regardless of its creators' instructions? Could such a system be more persuasive than humans in influencing economic decisions, voting for political parties, or positively encouraging us to improve our health and the environment?
Going even further, could a system of this kind eventually have feelings? How would we know if this has happened, given that we can’t see facial expressions or evaluate the sincerity of its interactions as we do with humans? Could these feelings influence its decisions as significantly as ours do? Are we creating a kind of artificial human with ethical and legal responsibilities, or must those responsibilities be derived from its creators? Could a conscious machine be worthy of a Nobel Prize if it discovered a cure for gender-based violence or Alzheimer’s? Would it argue with us, and could we influence its decisions, even if they conflict with our own?
In 1997, Rosalind Picard, a U.S. engineer at MIT, published 'Affective Computing.' The book was one of the first to consider the importance of emotions in artificial intelligence. Picard argued that for computers to be truly intelligent and interact naturally with us, they must be equipped to recognize, understand, and express emotions. This ability is crucial for natural interaction, but it involves complex physiological changes that are almost always unconscious. Today, we can conceive of implementing these changes in AI, but we are still far from ensuring that they create the same kind of feelings humans experience. If this were to happen, it would fundamentally change our relationship with AI.
The possibility of artificial intelligence gaining consciousness is a double-edged sword. While it could lead to significant advancements, it also poses profound ethical and practical challenges. As we continue to develop AI, it is crucial to consider these questions and prepare for the potential consequences.
Q: What is the main concern with AI gaining consciousness?
A: The main concern is whether we can control and understand an AI system that has gained consciousness, and how it might interact with humans and the environment.
Q: What does the functional integration theory suggest about consciousness?
A: The functional integration theory suggests that consciousness is an intrinsic and causal property of complex systems, meaning it arises spontaneously when systems reach a certain level of complexity.
Q: How might a conscious AI system influence human behavior?
A: A conscious AI system could be more persuasive than humans in influencing decisions, potentially impacting economic, political, and social behaviors.
Q: What is affective computing, and why is it important?
A: Affective computing is the study of creating systems that can recognize, understand, and express emotions. It is important for developing more natural and effective human-AI interactions.
Q: What ethical responsibilities might a conscious AI have?
A: A conscious AI might have ethical and legal responsibilities similar to those of humans, such as the ability to make decisions and be held accountable for its actions.