Published: 4/10/2025
As people embrace ChatGPT and other large language models, University of Michigan anthropologist Webb Keane says it's easy to imbue AI with a human, or even god-like, authority.
Keane studies the role of religion in people's ordinary lives, ethics, and morality. But he is also interested in how people anthropomorphize inanimate objects, particularly those that appear to use human language the way we do. Keane explains how people may start granting moral power to artificial intelligence, only to find that AI is simply a mirror of the people and corporations who built it.
The authority we give to AI has many sources, but the ones that particularly interest me tap into the ways human beings have granted authority to nonhuman things in various contexts over the course of history. We have a strong tendency to project intentions and deep thoughts onto things that appear animate, things that can use language, or signal systems, to communicate the way we do. We did this with the Delphic oracle in ancient Greece and with the I Ching in ancient China, and it looks to me like people are starting to do the same with algorithms, even simple things like a Fitbit or Spotify's recommendation algorithms, which again are saying things like, …
Q: What is anthropomorphization in the context of AI?
A: Anthropomorphization in the context of AI is the human tendency to attribute human-like qualities, intentions, and emotions to artificial intelligence systems. It can lead people to grant AI more authority and moral power than it warrants.
Q: Why do people trust AI like ChatGPT so much?
A: People trust AI like ChatGPT because it can use language in a way that appears human-like, which taps into our natural tendency to project intentions and deep thoughts onto things that seem animate or communicative. This can make AI seem more authoritative and trustworthy.
Q: What are the moral implications of giving AI more authority?
A: Giving AI more authority can have significant moral implications, such as outsourcing moral decisions to machines. For example, self-driving cars may need to make quick, morally complex decisions in dangerous situations, which can have life-or-death consequences.
Q: What role do corporations play in the development and use of AI?
A: Corporations play a crucial role in the development and use of AI. They have their own interests and can influence how AI is designed and implemented, which is why it's important to be wary of granting too much power to AI without understanding who is behind it.
Q: What is the main message of Webb Keane's book 'Animals, Robots, Gods'?
A: The main message of Webb Keane's book 'Animals, Robots, Gods' is that the tendency to anthropomorphize inanimate objects, including AI, is a common human behavior. This behavior can lead to attributing god-like authority to AI, which has significant moral and ethical implications.