Published Date: 02/11/2025
The increasing sophistication of artificial intelligence (AI) is compelling society to reconsider traditional definitions of personhood. A team led by Joel Z. Leibo, Alexander Sasha Vezhnevets, and William A. Cunningham from Google DeepMind and the University of Toronto has addressed this challenge with a novel framework. Their work moves beyond philosophical debates about AI consciousness and instead proposes that personhood functions as a practical set of obligations, rights, and responsibilities that societies assign to entities to manage governance effectively.
This approach allows for the creation of adaptable solutions, such as enabling AI to enter into contracts and be held accountable, without requiring resolution of complex questions about the internal states of artificial intelligence. By treating personhood as a flexible tool rather than a fixed property, the researchers, including Stanley M. Bileschi from Google DeepMind, offer a pragmatic path towards integrating increasingly capable AI agents into the fabric of society and ensuring responsible innovation.
The researchers propose a pragmatic framework for navigating the growing diversity of AI agents by treating personhood not as a fixed quality to be discovered, but as a flexible bundle of obligations, encompassing both rights and responsibilities, that societies confer upon entities for specific reasons, chiefly to solve concrete governance problems. They argue that this bundle can be adapted into bespoke solutions for different contexts, yielding practical tools, such as facilitating AI contracting by creating a target “individual” that can be sanctioned, without needing to resolve debates about an AI’s consciousness or rationality.
The study examines how individuals come to occupy social roles and considers decentralized digital identity technology, identifying pitfalls where design choices can exploit human social tendencies, as well as cases where conferring obligations can ensure accountability. The research highlights that societal change often occurs through discrete jumps between stable states, driven by collective sense-making processes that determine which equilibrium is selected next.
Evidence shows that both organic and deliberate actions, such as legal changes during the COVID-19 pandemic or the staggered state-by-state legalization of same-sex marriage in the United States, can shape norm-change dynamics, demonstrating that governments play an active role rather than simply following cultural shifts. The researchers emphasize the historical contingency of personhood, noting that the Western conception of the individual as a locus of moral worth is not universal. The study also distinguishes personhood from property: although both are bundles of obligations, a personhood bundle needs only one address (the person or agent itself), whereas a property bundle requires two, the owner and the asset.
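To make the one-address versus two-address distinction concrete, the following minimal sketch models the two kinds of bundles as simple data structures. It is an illustrative assumption, not the authors’ formalism; all type and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: the paper describes personhood and property as
# "bundles of obligations"; the classes and fields below are assumptions
# meant to convey the structure, not a formalism from the paper.

@dataclass
class Obligation:
    description: str   # e.g. "may enter contracts", "liable for damages"
    is_right: bool     # True for a right, False for a responsibility

@dataclass
class PersonhoodBundle:
    # A personhood bundle points at a single address: the entity itself.
    person_address: str
    obligations: List[Obligation] = field(default_factory=list)

@dataclass
class PropertyBundle:
    # A property bundle needs two addresses: the owner and the owned asset.
    owner_address: str
    asset_address: str
    obligations: List[Obligation] = field(default_factory=list)

# Example: an AI agent granted a narrow, bespoke bundle for contracting.
agent_bundle = PersonhoodBundle(
    person_address="agent:contract-negotiator-01",
    obligations=[
        Obligation("may enter into contracts", is_right=True),
        Obligation("can be sanctioned for breach of contract", is_right=False),
    ],
)
```

The point of the sketch is simply that the bundle itself is the unit of governance: what it contains, and how many addresses it binds, can be tailored to the context without deciding what the addressed entity ultimately is.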
Analysis reveals that WEIRD (Western, Educated, Industrialized, Rich, Democratic) cultures uniquely prioritize individual humans as the ultimate source of moral worth, a feature not consistently present throughout history or across all cultures. For example, Aristotle excluded women and slaves from full moral and political participation. Contemporary WEIRD societies have expanded inclusion within this moral circle over time, extending rights and responsibilities to increasingly diverse groups of individuals.
This research proposes a new framework for understanding personhood in the context of increasingly sophisticated artificial intelligence. Rather than seeking to define what an AI is, the work shifts the focus to how AI can be usefully identified and assigned obligations within specific contexts. The authors argue that personhood is not an inherent quality but a flexible set of rights and responsibilities that societies create to address practical governance problems. This pragmatic approach allows for tailored solutions, avoiding the need to classify AI as either fully possessing personhood or being mere property.
By treating personhood as a contingent vocabulary, the research offers a way to navigate the challenges of integrating AI into society without relying on potentially intractable debates about consciousness or rationality. This framework enables the assignment of obligations to AI agents, facilitating accountability and resolving conflicts that may arise between humans and AI, or among AI themselves. The authors demonstrate that this approach is particularly valuable in situations where AI ownership and autonomy intersect, offering a more nuanced alternative to rigid, all-or-nothing classifications.

The authors acknowledge that this pragmatic view does not offer a universal definition of personhood but rather a functional approach to assigning rights and responsibilities. Future work, they suggest, could explore the specific applications of this framework in various contexts and further refine the criteria for assigning obligations to AI agents. This research provides a valuable contribution to the ongoing discussion about the ethical and legal implications of artificial intelligence, offering a flexible and practical pathway for integrating these powerful technologies into society.
Q: What is the main focus of the new AI personhood framework?
A: The main focus of the new AI personhood framework is to treat personhood as a flexible set of obligations, rights, and responsibilities that societies can assign to AI entities to address practical governance problems, without requiring resolution of philosophical debates about AI consciousness.
Q: How does the framework help in integrating AI into society?
A: The framework helps in integrating AI into society by providing a pragmatic approach to assigning obligations and rights to AI, enabling AI agents to enter into contracts and be held accountable, thus facilitating responsible innovation and resolving conflicts between humans and AI.
Q: What is the significance of treating personhood as a flexible tool?
A: Treating personhood as a flexible tool allows for tailored solutions to specific governance problems, avoiding the need to classify AI as either fully possessing personhood or being mere property, and enabling more nuanced and adaptable governance of AI.
Q: How does the framework address the historical contingency of personhood?
A: The framework addresses the historical contingency of personhood by recognizing that the Western conception of the individual as a locus of moral worth is not universal and has evolved over time, expanding to include more diverse groups of individuals.
Q: What are the potential applications of this framework in the future?
A: Future applications of this framework could explore specific contexts where AI personhood is relevant, such as legal, ethical, and social settings, and further refine the criteria for assigning obligations to AI agents to ensure their responsible and effective integration into society.