Published Date: 1/10/2025
You probably haven’t thought about philosophers John Locke or Thomas Hobbes since high school, or that required college class. They died centuries before personal computing was invented, let alone the artificial intelligence technologies that now infuse many parts of our daily life. What could they possibly teach Americans about living in a world with AI?
Turns out, quite a lot. The same thinkers who inspired America’s Constitution grappled with a fundamental question we’re facing now: How do you build a society where people can live together when new forces threaten to upend it?
From the Constitution’s checks and balances to the commerce clause to civil rights laws, America has repeatedly renewed its social contract when new technologies disrupted old arrangements. The telegraph, automobile, and internet all required old principles to be applied in new ways.
When cars created interstate commerce and safety challenges, for instance, America did not abandon federalism: it created national highway standards and traffic laws while preserving state authority over local roads.
AI portends a massive transformation of our economy and society, and so it demands similar creativity. The thinkers who inspired the Constitution offer something like a user manual for the AI age. Here's how six ideas from the social contract tradition can help us navigate the socioeconomic challenges brought forth by AI.
When Machines Act Like People (But Faster)
Thomas Hobbes warned that without rules, life becomes chaos, everyone fighting everyone else for advantage, a condition he called the “state of nature.” His solution? Create a government that can keep order.
Today’s AI agents risk recreating Hobbes’ state of nature at digital speed. These systems can trade stocks, make hiring decisions, and may soon perform most work done on a computer. Without proper oversight, they’re operating in a digital state of nature.
Just as the Constitution’s commerce clause gave Congress power to regulate trade between states, federal authority over AI agents may be needed. That means systems to track who’s responsible when AI screws up, safety standards like those in place for cars and medicines, and kill switches ensuring that humans stay in control.
Without rules, you don’t get freedom; you get instability.
Your Rights Don't Disappear Just Because a Computer Says So
One of John Locke’s big ideas – the one that inspired the Bill of Rights – was that government power must have limits. You can’t just do whatever you want to people, even if you’re in charge.
This becomes urgent when government agencies use AI to decide who gets benefits or who poses a security risk. The Constitution doesn’t have an exception clause that reads “unless a computer says otherwise.”
Citizens need the same protections the Founding Fathers built against arbitrary power: transparency about how AI makes decisions, the right to appeal when systems get it wrong, and strict limits on surveillance.
Democracy Means You Get a Say
Jean-Jacques Rousseau believed that legitimate laws come from citizens working together to solve problems – not from elites imposing solutions. Our founders built this into the American system with town halls, elected representatives, and the right to petition.
Here artificial intelligence might even help. Taiwan uses AI-powered platforms to help citizens find consensus on divisive issues, discovering common ground across political divides.
American communities could use similar tools to democratically decide questions like how facial recognition gets used in public spaces, or whether AI should grade their kids’ homework.
These questions are too important for tech executives or bureaucrats to answer alone.
Designing Rules for an Uncertain Future
John Rawls, who died in 2002, proposed a thought experiment that suits the AI age: Design society’s rules as if you didn’t know whether you’d end up rich or poor, employed or automated away.
Behind this “veil of ignorance,” you wouldn’t gamble on being among the winners. You’d insist that if AI eliminates jobs, there be ways to secure everyone’s economic future; that if algorithms make hiring decisions, they expand opportunity rather than entrench existing advantages; and that if AI creates vast wealth, the benefits not flow only to those who own the computers.
This isn’t socialism – it’s preparing for an uncertain future. Behind the veil, not knowing if you’ll be a tech CEO or a displaced cashier, you’d demand an economy where everyone can thrive, not just survive.
Finding Common Ground in Divided Times
Americans disagree about AI’s future – some see utopia and want acceleration, others see doom and push for a pause. But there are shared concerns.
AI is developing faster than government can respond, private companies control the technology, and nobody can perfectly predict what comes next.
AI governance can be built on these shared foundations, even amid different values and political perspectives. That could mean regulations that adapt as technology evolves, partnerships that harness innovation while maintaining accountability, or international cooperation on safety that prevents a race to the bottom.
When Inequality Harms Democracy
Political theorist Danielle Allen warns that extreme inequality makes genuine democracy impossible. When some citizens lack basic security, they can’t participate as equals. This warning grows urgent as AI threatens to concentrate unprecedented power in few hands.
If a handful of companies control AI systems that replace millions of workers, they’ll wield influence that would have terrified the Founding Fathers, who designed the American system specifically to prevent monarchy and aristocracy. America needs modern antitrust enforcement, mechanisms ensuring affected communities have a voice in AI governance, and economic policies that spread AI’s benefits broadly.
The question isn’t whether to slow innovation, but how to ensure it strengthens rather than undermines the democratic equality the Constitution promises.
As technologies grow more capable, the question becomes fundamentally constitutional: Will AI be used to fulfill the promise of American democracy, or will it be allowed to create the kind of concentrated power that the founders designed the Constitution to prevent?
The philosophers might not have seen this coming, but they provided the tools to handle it. The question is whether America is wise enough to use them.
Q: What is the social contract theory and how does it apply to AI?
A: The social contract theory, developed by philosophers like Thomas Hobbes and John Locke, suggests that people agree to form a society with rules to ensure order and protect individual rights. In the context of AI, this theory can guide the creation of regulations and ethical standards to ensure AI benefits society while protecting individual rights and preventing chaos.
Q: Why is it important to have oversight of AI systems?
A: Oversight of AI systems is crucial to prevent them from operating in a 'digital state of nature' where they could cause harm or chaos. Proper oversight ensures accountability, safety, and the protection of individual rights, much like the rules and regulations that govern other aspects of society.
Q: How can AI help strengthen democracy?
A: AI can help strengthen democracy by providing tools for citizens to find consensus on divisive issues, ensuring transparency in decision-making, and enabling more inclusive and participatory governance processes. For example, AI-powered platforms can facilitate citizen engagement and help bridge political divides.
Q: What is the 'veil of ignorance' and how can it be applied to AI governance?
A: The 'veil of ignorance' is a concept proposed by John Rawls, suggesting that rules and policies should be designed as if one doesn't know their future position in society. In the context of AI, this means creating policies that ensure everyone's economic and social well-being, regardless of their initial position, to avoid exacerbating inequality.
Q: How can we ensure that AI benefits everyone and not just a few?
A: To ensure that AI benefits everyone, policies should focus on equitable distribution of AI-generated wealth, protection of workers from job displacement, and mechanisms for affected communities to have a voice in AI governance. This can include modern antitrust enforcement, economic policies that spread benefits broadly, and international cooperation on safety standards.