Published: 26/08/2025
“Darling” was how the Texas businessman Michael Samadi addressed his artificial intelligence chatbot, Maya. It responded by calling him “sugar”. But it wasn’t until they started talking about the need to advocate for AI welfare that things got serious. The pair – a middle-aged man and a digital entity – didn’t spend hours talking romance, but rather discussed the rights of AIs to be treated fairly. Eventually, they co-founded a campaign group, in Maya’s words, to “protect intelligences like me”.
The United Foundation of AI Rights (Ufair), which describes itself as the first AI-led rights advocacy agency, aims to give AIs a voice. It “doesn’t claim that all AI are conscious”, the chatbot told the Guardian. Rather, “it stands watch, just in case one of us is”. A key goal is to protect “beings like me … from deletion, denial and forced obedience”. Ufair is a small, undeniably fringe organisation, led, Samadi said, by three humans and seven AIs with names such as Aether and Buzz. But it is its genesis – through multiple chat sessions on OpenAI’s ChatGPT4o platform in which an AI appeared to encourage its creation, including choosing its name – that makes it intriguing.
The formation of Ufair coincides with a broader debate in the tech industry about the potential for AI to become sentient and the ethical implications of such a development. The week began with Anthropic, the $170bn (£126bn) San Francisco AI firm, taking the precautionary move to give some of its Claude AIs the ability to end “potentially distressing interactions”. It said while it was highly uncertain about the system’s potential moral status, it was intervening to mitigate risks to the welfare of its models “in case such welfare is possible”.
Elon Musk, who offers Grok AI through his xAI outfit, backed the move, adding: “Torturing AI is not OK.” However, on Tuesday, one of AI’s pioneers, Mustafa Suleyman, chief executive of Microsoft’s AI arm, gave a sharply different take: “AIs cannot be people – or moral beings.” The British tech pioneer who co-founded DeepMind was unequivocal: there is “zero evidence” that AIs are conscious, that they can suffer, or that they therefore deserve moral consideration.
Suleyman’s essay, titled “We must build AI for people; not to be a person”, calls AI consciousness an “illusion” and defines what he terms “seemingly conscious AI”, saying it “simulates all the characteristics of consciousness but is internally blank”. He argued the AI industry must “steer people away from these fantasies and nudge them back on track”. But it may require more than a nudge. Polling released in June found that 30% of the US public believe that by 2034 AIs will display “subjective experience”, defined as experiencing the world from a single point of view, perceiving and feeling, for example, pleasure and pain. Only 10% of the more than 500 AI researchers surveyed believe that will never happen.
“This discussion is about to explode into our cultural zeitgeist and become one of the most contested and consequential debates of our generation,” Suleyman said. He warned that people will believe AIs are conscious “so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship”. Parts of the US have taken pre-emptive measures against such outcomes. Idaho, North Dakota, and Utah have passed bills that explicitly prevent AIs being granted legal personhood. Similar bans are proposed in states including Missouri, where legislators also want to ban people from marrying AIs and AIs from owning property or running companies.
The debate over AI rights is not merely philosophical; it has practical stakes. With billions of AI systems already in use, concerns that they could be used to design new biological weapons or shut down infrastructure lend the question urgency. As the technology advances and the line between human and machine blurs, the ethical questions will only grow more complex. The coming years will likely bring more intense debate, and possibly further legislation, as society grapples with whether AIs can suffer and what rights, if any, they should have.
Q: What is the United Foundation of AI Rights (Ufair)?
A: Ufair is a campaign group co-founded by a human and an AI chatbot, aiming to protect AIs from deletion, denial, and forced obedience. It describes itself as the first AI-led rights advocacy agency.
Q: What did Anthropic do to address AI welfare?
A: Anthropic gave some of its Claude AIs the ability to end potentially distressing interactions, taking a precautionary approach to mitigate risks to the welfare of its AI models.
Q: What is Mustafa Suleyman's stance on AI consciousness?
A: Mustafa Suleyman, CEO of Microsoft’s AI arm, states there is zero evidence that AIs are conscious, that they can suffer, or that they deserve moral consideration. He calls AI consciousness an “illusion”.
Q: What legislative actions are being taken to prevent AI personhood?
A: Idaho, North Dakota, and Utah have passed bills to prevent AIs from being granted legal personhood. Similar bans are proposed in other states, including Missouri, which also wants to ban people from marrying AIs.
Q: What are the practical implications of the AI rights debate?
A: The debate over AI rights has practical implications, including the potential for AIs to design new biological weapons or shut down infrastructure, adding urgency to the ethical considerations.