Published Date: 15-06-2025
The UK government’s artificial intelligence (AI) tool, known as Humphrey, is built on models from OpenAI, Anthropic, and Google, revealing the increasing reliance on big tech within the public sector. Ministers have pinned their hopes on rolling out AI across the civil service to enhance efficiency, with plans to train all officials in England and Wales in the toolkit.
Ministers have staked the future of civil service reform on the widespread adoption of AI, arguing it will improve efficiency and free civil servants to focus on more critical tasks. However, it is understood that the government does not have overarching commercial agreements with the big tech companies for AI; instead, it uses a pay-as-you-go model through its existing cloud contracts. This approach allows the government to switch between tools as they improve and become more competitive.
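The article does not describe how this switchable, pay-as-you-go arrangement is implemented in practice. As a purely illustrative sketch, one common pattern is a provider-agnostic routing layer, where a configuration table maps each task to whichever hosted model is currently preferred and records per-call usage for billing. All model names and prices below are assumptions for illustration, not details of the government's Humphrey system, and real calls would go through each vendor's own SDK.

```python
# Hypothetical sketch: routing tasks to interchangeable hosted models under
# pay-as-you-go billing. Model names and prices are illustrative assumptions,
# not details of the government's actual Humphrey deployment.
from dataclasses import dataclass


@dataclass
class ModelOption:
    provider: str               # e.g. "OpenAI", "Anthropic", "Google"
    model: str                  # vendor model identifier (assumed)
    price_per_1k_tokens: float  # assumed illustrative price, in pence


# A routing table that can be updated as models improve or prices fall,
# without changing any of the calling code.
ROUTING = {
    "consultation_analysis": ModelOption("OpenAI", "gpt-4o", 0.4),
    "briefing_support":      ModelOption("Anthropic", "claude-3-5-sonnet", 0.5),
    "meeting_notes":         ModelOption("Google", "gemini-1.5-pro", 0.3),
}


def run_task(task: str, prompt: str) -> str:
    """Dispatch a prompt to whichever model is currently configured for the task.

    In a real system this is where the vendor SDK call would go; here we only
    record which model would be billed, to illustrate the switchable design.
    """
    option = ROUTING[task]
    tokens = len(prompt.split())  # crude stand-in for a proper token count
    cost = tokens / 1000 * option.price_per_1k_tokens
    return f"[{option.provider}/{option.model}] ~{cost:.3f}p for {tokens} tokens"


if __name__ == "__main__":
    print(run_task("meeting_notes", "Summarise the action points from today's meeting."))
```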
Critics are concerned about the rapid and extensive integration of big tech AI into the heart of government, especially given the ongoing public debate about the technology’s use of copyrighted material. The government has been embroiled in a battle with critics in the House of Lords over whether AI is unfairly being trained on creative material without credit or compensation. The Data Bill, which allows copyrighted material to be used unless the rights holder opts out, passed its final stage this week, despite sustained opposition from those advocating stronger protections.
The issue has sparked a fierce backlash from the creative sector, with artists such as Elton John, Tom Stoppard, Paul McCartney, and Kate Bush joining a campaign to protect copyrighted material. A freedom of information request revealed that the government’s Consult, Lex, and Parlex tools, designed to analyze consultations and legislative changes, use base models from OpenAI’s GPT, while the Redbox tool, which assists civil servants with everyday tasks like preparing briefs, uses OpenAI GPT, Anthropic’s Claude, and Google Gemini.
Ed Newton-Rex, the chief executive of Fairly Trained and a campaigner against AI being trained on copyrighted material, obtained the FoI response and expressed concerns about a conflict of interest. He stated, “The government can’t effectively regulate these companies if it is simultaneously baking them into its inner workings as rapidly as possible. These AI models are built via the unpaid exploitation of creatives’ work. AI makes a ton of mistakes, so we should expect these mistakes to start showing up in the government’s work. AI is so well known for ‘hallucinating’ – that is, getting things wrong – that I think the government should be keeping transparent records of Humphrey’s mistakes, so that its continuing use can be periodically reevaluated.”
Shami Chakrabarti, a Labour peer and civil liberties campaigner, also urged caution, highlighting the potential for biases and inaccuracies. She cited the Horizon computer system, which led to miscarriages of justice for post office operators, as a cautionary tale.
Whitehall sources explained that the Humphrey tools work in different ways, and users can take various approaches to tackle inaccuracies, or “hallucinations.” The government continually publishes evaluations of the technology’s accuracy in trials. An AI playbook for government also provides guidance to help officials use the technology effectively and ensure people retain control over decisions at the right stages.
The costs of using AI in government are expected to grow as Humphrey is rolled out further, but officials note that per-use prices across the industry have trended downwards as models become more efficient. For example, large projects such as the Scottish government’s use of AI to analyze consultation responses have cost less than £50 and saved many hours of work. Using the government’s AI Minute software to take notes for a one-hour meeting costs less than 50p and saves officials an hour of admin each time.
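To put those per-use figures in context, the back-of-the-envelope arithmetic below compares the per-meeting tool cost with the staff time it replaces. The 50p and one-hour figures come from the article; the hourly staff cost is an assumed placeholder, not an official number.

```python
# Illustrative arithmetic only: the 50p-per-meeting and one-hour-saved figures
# come from the article; the hourly staff cost is an assumed placeholder.
MINUTES_TOOL_COST_PER_MEETING = 0.50  # pounds, per one-hour meeting (article figure)
ASSUMED_STAFF_COST_PER_HOUR = 25.00   # pounds, hypothetical loaded hourly cost

hours_saved_per_meeting = 1.0         # article: "saves officials an hour of admin"
saving = hours_saved_per_meeting * ASSUMED_STAFF_COST_PER_HOUR - MINUTES_TOOL_COST_PER_MEETING
print(f"Net saving per meeting under these assumptions: £{saving:.2f}")
# Net saving per meeting under these assumptions: £24.50
```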
A spokesperson from the Department for Science, Innovation and Technology said, “AI has immense potential to make public services more efficient by completing basic admin tasks, allowing experts to focus on the important work they are hired to deliver. Our use of this technology in no way limits our ability to regulate it, just as the NHS both procures medicines and robustly regulates them. Humphrey, our package of AI tools for civil servants, is built by AI experts in government – keeping costs low as we experiment with what works best.”
When the Guardian asked ChatGPT about the base models used for the Humphrey AI toolkit and whether OpenAI was involved, it replied that the information was not available. At the time the tool was announced earlier this year, the government said its strategy for spending £23bn a year on technology contracts would be changed to boost opportunities for smaller tech startups.
Q: What is the Humphrey AI tool?
A: Humphrey is an AI tool developed by the UK government to improve efficiency in the civil service. It uses models from OpenAI, Anthropic, and Google.
Q: Why is there concern about the Humphrey AI tool?
A: There are concerns about the government's reliance on big tech companies and the ethical use of copyrighted material in AI models.
Q: How does the government plan to use Humphrey?
A: The government plans to roll out Humphrey across the civil service, with training for officials in England and Wales, to improve efficiency and allow civil servants to focus on more critical tasks.
Q: What is the controversy surrounding AI and copyrighted material?
A: The controversy centers around whether AI is unfairly trained on creative material without credit or compensation, leading to a backlash from the creative sector.
Q: How is the government addressing the potential risks of AI inaccuracies?
A: The government is continuously evaluating the accuracy of AI tools and has published guidance to help officials use the technology effectively and ensure control over decisions.