Published: 07/06/2025
UK ministers have delayed plans to regulate artificial intelligence (AI) by at least a year, opting instead for a more comprehensive bill that addresses safety, copyright, and other concerns.
Peter Kyle, the technology secretary, intends to introduce the bill in the next parliamentary session, which is likely to begin in May 2026.
This delay has raised concerns about the timeline for regulating AI, a technology that is advancing rapidly. Labour had initially planned to introduce a shorter, narrowly drafted AI bill within months of entering office, focusing on large language models like ChatGPT. The legislation would have required companies to hand over their models for testing by the UK’s AI Security Institute, aiming to address concerns that advanced AI models could pose risks to humanity.
However, the bill was delayed as ministers opted to align with Donald Trump’s administration in the US, fearing that stringent regulation might weaken the UK’s appeal to AI companies. Now, the government plans to include copyright rules for AI companies as part of the comprehensive AI bill.
“We feel we can use that vehicle to find a solution on copyright,” a government source said. “We’ve been having meetings with both creators and tech people and there are interesting ideas on moving forward. That work will begin in earnest once the data bill passes.”
The government is currently embroiled in a standoff with the House of Lords over copyright rules in a separate data bill. This bill would allow AI companies to train their models using copyrighted material unless the rights holder opts out. This has sparked a fierce backlash from the creative sector, with prominent artists such as Elton John, Paul McCartney, and Kate Bush joining a campaign to oppose the changes.
This week, peers backed an amendment to the data bill that would require AI companies to disclose whether they are using copyrighted material to train their models, with the aim of enforcing existing copyright law. Despite this, ministers have refused to back down, even though Kyle has expressed regret about the government’s approach to the changes. The government insists the data bill is not the right vehicle for the copyright issue and has promised to publish an economic impact assessment and a series of technical reports on copyright and AI.
In a letter to MPs on Saturday, Kyle made a further commitment to establish a cross-party working group of parliamentarians on AI and copyright. Beeban Kidron, the film director and cross-bench peer who has been campaigning on behalf of the creative sector, said on Friday that ministers “have shafted the creative industries, and they have proved willing to decimate the UK’s second-biggest industrial sector.”
Kyle told the Commons last month that AI and copyright should be dealt with as part of a separate “comprehensive” bill. Most of the UK public (88%) believe the government should have the power to stop the use of an AI product if it is deemed to pose a serious risk, according to a survey published by the Ada Lovelace Institute and the Alan Turing Institute in March. More than 75% said the government or regulators should oversee AI safety rather than private companies alone.
Scott Singer, an AI expert at the Carnegie Endowment for International Peace, said: “The UK is strategically positioning itself between the US and EU. Like the US, Britain is attempting to avoid overly aggressive regulation that could harm innovation while exploring ways to meaningfully protect consumers. That’s the balancing act here.”
Q: Why was the AI regulation bill delayed?
A: The AI regulation bill was delayed by at least a year to allow for a more comprehensive bill that addresses various aspects of AI, including safety and copyright. Ministers also wanted to align with the US administration to avoid overly aggressive regulation that could harm innovation.
Q: What are the main concerns about AI models like ChatGPT?
A: The main concerns about AI models like ChatGPT include the risk they pose to humanity if they become too advanced, the need for transparency in their development, and the potential misuse of copyrighted material in their training.
Q: How does the UK government plan to address copyright issues in AI?
A: The UK government plans to include copyright rules for AI companies as part of a comprehensive AI bill. They have also committed to publishing an economic impact assessment and technical reports on copyright and AI issues.
Q: What is the public's opinion on AI regulation?
A: According to a survey by the Ada Lovelace Institute and the Alan Turing Institute, 88% of the UK public believe the government should have the power to stop the use of an AI product if it poses a serious risk, and more than 75% support government or regulatory oversight of AI safety.
Q: How is the UK positioning itself in the global AI regulatory landscape?
A: The UK is strategically positioning itself between the US and EU. Like the US, the UK is attempting to avoid overly aggressive regulation that could harm innovation, while exploring ways to meaningfully protect consumers.