UK Government to Publish Register of AI Tools Amid Bias Warnings

Published Date: 26/08/2024

Artificial intelligence and algorithmic tools used by central government are to be published on a public register, following warnings of 'entrenched' racism and bias.

The UK government has announced plans to publish a register of artificial intelligence (AI) tools used by central government, following warnings that these tools can contain 'entrenched' racism and bias.


The move comes after campaigners challenged the deployment of AI in central government, citing concerns over secrecy and bias. The technology has been used for a range of purposes, including detecting sham marriages and rooting out fraud and error in benefit claims.


Caroline Selman, a senior research fellow at the Public Law Project (PLP), an access-to-justice charity, welcomed the move, stating that there had been a lack of transparency on the existence, details, and deployment of the systems.


'We need to make sure public bodies are publishing the information about these tools, which are being rapidly rolled out,' she said. 'It is in everyone's interest that the technology which is adopted is lawful, fair and non-discriminatory.'


The Home Office agreed to stop using a computer algorithm to help sort visa applications in August 2020, after it was claimed that the algorithm contained 'entrenched racism and bias'. The algorithm was suspended after a legal challenge by the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove.


In another instance, the PLP challenged an algorithmic tool used to detect sham marriages, which appeared to discriminate against people from certain countries.


The government's Centre for Data Ethics and Innovation, now the Responsible Technology Adoption Unit, warned in a report in November 2020 that there were numerous examples where the new technology had 'entrenched or amplified historic biases, or even created new forms of bias or unfairness'.


The centre helped develop an algorithmic transparency recording standard in November 2021 for public bodies deploying AI and algorithmic tools. The standard proposed that models which interact with the public or have a significant influence on decisions be published on a register or 'repository', with details on how and why they are being used.


To date, just nine records have been published on the repository in three years. None of the models is operated by the Home Office or the Department for Work and Pensions (DWP), which have run some of the most controversial systems.


The Department for Science, Innovation and Technology (DSIT) confirmed that departments would now report on their use of the technology under the standard. A DSIT spokesperson said the technology has huge potential to improve public services, but that it is essential to maintain the right safeguards, including human oversight and other forms of governance.


Departments are likely to face further calls to reveal more details on how their AI systems work and the measures taken to reduce the risk of bias. The DWP is using AI to detect potential fraud in advance claims for universal credit and has more systems in development to detect fraud in other areas.


The PLP is supporting possible legal action against the DWP over the use of the technology, pressing the department for details on how it is being used and the measures taken to mitigate harm. The project has compiled its own register of automated decision-making tools in government, with 55 tools tracked to date.


FAQs:

Q: What is the UK government's plan regarding AI tools used by central government?

A: The UK government plans to publish a register of AI tools used by central government, following warnings of 'entrenched' racism and bias.


Q: What is the purpose of the algorithmic transparency recording standard?

A: The standard proposes that models which interact with the public or have a significant influence on decisions be published on a register or 'repository', with details on how and why they are being used.


Q: How many records have been published on the repository to date?

A: Just nine records have been published on the repository in three years.


Q: Which departments have operated some of the most controversial AI systems?

A: The Home Office and Department for Work and Pensions (DWP) have operated some of the most controversial systems.


Q: What is the PLP's stance on the use of AI by the DWP?

A: The PLP is supporting possible legal action against the DWP over the use of the technology, pressing the department for details on how it is being used and the measures taken to mitigate harm.
