If any government is going to be of, by, and for the people, then we can't have unaccountable black-box algorithms making important decisions. This is a welcome move.

Source: The Guardian

Artificial intelligence and algorithmic tools used by central government are to be published on a public register after warnings they can contain “entrenched” racism and bias.

Officials confirmed this weekend that tools challenged by campaigners over alleged secrecy and a risk of bias will be named shortly. The technology has been used for a range of purposes, from trying to detect sham marriages to rooting out fraud and error in benefit claims.

[…]

In August 2020, the Home Office agreed to stop using a computer algorithm to help sort visa applications after it was claimed it contained “entrenched racism and bias”. Officials suspended the algorithm after a legal challenge by the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove.

It was claimed by Foxglove that some nationalities were automatically given a “red” traffic-light risk score, and those people were more likely to be denied a visa. It said the process amounted to racial discrimination.

[…]

Departments are likely to face further calls to reveal more details on how their AI systems work and the measures taken to reduce the risk of bias. The DWP is using AI to detect potential fraud in claims for universal credit advances, and has more in development to detect fraud in other areas.

[Photo: UK Visas & Immigration sign]