Global: Governments’ adoption of unchecked technologies in social protection systems undermines rights

Digital technologies, including artificial intelligence, automation, and algorithmic decision-making, are exacerbating inequalities in social protection systems across the world, said Amnesty International in a new briefing today.

“From Serbia to India to the Netherlands, these technologies are hailed by governments as cutting-edge solutions to achieve a better distribution of resources, improve administrative systems, detect fraud, and enhance security. However, Amnesty International’s research has shown that the unchecked digitization of social protection systems poses many risks to human rights and exacerbates inequalities,” said Imogen Richmond-Bishop, Technology & Economic, Social and Cultural Rights Researcher at Amnesty Tech.

“In the face of multiple global crises caused by conflict, the climate emergency, and the Covid-19 pandemic, among others, robust social protection systems are more critical than ever to protect individuals and communities against income insecurity.” 

The briefing, Social Protection in the Digital Age, highlights the human rights risks posed by the unchecked use of digital technologies in these systems.

Serbia’s Social Card registry is a case in point: a semi-automated decision-making system introduced into an already flawed social security landscape, directly harming marginalized communities, as outlined in a 2023 Amnesty International report. The technology, which was rolled out without adequate safeguards or human rights protections, hit Roma communities and people with disabilities particularly hard.

The system widened existing discrimination and created additional barriers for people to access their right to social security. Many of the people Amnesty International interviewed were stripped of their meagre cash assistance because of flaws in the Social Card registry, cutting off their access to essential goods and services such as food and medicine.

In India, the Aadhaar biometric identification system, which provides a unique identification number to citizens and residents, including children, is used to verify and authenticate identity across many public services, including social security benefits and food rations, through wholly digitized methods. Journalists and civil society researchers, among others, have documented how this has led to the exclusion of many people from vital social protection.

“It is essential that, before technology is introduced into social protection systems, states carefully consider and weigh its deployment against potential human rights risks. The introduction of any technology must be accompanied by independent and robust human rights impact assessments throughout the system’s lifecycle, from design to deployment, and effective mitigation measures must be in place,” said Imogen Richmond-Bishop.

“Communities that will be impacted by a system must be consulted, and any changes to these vital support systems must be communicated in a clear and accessible way. Crucially, if a system is found to have the potential to cause human rights harms and that harm cannot be effectively prevented, it must never be deployed.”

Background

In 2023, Amnesty International’s research, Trapped by Automation: Poverty and Discrimination in Serbia’s Welfare State, documented how many people, particularly Roma and people with disabilities, were unable to pay bills or put food on the table and struggled to make ends meet after being removed from social assistance support following the introduction of the Social Card registry.

In 2021, Amnesty International documented how an algorithmic system used by the Dutch tax authorities had racially profiled recipients of childcare benefits. The tool was supposed to ascertain whether benefit claims were genuine or fraudulent, but the system wrongly penalized thousands of parents from low-income and immigrant backgrounds, plunging them into exorbitant debt and poverty.