AI for social protection: Take care of the people

The technology that allowed passengers to ride in elevators without an operator was tested and ready for deployment in the 1890s. But it was not until after the elevator operators’ strike of 1946 – which cost New York City $100 million – that automated elevators began to be installed. It took more than 50 years to convince people that automated elevators were as safe and comfortable as those run by human operators. The promise of radical change from new technologies has often overshadowed the human factor that ultimately determines whether and when those technologies are adopted.

Interest in artificial intelligence (AI) as an instrument for improving efficiency in the public sector is at a record high. This interest is motivated by the ambition to develop neutral, scientific, and objective techniques for government decision-making (Harcourt 2018). By April 2021, governments in 19 European countries had launched national AI strategies. The role of AI in achieving the goals of sustainable development has also recently attracted the attention of the international development community (Medaglia et al. 2021).

Proponents argue that artificial intelligence could radically improve the efficiency and quality of public services in education, health care, social protection, and other sectors (Bullock 2019; Samoili et al. 2020; de Sousa et al. 2019; World Bank 2020). In social protection, AI could be used to assess eligibility and needs, make enrollment decisions, deliver benefits, and monitor and manage benefit delivery (ADB 2020). Given these benefits – and the fact that AI technology is readily available and relatively inexpensive – why has AI not been used more extensively in social protection?

Large-scale applications of artificial intelligence in social protection have been limited. A study by Engstrom et al. (2020) of 157 public-sector uses of AI across 64 US government agencies found only seven cases related to social protection, in which AI was mainly used for predictive risk screening of referrals to child protection agencies (Chouldechova et al. 2018; Clayton et al. 2019).

Only a handful of evaluations of artificial intelligence in social protection have been conducted, including assessments of homelessness services (Toros and Flaming 2018), unemployment benefits (Niklas et al. 2015), and child protection services (Hurley 2018; Brown et al. 2019; Vogl 2020). Most have been based on proofs of concept or pilots (ADB 2020). Examples of successful pilots include the automation of decision-making in Sweden’s social services (Ranerup and Henriksen 2020) and the Togolese government’s experiments with machine learning on mobile-phone metadata and satellite imagery to identify the households most in need of social assistance (Aiken et al. 2021).
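
To make the Togo-style approach concrete, here is a minimal sketch of machine-learning-based poverty targeting: train a model that predicts household consumption from phone-usage features, then enroll the households with the lowest predicted consumption. The features, data, and enrollment cutoff below are hypothetical illustrations, not the actual pipeline described by Aiken et al. (2021).

```python
# Minimal sketch of ML-based poverty targeting in the spirit of the Togo
# experiments. All features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical phone-metadata features for n households.
X = np.column_stack([
    rng.poisson(30, n),      # calls per month
    rng.exponential(50, n),  # mobile-money transaction volume
    rng.uniform(0, 1, n),    # share of nighttime activity
])
# Ground-truth consumption from a small phone survey (simulated here).
y = 2.0 + 0.03 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Rank households by predicted consumption; enroll the poorest 20 percent.
predicted = model.predict(X_test)
cutoff = np.quantile(predicted, 0.20)
enrolled = predicted <= cutoff
print(f"Enrolled {enrolled.sum()} of {len(enrolled)} households")
```

The design choice that matters here is the cutoff: every threshold trades off inclusion errors against exclusion errors, which is why such models are evaluated against survey ground truth before any benefits are paid.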

Several debacles have diminished public confidence. In 2016, Services Australia – an Australian government agency that delivers social, health, and child-related benefits and payments – launched Robodebt, an AI-based system designed to calculate overpayments and issue debt notices to welfare recipients by matching data from social security payment systems with income data from the Australian Tax Office. The new system mistakenly sent more than 500,000 people debt notices worth $900 million (Carney 2021). The failure of the Robodebt program has had ripple effects on public perception of the use of artificial intelligence in social protection administration.
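
A widely reported source of Robodebt’s errors was income averaging: annual tax-office income was spread evenly across fortnights and compared with the fortnightly income recipients had actually reported. The sketch below, with purely hypothetical numbers, shows how averaging can manufacture an “overpayment” for someone whose income was lumpy but correctly reported.

```python
# Hypothetical illustration of the income-averaging flaw reported in
# Robodebt: annual income is averaged across 26 fortnights and compared
# with the fortnightly income the recipient actually earned and reported.
annual_ato_income = 26_000.0   # whole-year figure from the tax office
fortnights = 26

# The person earned everything in the first half of the year and was
# correctly unemployed (and entitled to benefits) in the second half.
actual_fortnightly = [2_000.0] * 13 + [0.0] * 13

averaged = annual_ato_income / fortnights   # 1,000 per fortnight

income_free_area = 150.0   # hypothetical threshold for full benefit
for fortnight, earned in enumerate(actual_fortnightly):
    flagged_by_average = averaged > income_free_area
    truly_over = earned > income_free_area
    if flagged_by_average and not truly_over:
        print(f"Fortnight {fortnight}: false overpayment flag "
              f"(averaged {averaged:.0f} vs actual {earned:.0f})")
```

Running this flags every fortnight in which the person was genuinely unemployed – the averaging step, not the data matching itself, produces the false debts.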

In the United States, the Illinois Department of Children and Family Services stopped using predictive analytics in 2017 after staff warned that poor data quality and concerns about the procurement process made the system unreliable. The Los Angeles Office of Child Protection ended its AI-based project, citing the “black box” nature of the algorithm and its high incidence of errors. Similar data-quality problems hampered a data-driven approach to identifying vulnerable children in Denmark (Jørgensen 2021), where the project was stopped in less than a year, before it was ever fully implemented.

The human factor in the adoption of AI for social protection

Research into the use of artificial intelligence in social protection offers at least five cautionary tales about the risks involved and about the consequences that algorithmic biases and errors can have for people’s lives.

Accountability and the “explainability” problem: Public officials are often required to explain their decisions – such as why someone was denied benefits – to citizens (Gilman 2020). However, many AI-based outputs are opaque and not fully explainable because they combine many factors in multi-step algorithmic processes (Selbst et al. 2018). An important consideration in promoting AI for social protection is how AI-based assessments fit into the welfare system’s regulatory, transparency, complaints, and accountability frameworks (Engstrom et al. 2020). The broader risk is that, without adequate complaint mechanisms, automation can impoverish citizens – especially minorities and the disadvantaged – by reducing them to analytical data points.
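
One often-discussed mitigation is to pair any model used in benefit decisions with a per-case explanation. A minimal sketch, assuming a hypothetical logistic-regression eligibility model with invented features: the contribution of each feature to an individual decision can be read directly from coefficient-times-value terms – exactly what deep, multi-step pipelines make hard to produce.

```python
# Minimal sketch: a transparent eligibility model whose individual
# decisions can be explained feature by feature. Features, data, and
# the model are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["household_income", "dependents", "months_unemployed"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))   # standardized applicant features
y = (X @ np.array([-1.5, 0.8, 1.0]) + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one applicant's decision as per-feature contributions to the
# logit: coefficient * feature value, sorted by magnitude.
applicant = X[0]
contributions = model.coef_[0] * applicant
decision = "approve" if model.predict([applicant])[0] == 1 else "deny"
print(f"Decision: {decision}")
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f} toward approval")
```

A caseworker can read such an explanation back to an applicant; with an opaque multi-stage pipeline, no comparably faithful account of an individual denial exists.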

Data quality: The quality of administrative data strongly affects how well AI performs. In Canada, poor data quality produced errors that led to inappropriate care placements and to failures to remove children from unsafe environments (Vogl 2020). The tendency to favor legacy systems can also undermine efforts to improve the data architecture (More et al. 2017).
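
Because data quality is so decisive, basic automated checks before any model sees the data are a cheap safeguard. A minimal sketch, using a hypothetical case-record table with invented column names and thresholds: flag missing identifiers, duplicate rows, impossible values, and stale records.

```python
# Minimal sketch of pre-model data-quality checks on administrative
# records. Column names, values, and thresholds are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "case_id":     [101, 102, 102, 104, None],
    "child_age":   [4, 7, 7, -1, 10],          # -1 is an obvious entry error
    "last_update": pd.to_datetime(
        ["2021-03-01", "2018-06-15", "2018-06-15",
         "2021-01-20", "2020-11-02"]),
})

issues = {
    "missing_case_id": records["case_id"].isna().sum(),
    "duplicate_rows":  records.duplicated().sum(),
    "invalid_ages":    (records["child_age"] < 0).sum(),
    "stale_records":   (records["last_update"] < "2020-01-01").sum(),
}
print(issues)  # surface problems before they silently bias a model
```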

Misuse of integrated data: The use of artificial intelligence in social protection requires a high degree of data integration, which relies on data sharing across agencies and databases. In some cases, data integration can slide into data exploitation. For example, the Florida Department of Children and Families collected multidimensional data on students’ education, health, and home environment. That data was later linked to the Sheriff’s Office’s records to identify and maintain a database of juveniles deemed at risk of becoming prolific offenders. In such cases, data integration creates new opportunities for controversial overreach that deviates from the purposes for which the data were originally collected (tax 2021).

Responses from government officials: The introduction of AI should not assume that welfare officials can easily shift from being caseworkers and decision makers to being managers of AI systems (Ranerup and Henriksen 2020; Brown et al. 2019). How government officials react to the introduction of AI-based systems can affect those systems’ performance and lead to unforeseen consequences. In the US, police officers have been found to ignore the recommendations of predictive algorithms or to use their outputs in ways that impair system performance and violate assumptions about its accuracy (Garvie 2019).

Public response and public trust: Using artificial intelligence to make decisions and assessments about the provision of social services can exacerbate inclusion and exclusion errors because of data-driven biases, and it raises ethical concerns about accountability for life-changing decisions (Ohlenburg 2020). Building trust in AI is therefore crucial to scaling up its use in social protection. Yet a survey of Americans shows that nearly 80 percent of respondents do not trust governmental organizations to manage the development and use of AI technologies (Zhang and Dafoe 2019). These concerns fuel growing efforts to counter the potential threats that AI-based systems pose to people and communities. For example, AI-based risk assessments have been challenged on due process grounds, as in cases involving the denial of housing and public services in New York (Richardson 2019). Mikhaylov, Esteve, and Campion (2018) argue that if governments are to use artificial intelligence in their public services, they must promote its public acceptance.

The future of artificial intelligence in social protection

Too few studies have been conducted to suggest a clear path for scaling up the use of AI in social protection. But it is clear that system design must take the human factor into account. Successful use of artificial intelligence in social protection requires explicit institutional redesign, not just the acquisition of AI tools in a narrow information-technology sense. Effective use of AI requires coordinating and developing the system’s legal, governance, ethical, and accountability components. Fully autonomous AI-based decision-making may not be appropriate; a hybrid system, in which AI is used in conjunction with traditional systems, may be better at reducing risks and encouraging adoption (Chouldechova et al. 2018; Ranerup and Henriksen 2020; Wenger and Wilkins 2009; Sansone 2021).
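
A hybrid design can be as simple as routing by model confidence: automate only the clear-cut cases and send everything else to a caseworker. The sketch below is one hypothetical way to do this; the thresholds and routing rules are illustrative assumptions, not a prescription from the literature cited above.

```python
# Minimal sketch of a hybrid ("human-in-the-loop") decision flow: the
# model acts alone only when it is confident; ambiguous cases go to a
# caseworker. Thresholds and routing labels are hypothetical.

AUTO_APPROVE = 0.90   # probability above which approval is automated
AUTO_DENY = 0.10      # probability below which a denial is drafted

def route(p_eligible: float) -> str:
    """Route a case based on the model's estimated eligibility probability."""
    if p_eligible >= AUTO_APPROVE:
        return "auto-approve"
    if p_eligible <= AUTO_DENY:
        # Many designs never fully automate denials: a human confirms,
        # and the applicant retains appeal rights.
        return "caseworker review (likely deny)"
    return "caseworker review"   # humans handle the ambiguous middle

for p in (0.97, 0.55, 0.04):
    print(f"p={p:.2f} -> {route(p)}")
```

The asymmetry is deliberate: approvals are low-risk to automate, whereas denials carry the accountability and due process burdens discussed above, so they stay with a human.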

International development institutions could help countries address the people-centered challenges of adopting new technology in the public sector. This is their comparative advantage relative to the technology industry. Investing in research on the bottlenecks to using artificial intelligence for social protection could yield high development returns.
