Automating Inequality – the Australian way

In recent weeks, Dr Simone Casey (@simonecasey) has examined issues in Australia's employment services system in a series of posts covering the ParentsNext program, mutual obligation, and 'work first' activation of job seekers. This week, she tackles the growing influence of algorithms and increasing automation in Australia's welfare system, drawing on Virginia Eubanks' book Automating Inequality. Dr Casey is an Associate of the RMIT Future Social Services Institute.


Virginia Eubanks' wonderful book Automating Inequality stimulated my thinking about the technologies used to ration Australia's welfare services and payments. The book includes case studies of rationing and assessment systems used in the US that have reduced services to the most needy. These austere welfare systems become an impenetrable armoury in the 'war on the poor' because they make unseen decisions that determine levels of funding and services, and challenging those decisions can be difficult.

This post reflects on the way such systems have been adopted in services to the unemployed in Australia. It identifies existing examples of algorithm-based rationing and draws on the Targeted Compliance Framework (TCF) to show how digitised citizen interfaces are already impenetrable. Together these examples warn that further digitisation of employment services risks further automating welfare austerity.

Introduction

The use of algorithms in employment services began with the development of the Job Seeker Classification Instrument (JSCI). The JSCI is used for rationing because it assigns a funding level to each job seeker. There are different Streams with different servicing rules, and higher funding levels for those classified as hardest to help. Job seekers cannot easily find out their Stream, although they are often told it when they ask employment services agencies why they can't get help with certain things. This reinforces a sense of not being worthy of support and deepens distrust of employment services agencies.
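To picture how this rationing works, the sketch below models a JSCI-style instrument as a weighted scoring function whose total maps to a Stream. It is a minimal illustration only: the factor names, weights and cut-offs are hypothetical, since the actual JSCI factors and weights differ and are not fully public.

```python
# Minimal sketch of a JSCI-style classification instrument.
# The factors, weights and Stream cut-offs below are hypothetical;
# the real JSCI uses different factors and weights.

WEIGHTS = {
    "long_term_unemployed": 8,
    "unstable_housing": 6,
    "low_education": 5,
    "ex_offender": 7,
}

# Hypothetical score thresholds: a higher score means higher assessed
# disadvantage and a higher funding level attached to the job seeker.
STREAM_CUTOFFS = [(0, "Stream A"), (10, "Stream B"), (18, "Stream C")]

def classify(answers: dict) -> tuple[int, str]:
    """Sum the weights of the factors a job seeker reports, then map
    the total to a servicing Stream."""
    score = sum(w for factor, w in WEIGHTS.items() if answers.get(factor))
    stream = STREAM_CUTOFFS[0][1]
    for cutoff, name in STREAM_CUTOFFS:
        if score >= cutoff:
            stream = name
    return score, stream

print(classify({"long_term_unemployed": True, "unstable_housing": True}))
# -> (14, 'Stream B')
```

Because the same score both rations services and, as discussed next, weights provider payments, any error in a single factor propagates into both the job seeker's funding level and the provider's incentives.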

The JSCI's second role is to weight outcome payments and provider star ratings. This shapes employment service agency behaviour, as agencies cherry-pick and park job seekers on their caseloads depending on the payment weighting attached to them. These are survival strategies for providers, and they have led to a widely held view that payment-by-results contracts are geared to leaving the hardest to help behind.
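A stylised expected-value calculation shows why these incentives arise. All payments, probabilities and costs below are invented for illustration; they are not actual jobactive payment settings.

```python
# Stylised illustration of the cherry-picking/parking incentive under
# payment-by-results. All figures are invented for the example.

candidates = [
    # (cohort, outcome payment $, prob. of a job outcome, servicing cost $)
    ("job-ready",            1500, 0.60,  300),
    ("moderate barriers",    3000, 0.25,  900),
    ("highly disadvantaged", 6000, 0.08, 1800),
]

for cohort, payment, prob, cost in candidates:
    expected_margin = payment * prob - cost
    print(f"{cohort:22s} expected margin: ${expected_margin:,.0f}")

# job-ready              expected margin: $600
# moderate barriers      expected margin: $-150
# highly disadvantaged   expected margin: $-1,320
#
# Even though harder-to-help cohorts carry higher payment weightings,
# a provider maximising expected margin is drawn to servicing the
# job-ready and to 'parking' the rest.
```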

This outcome is reflected in labour market data showing the increasing complexity of the job seeker caseload. Because the JSCI is a measure of relative disadvantage, even those classified at the 'easiest' end of the caseload are now experiencing complex circumstances such as homelessness or a history of offending.

The second concern about digital services is their impenetrability, which breeds disempowerment when the 'computer says no'. This disempowerment has recently surfaced with the new digital self-reporting interface that accompanies the TCF. The TCF was designed around concepts drawn from a behavioural economics model of point deduction, similar to the demerit schemes used for driver's licences. It presents a dashboard view in an app where job seekers see a big green tick telling them they are taking 'personal responsibility'. If they fail to meet requirements, the dashboard switches to an orange warning zone and then to a red penalty zone, and the messaging becomes more threatening. These zones effectively communicate: watch out, you have failed to meet a requirement and are about to lose money. This lands at a time when unemployed people are already experiencing the erosion of psychological capital and self-confidence. Although the framework was designed by behavioural insights experts, there is no evidence that the symbols used in the TCF were tested on unemployed people experiencing poverty.

[Image: the TCF dashboard, showing the green tick, orange warning zone and red penalty zone]
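The dashboard's logic can be sketched as a simple demerit counter that switches zones at fixed thresholds. The thresholds and messages below are simplified stand-ins, not the TCF's actual rules.

```python
# Simplified sketch of a TCF-style demerit-point dashboard.
# The thresholds and messages are stand-ins, not the actual TCF rules.

from dataclasses import dataclass

@dataclass
class ComplianceRecord:
    demerits: int = 0

    @property
    def zone(self) -> str:
        if self.demerits == 0:
            return "GREEN: you are taking personal responsibility"
        if self.demerits < 5:  # hypothetical threshold
            return "ORANGE: warning zone; further failures risk penalties"
        return "RED: penalty zone; financial penalties apply"

    def record_failure(self) -> None:
        # Each missed requirement adds a demerit; in practice payments
        # can also be suspended until the job seeker re-engages.
        self.demerits += 1

record = ComplianceRecord()
for _ in range(5):
    record.record_failure()
    print(record.zone)
```

Note that in a design like this the messaging only ever escalates: there is no state in which the system offers help, only warnings that money is about to be lost.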

It is questionable whether the transfer of these behavioural economics concepts is appropriate in this context. For a start, using a motor vehicle carries significant responsibility for road safety, and the consequences of 'bad' behaviour can be fatal and devastating. In contrast, for people already experiencing poverty, the message these systems deliver is a threat of more poverty. And even when job seekers satisfy the requirements set out in their job plans, the 'personal responsibility' being exercised relates to externally determined mutual obligation requirements, which reduces self-efficacy. Furthermore, as job seekers so powerfully reported to the jobactive Senate Inquiry, there simply are not jobs for all those who want them.

It is hard for job seekers to feel empowered when services are delivered by robots or by the automatic messaging systems that accompany the TCF. These communications typically adopt a harsh tone: job seekers are threatened with the consequences of non-compliance and blamed for having failed to meet a requirement. It is important to remember that these messages appear on the dashboard even when a payment suspension or demerit point was accrued unintentionally, or because a provider was reluctant to roll back a demerit point recorded in error.

This last point is salient because it highlights one of the most important lessons about automating austerity: these automations achieve a form of administrative outsourcing, so that the pain of resolving issues is now borne by job seekers and providers. The extent of this shift was not foreshadowed, despite the government's own digital transformation strategy imposing obligations on public servants to ensure appropriate stakeholder consultation. Predictably, these savings have been reinvested in automation and in the help lines needed for complaint resolution.

These examples demonstrate how algorithms automate austerity and leave those most in need behind. While the new employment services model (to be trialled for two years from 1 July 2019) is intended to redistribute resources to the most disadvantaged, the existing automations show how unintended effects arise from the rationing imperatives driving their design. Importantly, automation shifts the cost of resolving welfare issues onto individuals and outsourced organisations.

Given recent concerns about the lack of participatory policy design, future automations need to be designed ethically, with the needs of human subjects squarely at the centre. It is also important that these systems are rigorously evaluated, using criteria other than job outcomes to judge whether the 'experiment' is working. This reinforces the need to overhaul design assumptions rooted in policymakers' construal of unemployment as a behavioural rather than a structural social problem.