Spoiler Alert: Our Automation in Social Security Project

By Sarah Sacher, Economic Justice Australia

Economic Justice Australia is currently conducting a research project focused on the use of automation and automated decision-making in the social security system. Our project has been developed in partnership with Ed Santow, Director of the Human Technology Institute at the University of Technology Sydney, and Paul Henman from the ARC Centre of Excellence for Automated Decision-Making and Society at the University of Queensland. It is supported by an expert Advisory Group.

The project is looking beyond Robodebt to automation in the social security system as a whole. It is focused on the implications of automation for vulnerable and disadvantaged people seeking to access and navigate the system. We will develop a research report for release in 2024, which will include evidence-based recommendations for reform across law, policy and service delivery.

The first phase of the project involved interviewing our member centres’ solicitors and caseworkers about what it is like to deal with automated systems as a vulnerable person or their legal advocate. We conducted interviews with 22 advocates, from all states and territories, servicing metropolitan, rural, regional and remote areas. The initial findings from these interviews will form the backbone of the report. We are currently testing and developing the findings and potential recommendations through further consultation and research.

This article is a summary of what we have learned through the project so far. These findings have informed our advocacy for the implementation of the Robodebt Royal Commission’s recommendations, and have helped us highlight our broader concerns about the various forms of automation that affect the rights of social security recipients.

What do we mean by ‘automation’?

We are looking at various forms of automation in the social security system. This includes automated decision-making (ADM) systems, which refers to the deployment of technology to automate a decision-making process. ADM systems can be used to assist or replace the judgment of human decision-makers. Two main types include:

  • ‘Expert systems’, which use a simple rules-based formula to determine whether someone meets objective criteria.
  • ‘Machine learning’, which refers to uses of artificial intelligence where a computer learns from data (including text, images or sounds) to predict and take independent action, including making decisions, rather than being programmed to execute a decision-making process in a specified way.[1]

‘Automation’ also incorporates the digitisation of services that were previously delivered by humans.
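
To make the distinction concrete, here is a minimal sketch of what a rules-based ‘expert system’ check might look like. The criteria, thresholds and function name are invented purely for illustration and do not reflect the rules of any actual Centrelink system; the point is that, unlike machine learning, every condition is written out explicitly and can in principle be read, audited and tested.

```python
# Hypothetical illustration only - these criteria and thresholds are invented
# for explanation and do not reflect any real social security rules or system.

def expert_system_eligibility(age: int, fortnightly_income: float, is_student: bool) -> bool:
    """A rules-based 'expert system': each condition is written out explicitly,
    so the decision logic can be read, audited and tested."""
    if age < 22 and is_student:
        return False  # hypothetical rule: full-time students under 22 use a different payment
    if fortnightly_income > 1_300.00:
        return False  # hypothetical income cut-off
    return True

# A machine-learning system, by contrast, contains no rules like the above.
# It is trained on historical records and produces a score or prediction,
# which makes its reasoning much harder to inspect or explain to the person affected.

if __name__ == "__main__":
    print(expert_system_eligibility(age=35, fortnightly_income=450.0, is_student=False))    # True
    print(expert_system_eligibility(age=35, fortnightly_income=2_000.0, is_student=False))  # False
```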

Summary of findings

FINDING 1: People are being pushed into digital servicing regardless of their level of vulnerability.

  • Services Australia claims that digital servicing enables it to focus on providing face-to-face services to vulnerable people. However, interviewees report that people are being pushed into digital services regardless of their personal needs, leading to problems accessing social security – including difficulties using myGov, navigating the automated phone system, understanding automated communications and letters, and using digital forms.

Example: A homeless client attempted to obtain a paper JobSeeker claim form. It was so difficult to get a paper form that by the time one was found and completed, the client had lost many weeks of entitlement. He is now registered for myGov but cannot access the website.

FINDING 2: Legal concerns with ADM systems point to continuities with Robodebt.

  • Legislation may not be applied correctly through ADM systems, and this cannot be tested due to a lack of transparency.
  • There is a lack of explanation for adverse decisions, which undermines appeal rights, and places the burden on the individual to identify and obtain relevant evidence.
  • Discretionary and other legally relevant factors are often not being considered when the original decision is made. These factors may only be assessed for the first time at the Administrative Appeals Tribunal — yet the harm of a decision occurs when the decision is made, and many vulnerable people do not appeal.
  • Individuals are being held responsible for system mistakes.

Example: A client reported that he had married. His payment was automatically reduced to the couple rate, even though his wife was living overseas and they were clearly not sharing finances. No regard was given to the available discretion that would have kept him on the single rate.

FINDING 3: Automation without human oversight results in accuracy problems and Kafkaesque situations.

  • It is often clear that no human has looked at a decision or communication, because the decision is obviously illogical or completely inaccurate. Many of the examples given by interviewees revolve around data matching.

Example: Centrelink claimed that a client was dead when he was very much alive, which affected his payment. It is unclear what data sharing prompted the decision.

Example: A woman received nine different automated letters in one day due to a data-matching error, all with incorrect information. 

FINDING 4: Automation disproportionately affects the most vulnerable.

  • Particular cohorts bear the brunt of inaccessible, inaccurate and intrusive systems – including people with disability, people in remote areas, First Nations peoples, culturally and linguistically diverse people, homeless people, victim-survivors of family and domestic violence and older people.
  • Automated systems are rigid and often fail to account for the life circumstances of vulnerable people – for example, changing home addresses and intermittent casual work.
  • People without digital skills, or without access to devices and the internet, are severely disadvantaged. Digital skills alone are not enough: people also need to understand what the bureaucracy is asking of them and why, and many do not, despite being able to use a computer.
  • As a point of principle, vulnerable cohorts should not be experimented on with technology when there is so much at stake.

Example: A remote community had one phone and one computer. The phone was down for a week, and the computer was down for 12 months because no one was available to fix it.

Example: A client had complex mental health issues. Centrelink entered the wrong code, and a letter was sent out saying he had been cut off from his payments. He said he would have killed himself if not for access to help.

FINDING 5: There are serious human rights, discrimination and privacy concerns with respect to automation.

  • There is a high risk of discrimination with respect to some ADM systems – such as risk profiling to identify fraud or non-compliance, and use of automated services that are not accessible.
  • People do not understand how their personal data is used, and do not give informed consent to its use.

FINDING 6: Over-reliance on automation has led to deskilling of social security staff, centralised responses and inefficiency.

  • Interviewees point to a deterioration in Services Australia staff’s knowledge of the law, and their ability to explain a decision. Staff are also often unable to work around automated systems to factor in people’s circumstances or undo a mistake.
  • The system has become centralised around call centres, and there is no longer a localised approach. This has resulted in an impersonal, inflexible behemoth for people to navigate.
  • Cutting staff and relying on automation leads to delays in application processing and reviews. Resources are being spent on addressing preventable user mistakes, technical errors and appeals that could have been avoided with human involvement at the outset.

Example: A community legal centre used to have monthly meetings where managers from Centrelink and social workers would attend to collaboratively address issues and escalate problems. This community connectivity was lost as the system became more centralised.

FINDING 7: Automation has led to a loss of trust in government services.

  • Many people lose trust in the social security system because they find the reliance on automation to be dehumanising and stressful, and may disengage from the system and government services altogether.

FINDING 8: There are opportunities for more positive uses of automation in the social security system.

  • Automated systems are primarily focused on enforcing compliance and cutting costs. There are opportunities for building on uses of automation that enhance access, convenience and improve links between systems, and for creating new systems that centre rights and vulnerability.

As we move ahead with the project, we welcome feedback and engagement from interested people and organisations who may wish to contribute. We thank everyone who has agreed to be interviewed and who has consulted with us so far — we look forward to releasing a full report and a set of recommendations in due course.


[1] Adapted from the definition in Attorney-General’s Department, Privacy Act Review Report (2023) 188 https://www.ag.gov.au/sites/default/files/2023-02/privacy-act-review-report_0.pdf