Automated Decision-Making in Welfare

Social Security Rights Review

Terry Carney, University of Sydney

Automation in social security is not recent. Digitisation of Centrelink records was largely completed decades ago, including the development of complex rate calculation and other assessment tools to assist decision-makers. Like all technologies, it brought both benefits and risks. Law and advocacy services adjusted to the digitisation of the 1990s, though problems continue for clients, not least the difficulty of making sense of ADEX and MultiCal debt printouts.

Pressure on the values of quality decision-making, transparency and fairness, and ethical administration of welfare has recently intensified. This pressure stems from the increasing sophistication and complexity of the automated decision-making (ADM) technologies being deployed. The roll-out of online and app-based client interfaces and compliance technologies in Centrelink is one example (Carney 2020).

Work by the National Disability Insurance Agency (NDIA), the agency responsible for the National Disability Insurance Scheme (NDIS), towards developing a ‘virtual assistant’, and its pursuit of more sophisticated ‘machine learning’ techniques for transforming existing data sets into systems that aid or displace human decision-making, are two further examples of these new challenges.

Common to all of these examples of automation and artificial intelligence in welfare is their impact on vulnerable clients. That vulnerability cannot be overstated. The $1.8 billion robodebt catastrophe (Whiteford 2021), an abysmal failure of governance, ethics and legal rectitude, inflicted great pain and suffering before it was ultimately brought to heel by judicial review and class actions. However, the much-vaunted ‘new administrative law’ remedial machinery of the 1970s was seriously exposed. Merits review failed because government ‘gamed’ it (Townsend 2021). Other accountability mechanisms also proved toothless (Carney 2019). So radical new thinking is called for (O’Sullivan 2021).

Smartphone digital reporting in social security has already proved highly problematic for vulnerable income security clients, such as the young single parents subject to ParentsNext (Carney 2020; Casey 2021). So it is unsurprising that even more sophisticated ADM initiatives in the NDIS raised concerns for vulnerable disability clients. Before the proposal was shelved on 9 July 2021 (at least for the time being), the NDIA planned to replace human caseworker decision-making about NDIS access and package quantum, currently based on subjective assessment of applicant-provided medical reports, with supposedly more objective ‘scores’ generated by a suite of functional incapacity ‘tools’. The stated aim was to reduce subjective inequities around access and the size of packages.

Rating scores were intended not only to underpin and improve the consistency of access decisions, but also to generate one of some 400 personas and associated presumptive budgets (Dickinson et al. 2021; Johnson 2021). The problem was that any roll-out along these lines would have replaced tailor-made personalised planning with the imposition of an abstract (and ungenerous) ‘template’ plan, defeating the central rationale of the NDIS. And, like ParentsNext, it was imposed top-down rather than co-designed with users.
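To make the mechanism concrete, the following minimal Python sketch illustrates how score-based persona assignment of this kind works. Every detail is hypothetical: the functional domains, score bands and budget figures are invented for illustration and do not reflect the NDIA’s actual tools or values. The structural point is that once several domain scores are collapsed into a composite and mapped to a fixed band, the individual detail that tailored planning depends on is discarded.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    label: str
    presumptive_budget: int  # annual package amount in AUD (illustrative only)

# Each band of the composite score maps to one fixed template plan.
# The bands and budget figures below are invented for illustration.
PERSONAS = [
    (range(0, 20), Persona("A: low support needs", 15_000)),
    (range(20, 50), Persona("B: moderate support needs", 40_000)),
    (range(50, 101), Persona("C: high support needs", 95_000)),
]

def assign_persona(scores: dict[str, int]) -> Persona:
    """Collapse several functional-domain scores (0-100 each) into one
    composite, then select the single template whose band contains it."""
    composite = sum(scores.values()) // len(scores)
    for band, persona in PERSONAS:
        if composite in band:
            return persona
    raise ValueError("composite score out of range")

# Two applicants with very different needs profiles receive the same
# template budget: the averaging step discards exactly the individual
# detail that tailored, person-centred planning is meant to capture.
print(assign_persona({"mobility": 10, "self_care": 70, "communication": 40}))
print(assign_persona({"mobility": 40, "self_care": 40, "communication": 40}))
```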

This followed the earlier abandonment of a sophisticated chatbot called Nadia, which was to take over some aspects of human-to-human client interaction and case management within the NDIS. Nadia was a machine learning cognitive computing interface, designed to use ‘data mining and pattern recognition to interact with humans by means of natural language processing’ (Park & Humphry 2019: 944). Among other features, it was to be able to read and respond to emotions. Commendably, there was some rare co-design with end-users. However, its proponents failed to factor in that the chatbot would need to continue to ‘learn on the job’ as it interacted with real clients, leaving those clients to wear the intolerable costs of Nadia’s mistakes.

In a more extended treatment of examples like these at the online EJA Conference on 14-15 September 2021, I argue that the principal lesson to be drawn is one of failed government administration. The history so far of Australian automation of welfare, most egregiously the robodebt debacle, demonstrates a complete failure of government to understand that the old ways of policy-making are no longer appropriate. Genuine co-design and collaborative fine-tuning of automation initiatives should be a non-negotiable imperative.

But in light of so many botched measures, restoring trust is now critical. It will be contended that the history of automation of welfare in Australia has not only inflicted considerable harm on the vulnerable but has also destroyed that trust. Consequently, if future automation is to retain fidelity to the values of transparency, quality and user interests, government must engage creatively with the welfare community to develop the innovative new procedures required.

References

  • Carney, T. (2019). “Robo-debt Illegality: The seven veils of failed guarantees of the rule of law?” Alternative Law Journal 44(1): 4-10.
  • Carney, T. (2020). “Artificial Intelligence in Welfare: Striking the vulnerability balance?” Monash University Law Review 46(2): Advance 1-29.
  • Casey, S. (2021). “Towards Digital Dole Parole: A review of digital self‐service initiatives in Australian employment services.” Australian Journal of Social Issues: Advance online publication. https://doi.org/10.1002/ajs4.156.
  • Dickinson, H., S. Yates, C. Smith and A. Doyle (2021). Avoiding Simple Solutions to Complex Problems: Independent Assessments are not the way to a fairer NDIS. Melbourne: Children and Young People with Disability Australia. https://apo.org.au/sites/default/files/resource-files/2021-05/apo-nid312281.pdf
  • Johnson, M. (2021). “‘Citizen-centric’ demolished by NDIS algorithms.” InnovationAus. https://www.innovationaus.com/citizen-centric-demolished-by-ndis-algorithms/
  • O’Sullivan, M. (2021). “Automated Decision-Making and Human Rights: The right to an effective remedy”. In: J. Boughey and K. Miller (eds), The Automated State. Sydney: Federation Press: 70-88.
  • Park, S. and J. Humphry (2019). “Exclusion By Design: Intersections of social, digital and data exclusion.” Information, Communication & Society 22(7): 934-953.
  • Townsend, J. (2021). “Better Decisions?: Robodebt and failings of merits review”. In: J. Boughey and K. Miller (eds), The Automated State. Sydney: Federation Press: 52-69.
  • Whiteford, P. (2021). “Debt by Design: The anatomy of a social policy fiasco – Or was it something worse?” Australian Journal of Public Administration 80(2): 340-360.