
Life-changing algorithms

Submitted on Friday, March 25, 2022

Professor Shion Guha scrutinizes the growing use of algorithms in the public sector, including in the child welfare and criminal justice systems 

For many administrators in the public sector, algorithmic decision-making is seen not only as a fair and consistent practice but also as a way to increase efficiency and reduce costs. In his research, Assistant Professor Shion Guha has questioned these premises and concluded that, rather than relying on risk-based assessments made by algorithmic software programs, what is needed is a system built on what Guha refers to as “strength-based assessments.”

This is especially true given the high-stakes nature of many of the decisions made regularly in child welfare situations. Algorithms are used, for example, to assess the risk of child abuse and sex trafficking as well as to determine whether children should be placed in foster homes and, if so, for how long. 

Despite the importance of the outcomes determined by algorithms, “there’s very little cognition of the biases that might creep in,” says Guha. “In criminal justice, there’s more cognition. Law enforcement and the judiciary system do talk about algorithmic biases. In child welfare, we don’t talk about these biases too much.” 

Guha attributes this at least partially to the fact that in law and criminal justice, much of the information about proceedings and decisions is public, whereas in child welfare it often is not, due to the need for privacy protection.

While Guha’s research shows that the algorithms’ ability to predict risk can vary, another major problem is that they often don’t take into account the stated objectives of different government agencies. In child welfare, he says, that objective is better outcomes for children; in the criminal justice system, it is rehabilitation.

What’s more, an over-reliance on algorithmic decision-making can also erode human discretion, according to Guha’s recent research, which has highlighted the lack of human-centeredness in the design and implementation of the algorithms used in the daily work of child-welfare caseworkers.

Shion Guha’s book Human-Centered Data Science

As a co-author of the new textbook, Human-Centered Data Science, Guha says, “We have to think about what algorithms really do. Historical or administrative data usually goes in. Some complicated math happens and an outcome is predicted. There can be biases in all three components.” 
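To make that three-part pipeline concrete, here is a minimal illustrative sketch in Python. It is a hypothetical toy, not any agency’s actual system: the features, weights, and threshold are invented, and serve only to show where bias can enter at each stage.

```python
# A toy three-stage pipeline: historical data in, "complicated math" in the
# middle, a predicted outcome out. Everything here is hypothetical -- the
# features, weights, and threshold are invented for illustration and do not
# come from any real child-welfare system.
from dataclasses import dataclass


@dataclass
class CaseRecord:
    # Stage 1: historical/administrative data. Bias can already be baked in
    # here, e.g. if past referrals reflect uneven surveillance of families.
    prior_referrals: int
    household_income: float


def risk_score(case: CaseRecord) -> float:
    # Stage 2: the model. A linear score with weights "learned" from biased
    # history will faithfully reproduce that history's biases.
    return 0.5 * case.prior_referrals - 0.00002 * case.household_income


def flag_for_investigation(case: CaseRecord, threshold: float = 0.5) -> bool:
    # Stage 3: the predicted outcome. The cutoff is a policy choice, and
    # moving it shifts error rates across different groups of families.
    return risk_score(case) >= threshold


if __name__ == "__main__":
    case = CaseRecord(prior_referrals=2, household_income=28_000.0)
    # Score is 0.5*2 - 0.56 = 0.44, below the 0.5 threshold, so this prints False.
    print(flag_for_investigation(case))
```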

Published by MIT Press and written with four other founders of the field, Human-Centered Data Science introduces best practices for addressing the bias and inequality that may result from the automated collection, analysis, and distribution of very large datasets. It offers a brief and accessible overview of many common statistical and algorithmic data science techniques, explains human-centered approaches to data science problems, and presents practical guidelines and real-world case studies to help readers apply these methods.

In a recent paper, Guha and his fellow researchers highlighted the need to look closely at how algorithms impact the daily work practices of child-welfare caseworkers. They developed a “framework of algorithmic decision-making that reflects the complex socio-technical interactions between human discretion, bureaucratic processes, and algorithmic decision-making by synthesizing disparate bodies of work in the fields of Human-Computer Interaction, Science and Technology Studies, and Public Administration.” 

They then applied this framework to qualitatively analyze an in-depth, eight-month ethnographic case study of algorithms in daily use within a child-welfare agency serving approximately 900 families and 1,300 children in the midwestern United States. The case study was carried out by Guha and Devansh Saxena, one of the paper’s co-authors and Guha’s PhD student at Marquette University in Milwaukee, where Guha taught before coming to the University of Toronto.

“We attended 55 agency meetings and conducted 20 individual interviews over the course of eight months, which resulted in daily interactions with approximately 120 agency employees and external consultants,” the authors write in their resulting paper. 

Published in 2021 and entitled “A Framework of High-Stakes Algorithmic Decision-Making for the Public Sector Developed through a Case Study of Child-Welfare,” the paper shows that without a proper framework in place, decisions are often mandated by algorithm and caseworkers have no flexibility. “I think the most important piece is the idea of discretion and agency employees choosing what to apply on a case-by-case basis,” says Guha. “Bureaucrats need leeway to exercise freedom in deciding what kind of decisions to make.”

“While most governments see algorithms as a way to increase efficiency and reduce costs, whether or not it actually does that is debatable,” says Guha. When street-level caseworkers are locked into decisions by an algorithm, requiring steps that experienced employees might deem unnecessary or even harmful, their valuable time can be wasted. One such example was the mandated use of an anti-sex-trafficking algorithm in the agency under study.

“It’s become overused and is being abused. It’s being used in different settings and not how it was originally intended to be used. We are getting an influx of calls that don’t need to be called,” said one child welfare supervisor cited in the study. Guha would like to see critical interpretive approaches used in conjunction with computational methods.  

What’s more, he says, child welfare agencies that haven’t yet implemented algorithms need to incorporate non-tech experts into the technology acquisition process to ensure they acquire the right kind of systems. In the framework study, for example, caseworkers often praised 7ei (Seven Essential Ingredients), an algorithm that emphasizes so-called trauma-informed care, while they were consistently negative about the more commonly used CANS (Child and Adolescent Needs and Strengths) algorithm, which they found overly restrictive.

Guha is also an advocate for more sophisticated algorithmic or AI decision-making processes that can incorporate not just numbers but also the reams of historical narratives social workers have written about their cases over decades, which he describes as a vast source of untapped knowledge. “There are a lot of different areas of research for this and lots of great prospective students who want to work on this shift away from negative risk assessment to positive strength assessment.”
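What mining those narratives might look like is easiest to see in a small sketch. The Python toy below counts strength-oriented versus risk-oriented language in a casenote; the two term lists are hypothetical stand-ins for a real, validated lexicon, and this is not the method from Guha’s actual casenote research.

```python
# A toy sketch of mining narrative casenotes for strength-based as well as
# risk-based signals. This is a simple keyword count; the term lists are
# hypothetical placeholders, not a validated clinical vocabulary.
import re
from collections import Counter

STRENGTH_TERMS = {"supportive", "engaged", "stable", "improving", "sober"}
RISK_TERMS = {"neglect", "relapse", "unsafe", "missed", "eviction"}


def strength_risk_profile(casenote: str) -> dict[str, int]:
    # Tokenize the note and tally mentions from each lexicon.
    words = Counter(re.findall(r"[a-z']+", casenote.lower()))
    return {
        "strength_mentions": sum(words[w] for w in STRENGTH_TERMS),
        "risk_mentions": sum(words[w] for w in RISK_TERMS),
    }


if __name__ == "__main__":
    note = "Mother is engaged and sober; housing remains stable despite a missed visit."
    print(strength_risk_profile(note))
    # {'strength_mentions': 3, 'risk_mentions': 1}
```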

“I’m not against algorithm implementation,” Guha says. “I just think the current philosophy is wrong and there are opportunities to learn. There’s a reason the prime minister doesn’t make algorithmic decisions.” 

Read Shion Guha’s most recent paper on human-centered algorithmic implementation in child welfare: Unpacking Invisible Work Practices, Constraints, and Latent Power Relationships in Child Welfare through Casenote Analysis