Stuck On Algorithms

Algorithms play an important role in our daily lives. They tell us what to shop for, they decide whether we get a loan, and soon they may shape how we make healthcare decisions. These could have significant implications for social work practice. Learning about concepts such as algorithms and artificial intelligence is part of my journey of trying to get “unstuck” about technology issues. I set out to get clarity on algorithms and how social workers can gain a voice in their design. It’s important to back up just a bit and define what they are…

I found this one-minute video via BBC Learning that sums it up nicely…

This illustrates the need for algorithms to be clear, concise, and accurate. As algorithms, machine learning, and other forms of artificial intelligence take on more complex problems, this gets tricky. For social work practice, the question is not “if” algorithms will impact our practice but “when” and “how”. This post was inspired by a medical blogger, Dr. Berci Mesko aka “The Medical Futurist”. He consistently explains how technology will affect medical care.
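To make the idea concrete, here is a minimal sketch of an algorithm as nothing more than a fixed, explicit sequence of steps that turns inputs into a decision. The loan-screening rule and its threshold are entirely made up for illustration, not a real lending model:

```python
# A toy "algorithm": a precise, repeatable sequence of steps.
# The rule and the 0.4 threshold are hypothetical, for illustration only.

def toy_loan_decision(income, debt):
    """Approve if the debt-to-income ratio is below 0.4."""
    if income <= 0:
        return "deny"        # step 1: reject invalid input
    ratio = debt / income    # step 2: compute a ratio
    if ratio < 0.4:          # step 3: apply a fixed threshold
        return "approve"
    return "deny"

print(toy_loan_decision(50000, 10000))  # ratio 0.2 -> approve
print(toy_loan_decision(50000, 30000))  # ratio 0.6 -> deny
```

Even a toy rule like this shows why clarity matters: every step and threshold is a design choice someone made, and each one can be questioned.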

In a recent post he explains the medical algorithms currently approved by the Food and Drug Administration (FDA) in the United States. It provides an excellent overview of what algorithms are used for in medicine. What caught my eye were the four he highlighted as relevant to psychiatry.

My enthusiasm for technology has been tempered over the last year as I have learned more about algorithms and machine learning. I recently read and reviewed “Weapons of Math Destruction,” which examines the potential faults of algorithms: that the algorithms determining teacher evaluations, college rankings, and criminal justice sentencing can be inherently biased. Social workers should be aware of potential biases in these systems. What I struggled to find was a way to analyze these issues in a concise way.

I began to question concerns about medical algorithms, and my Twitter crew came through…

Those four algorithms for psychiatry are possible signposts. If the FDA approval is based on relative accuracy comparison by humans (example, ADHD), I have questions, but not necessarily surprised.— Stephen Cummings, LISW 🎙💻 (@spcummings) June 18, 2019

Along with (for some of these) who gets the data, what else is data used for, is there any kind of auditing…
— One Ring (doorbell) to surveil them all… (@hypervisible) June 18, 2019

Hard to say w/o more detailed breakdown, but one issue is definitely the “usual” question: what populations were used to train the algos?
— One Ring (doorbell) to surveil them all… (@hypervisible) June 18, 2019

The most helpful resource I found was shared by Dr. Laura Nissen: the AI Blindspot project by the MIT Media Lab and others.

Ok that is completely fascinating and I don’t have complete answers. So far I’ve found 2 things I like that seem like promising scaffolding to decide “do I like this?” Or “do I not like this?” Here’s one of them… https://t.co/4ELcqsBRv3— Laura Nissen, PhD, LMSW (@lauranissen) June 18, 2019

They walk you through the potential errors in building AI and algorithms. They provide a series of cards that give examples of each error, along with further resources…

I found the card on “Representative Data” to best capture my initial concerns about data diversity: in healthcare, we want to make sure that diverse data sets are available. From the social work perspective, two more notions of algorithmic justice are important.
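As a rough sketch of what checking representativeness can mean in practice, one might compare each group’s share of the training data against its share of the population the system is meant to serve. The groups and all of the numbers below are invented for illustration:

```python
from collections import Counter

# Hypothetical training records, each tagged with a demographic group.
training_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50

counts = Counter(training_groups)
total = sum(counts.values())

# Assumed population shares (invented for illustration).
population_share = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, pop in population_share.items():
    train = counts[group] / total
    # Crude rule of thumb: flag a group with less than half
    # of its population share in the training data.
    flag = "UNDER-REPRESENTED" if train < pop / 2 else "ok"
    print(f"group {group}: train {train:.0%} vs population {pop:.0%} -> {flag}")
```

A real audit would be far more careful than this half-of-population-share rule of thumb, but even a crude check like this surfaces the question the card is asking: who is missing from the data?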

The concept of Discrimination by Proxy is a critical one. It means the algorithm may “have an adverse effect on vulnerable populations even without explicitly including protected characteristics. This often occurs when a model includes features that are correlated with these characteristics.” An example I have heard about is algorithms that decide criminal justice sentencing, where features correlated with race and socio-economic status can drive the sentence rather than the relevant factors.
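A minimal sketch of how a proxy can smuggle a protected characteristic back in: even when the protected attribute is never an input, a strongly correlated feature (here a made-up “neighborhood” variable) reproduces much of the same disparity. All of the data and the scoring rule are synthetic, purely for illustration:

```python
import random
random.seed(0)

# Synthetic people: a protected attribute and a correlated proxy.
# Neither the data nor the rule reflects any real system.
people = []
for _ in range(1000):
    protected = random.random() < 0.5
    # Neighborhood correlates strongly (90%) with the protected attribute.
    neighborhood = "north" if (protected ^ (random.random() < 0.1)) else "south"
    people.append((protected, neighborhood))

# A scoring rule that never looks at the protected attribute,
# only at the correlated proxy.
def score(neighborhood):
    return 0.2 if neighborhood == "north" else 0.8

def approved(person):
    return score(person[1]) >= 0.5

in_group = [p for p in people if p[0]]
out_group = [p for p in people if not p[0]]
rate_protected = sum(approved(p) for p in in_group) / len(in_group)
rate_other = sum(approved(p) for p in out_group) / len(out_group)

print(f"approval rate, protected group: {rate_protected:.0%}")
print(f"approval rate, everyone else:   {rate_other:.0%}")
# The gap persists even though 'protected' was never a model input.
```

This is the pattern behind the sentencing example: remove race from the model, keep zip code or arrest history, and the disparity can survive largely intact.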

Also important to social workers is the Right to Contest. If one of these common blindspots is found, is there a means to reconcile it? Is there enough transparency in the algorithm to fix the problem? This is important when thinking about empowering the individuals and families we serve.

As more and more decisions are made by algorithms, I found this framework helpful for thinking critically about them. I hope this overview of the issues gets you “unstuck” about algorithms too.