Algorithmic Bias

Timothy Ferrell
3 min read · Jul 25, 2021

This week we looked at the ways algorithms aren’t inherently objective, but rather can lend the guise of objectivity to the presentation of inherently biased data.

Some of the most convincing examples of this were algorithms that don’t take the “Moneyball” approach of trying to find undiscovered or undervalued talent, but instead attempt to reach conclusions that duplicate those already reached.

It was once again a frustrating segment for me. While there were good examples of bad algorithms, and good examples of the negative effect an algorithm can have, I wasn’t at all convinced that the use of an automated system in a good faith effort to achieve objectivity (and failing) should ever be considered morally inferior to not attempting to achieve objectivity at all.

We heard of a few dystopian nightmares, which is inevitable when real-world decisions have to be made based on predictions of future crime. No one familiar with Philip K. Dick’s Minority Report would ever imagine that predictions about individuals’ future actions, combined with the authority of the state, would lead to a perfect society free of crime. The unfortunate reality, though, is that predictions about the likely future actions of individuals ARE the basis on which many decisions in our justice system not only are made, but must be made. How do we KNOW if someone is dangerous or harmless? Likely to re-offend, or likely to have been scared straight?

Currently, these judgments are just that: judgments of individuals. They may be riddled with individual biases, colored by the ability of a defendant to “present himself” well, or simply fall to the capricious whims or predilections of a single person at any given point in time. By instead using an algorithm, ANY consistently applied algorithm, these influences are at WORST greatly ameliorated. We saw evidence that a specific algorithm, COMPAS, falsely rated whites as less likely to re-offend and falsely rated blacks as more likely to re-offend. Is this really the best tool we have to predict future arrests?

In Technically Wrong (Wachter-Boettcher, 2017) we find some evidence that we don’t have an algorithm that simply hates black and brown people, but rather one that is concerned almost exclusively with correctly guessing how likely a given person is to commit, in the words of COMPAS’s developer, Northpointe, “a finger-printable arrest involving a charge and a filing for any uniform crime reporting (UCR) code.”

Looking into the numbers, we find that the rate at which a given score correctly predicts reoffending doesn’t display racial disparity. But because of the reality we live in, where one group is arrested at five times the rate of whites, it is mathematically impossible to achieve both parity in predictive accuracy (which was achieved) and parity in the rate of false positives.
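To make that tradeoff concrete, here is a minimal sketch in Python using made-up numbers (not the actual COMPAS or arrest figures): if two groups share the same positive predictive value and the same true positive rate but differ in their base rate of rearrest, their false positive rates cannot be equal.

```python
# Illustrative sketch with hypothetical numbers, not real COMPAS data:
# equal predictive accuracy (PPV) plus equal true positive rate forces
# unequal false positive rates whenever base rates differ.

def false_positive_rate(base_rate, ppv, tpr):
    """Derive the FPR implied by a base rate, PPV, and TPR.

    TP fraction = tpr * base_rate
    FP fraction = TP * (1 - ppv) / ppv
    FPR         = FP / (1 - base_rate)
    """
    tp = tpr * base_rate
    fp = tp * (1 - ppv) / ppv
    return fp / (1 - base_rate)

ppv, tpr = 0.6, 0.7  # identical "accuracy" for both groups
for name, base_rate in [("group A", 0.2), ("group B", 0.4)]:
    print(name, round(false_positive_rate(base_rate, ppv, tpr), 3))

# group A 0.117
# group B 0.311  -> same PPV and TPR, very different false positive rates
```

The point of the sketch is only that the disparity in false positives falls out of the arithmetic itself, not out of any malice coded into the model.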

Many of my peers seem to have taken a surface look at this week’s discussion question, and implicitly assumed that the force of the state was a possible, effective, and preferable way to make algorithms “do no harm”. I take a dimmer view of coercive measures, and of the government’s ability to legislate technology. I understand the question not as a choice between unbiased automated algorithms and biased automated algorithms, but as one between the existing systems based on human judgment and new systems embodying the sincere efforts of their designers to make accurate predictions about the future.

Every technology has flaws, but the use of predictive technology is especially likely to improve, because we have a data set on which to make objective observations and testable predictions. Just like Netflix doesn’t always predict a movie I will like, no COMPAS score or analogous system will always make the right claims about the future. Short of total knowledge of a deterministic system, no one ever will.

As long as we are making assessments about risk and the likely future outcomes of individuals, we should have strong incentives to make our predictions as correct as possible. Where the cost of those predictions is borne by those making them, no intervention is necessary or justifiable. Where the cost is borne by the accused individual or the public at large, we have a duty to make those predictions transparent, and as objective and accurate as possible. Unless and until a better system can be devised and implemented, we have a responsibility to use what we have.

