The film “Coded Bias” is an eye-opening look at advancements in facial recognition technology and its misuse against minorities. MIT researcher Joy Buolamwini unveils racial bias in facial recognition algorithms.

The film opens innocently enough, discussing Joy’s realization that certain gaming software and apps didn’t recognize the facial features of darker-skinned individuals. This led her to explore how facial recognition was being used in other areas, and ultimately to her discovery of how AI was being used to assess risk in humans and in the field of law enforcement.

Although we may not think of data as biased, that is a dangerous assumption to make. Algorithms are built by humans, and human thinking and attitudes find their way into the code. These biases may be intentional or unintended. The film reveals the lack of oversight and the harm algorithms can bring to people of color. Facial recognition programs are faulty and have been used by law enforcement in egregious ways; there have been multiple cases of mistaken identity and wrongful arrest due to flawed facial recognition. Algorithmic bias has also been uncovered in the medical field, in the financial industry, and in determining sentencing, prison time, and parole conditions.

The film also explores our right to privacy in public spaces, and it makes us think about the data being collected about us…and how it may be used against us. Even if Congress passes laws to limit misuse of this technology, I worry that the tech world will always be ten steps ahead of the layman’s understanding of the ways it can harm.

I recently downloaded an app that promised to superimpose my face onto a famous painting. No surprise…it wouldn’t work.