The real problem of bias?

Many worry that AI systems display biases that can amplify and perpetuate inequalities. Some of these biases stem from the limited datasets used to train AI. For example, face recognition systems have shown higher accuracy on white faces than on darker-skinned faces, largely because the latter are underrepresented in training data. But other biases in a system may simply reflect pre-existing societal biases.
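The dataset-imbalance mechanism can be illustrated with a toy sketch. The data, group names, and offsets below are all hypothetical, invented for illustration: a simple threshold classifier fit on data dominated by one group ends up tuned to that group's distribution, and accuracy on the underrepresented group suffers.

```python
import random

random.seed(0)

def sample(group, label, n):
    # Hypothetical toy data: each group's "feature" is shifted by a
    # different offset, mimicking e.g. different imaging conditions.
    offset = 0.0 if group == "A" else 1.5
    center = offset + (2.0 if label == 1 else 0.0)
    return [(random.gauss(center, 0.7), label, group) for _ in range(n)]

# Training set: group A heavily overrepresented, as in many real datasets.
train = (sample("A", 0, 450) + sample("A", 1, 450)
         + sample("B", 0, 50) + sample("B", 1, 50))

def fit_threshold(data):
    # Pick the single decision threshold minimizing overall training error;
    # with 90% of the data from group A, the fit is dominated by group A.
    best_t, best_err = None, float("inf")
    for t in (x / 100 for x in range(-200, 500)):
        err = sum((x >= t) != y for x, y, _ in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

t = fit_threshold(train)

# Evaluate per group on balanced held-out data.
test = (sample("A", 0, 500) + sample("A", 1, 500)
        + sample("B", 0, 500) + sample("B", 1, 500))

def accuracy(group):
    pts = [(x, y) for x, y, g in test if g == group]
    return sum((x >= t) == y for x, y in pts) / len(pts)

print(f"accuracy on group A: {accuracy('A'):.2f}")
print(f"accuracy on group B: {accuracy('B'):.2f}")
```

Running this, the majority group scores noticeably higher than the minority group, not because the model "intends" anything, but because the single threshold was effectively optimized for the group that dominated training.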

Attempts to eliminate these biases could have unintended consequences. In the pursuit of political correctness and equality, we risk overcorrecting and suppressing truths that are uncomfortable or read as prejudiced. Yet if we cannot see the truth, we cannot design the interventions that would make reality more equal.

Generating fabricated “neutral” data could also backfire dangerously, spreading misinformation and errors whose consequences might be worse than the inequalities we were trying to correct.

Data is inherently shaped by human experiences and societal contexts. Deciding what counts as neutral data requires value judgments that may inadvertently introduce new biases into the system.

As long as our data is human, our data will never be unbiased. And as long as we are human, we will always find it easier to see the bias in others' judgments than in our own.