Through a glass darkly: why we shouldn’t put too much trust in AI
Devoid of emotions, machines impartially serve humanity, don’t they? Regrettably, as products of human creation, computers reflect our ingenuity and imperfections all the same.
Envision a world where decision-making is handed over to machines: algorithms decide whether you get a loan and what diagnosis you receive. The promise is that they'll eradicate bias. The reality paints a different picture: these non-sentient machines can pass judgment as harshly as your high school classmates once did.
Unfortunately, this is not some sci-fi movie premise; it's the world we live in. AI's potential to boost productivity and profit leads to rushed business decisions with little to no regard for how these machines reinforce the biases we're already drowning in.
Welcome back to another installment of our bi-weekly podcast series, Through a Glass Darkly. In this episode, we delve into the realm of AI bias. How is it possible that machines, driven solely by data and algorithms, exhibit biases and prejudices akin to our own? Join us for an insightful 52-minute exploration where we cover topics such as:
- Who creates AI systems and for whom
- Who’s training AI systems
- The Stone Age of AI systems
- Would we want an AI system to decide whether we should get a loan?
- The rushed adoption of AI to boost profits and productivity
- How AI reinforces stereotypes that are already embedded in society
- Why current AI systems are biased in favor of white people over people of color
- AI systems simply mirror societal biases. Can we do something about it?
We don’t think a sentient AI application exists, meaning that none of the software you interact with, no matter how “smart,” actually has a mind of its own. That fact has both pros and cons. The pro: it won’t take over the world in the near future. The con: it’s being built by people, who feed it the written record of our history, which, as we know, is little more than a fairy tale seasoned with facts and written by the winners.
Well, that might be an overstatement, but I wanted to highlight that we, people, are creating what could one day be perceived as a whole new species: robots. We craft them according to our own perception of equity and morality, yet we often assume those machines are free of bias. It’s crucial to recognize that certain preconceptions and biases are deeply entrenched in our cultural fabric and can therefore surface in our creations, machines included.
Consider, for instance, a question posed to a kid: “Your braids are so beautiful. Was it your mother who did your hair?” Seemingly benign, it’s actually loaded with stereotypes, even if we’re not aware of them when we ask it.
We infuse our perspectives into the tasks we undertake quite naturally and often without conscious awareness. How does this phenomenon unfold in the realm of AI?
- AI is trained on data that is “dirty” and doesn’t truly represent the world. Pedestrian-recognition systems in cars have a hard time recognizing people of color and children, most likely because they were trained on datasets where these groups were underrepresented (see the sketch after this list).
- Many machine learning models learn by consuming people’s input and swallowing whatever is on the internet. It’s as if I gave my kid a plate of smelly garbage for lunch and then complained about her bad breath.
- AI is trained by people. Job postings for roles like “AI trainer” are becoming increasingly common, a reminder that there are people behind AI systems.
- Data is cleaned by people, too. Data labelers, for example, are asked to tag pictures so that systems can learn to recognize objects. And guess what: people can be very mean, labeling overweight people as “losers” and baking those biases into the machine.
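To make that first bullet concrete, here’s a minimal, hypothetical sketch (not something discussed in the episode): a toy classifier trained on synthetic data in which one group is heavily underrepresented. The model fits the majority group’s pattern and performs noticeably worse on the minority group. The group names, the numbers, and the use of scikit-learn are illustrative assumptions.

```python
# Minimal illustration (assumed setup, not from the episode): when one group is
# underrepresented in the training data, the model learns the majority group's
# pattern and performs worse on the minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One feature; the feature-to-label relationship differs between groups.
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)
    return x, y

# Group A dominates the training set; group B is barely represented.
xa, ya = make_group(5000, shift=0.0)
xb, yb = make_group(100, shift=2.0)

model = LogisticRegression()
model.fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced test sets for each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    x_test, y_test = make_group(2000, shift)
    print(f"{name}: accuracy = {model.score(x_test, y_test):.2f}")
```

In a typical run, accuracy for the well-represented group is high while the underrepresented group’s accuracy hovers near a coin flip; the exact numbers don’t matter, the gap does.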
And yet we’re already handing these systems the power to:
- Decide whether you’re eligible for a loan
- Read your medical history, interpret symptoms, give you a diagnosis, and offer a treatment
- Become your psychiatrist, judge your academic paper, and God (or machine?) knows what else