The eye of the beholder

The idea that artificial intelligence will help us prepare for the world of tomorrow is woven into our collective imagination. Based on what we’ve seen so far, however, AI is far more capable of reproducing the past than predicting the future.

This is because AI algorithms are trained on data. By its very nature, data is an artifact of something that happened in the past. Did you turn left or right? Did you take the stairs up or down? Was your coat red or blue? Did you pay the electricity bill on time or late?

Data is a relic, even if it’s only a few milliseconds old. And it’s safe to say that most AI algorithms are built on data sets that are significantly older. In addition to vintage and accuracy, you should consider other factors, such as who collected the data, where it was collected, and whether the data set is complete or has gaps.

A perfect database does not exist; at best, a database is a distorted and incomplete reflection of reality. And when we decide which data to use and which to ignore, we are influenced by our inherent biases and preexisting beliefs.

“Assume your data is a perfect reflection of the world. It’s still problematic, because the world itself is biased, right? So now you have a perfect picture of a distorted world,” says Julia Stoyanovich, associate professor of computer science and engineering at NYU Tandon and director of NYU’s Center for Responsible AI.

Can artificial intelligence help us reduce the biases and prejudices that creep into our databases, or will it simply amplify them? And who gets to decide which biases are tolerable and which are truly dangerous? How are bias and fairness related? Does every biased decision produce an unfair result? Or is the relationship more complicated?

Today’s conversations about AI bias tend to focus on highly visible social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens upon dozens of well-known biases (such as confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion, survivorship bias, omitted variable bias, and many, many others). Jeff Desjardins, founder and editor-in-chief of Visual Capitalist, has published a fascinating infographic outlining 188 cognitive biases, and those are just the ones we know about.

Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” mainly refers to human biases embedded in historical data. “Things will get more complicated when AIs start creating their own biases,” she says.

She predicts that AIs will find relationships in data and assume they are causal, even if those relationships don’t actually exist. Imagine, she says, an AI-powered edtech system that serves students increasingly difficult questions based on their ability to answer previous questions correctly. The AI would quickly develop biases about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety.

The edtech AI’s “smart” students would get the harder questions and the rest would get easier ones, a feedback loop that leads to uneven learning outcomes that might not be noticed until late in the semester, or ever. Worse yet, the AI’s bias would likely find its way into the system’s database and follow students from one class to the next.
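To make that feedback loop concrete, here is a minimal Python sketch of a hypothetical adaptive-difficulty system along the lines Chubinidze describes. Everything in it, the update rules, the numbers, the names, is invented for illustration:

```python
import random

def run_session(true_skill, distraction=0.0, questions=20):
    """Simulate one student in a hypothetical adaptive-difficulty system."""
    difficulty = 0.5          # current question difficulty
    ability_estimate = 0.5    # the label that follows the student afterward
    for _ in range(questions):
        # Chance of a correct answer: skill versus difficulty, minus transient
        # factors (hunger, fatigue, anxiety) that the system cannot observe.
        p_correct = max(0.0, min(1.0, 0.5 + (true_skill - difficulty) - distraction))
        correct = random.random() < p_correct
        # The system sees only "correct or not," so a bad morning is
        # indistinguishable from low ability.
        difficulty += 0.05 if correct else -0.05
        ability_estimate += 0.02 if correct else -0.02
    return ability_estimate

random.seed(0)
# Two students with identical true skill; the second was distracted.
print(run_session(true_skill=0.6))                   # tends toward a "smart" label
print(run_session(true_skill=0.6, distraction=0.2))  # tends toward a lower label
```

Once that ability estimate is written back into the database, the gap persists long after the distraction is gone.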

While the edtech example is hypothetical, there have been enough real-world cases of AI bias to raise alarms. In 2018, Reuters reported that Amazon had scrapped an AI recruiting tool after it developed a bias against female applicants. In 2016, Microsoft’s Tay chatbot was shut down after it began posting racist and sexist comments.

Maybe I’ve been watching too many episodes of The Twilight Zone and Black Mirror, because I find it hard to see this ending well. If you have any doubts about the virtually inexhaustible power of our biases, please read Thinking, Fast and Slow by Nobel laureate Daniel Kahneman. To illustrate our susceptibility to bias, Kahneman asks us to imagine a bat and a baseball that together cost $1.10. The bat, he tells us, costs a dollar more than the ball. How much does the ball cost?

As humans, we tend to favor simple solutions. It’s a bias we all share. As a result, most people will intuitively leap to the easiest answer, that the bat costs a dollar and the ball costs ten cents, even though that answer is wrong and just a few minutes of thought will reveal the correct one. I actually went looking for a pen and paper so I could write out the algebra, something I hadn’t done since ninth grade.
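For anyone who skipped the pen and paper, the algebra is short. Let \(b\) be the price of the ball in dollars; the bat then costs \(b + 1.00\), and together they cost $1.10:

\[
b + (b + 1.00) = 1.10 \quad\Longrightarrow\quad 2b = 0.10 \quad\Longrightarrow\quad b = 0.05
\]

The ball costs five cents, and the bat costs $1.05.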

Our biases are pervasive. The more granular our data sets become, the more they will reflect our ingrained biases. The problem is that we use those biased data sets to train artificial intelligence algorithms, and then use the algorithms to make decisions about hiring, college admissions, financial creditworthiness, and the allocation of public safety resources.

We also use artificial intelligence algorithms to optimize supply chains, detect diseases, accelerate the development of life-saving drugs, find new sources of energy, and search for illicit nuclear materials around the world. As we apply AI more broadly and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is entangled with the idea of unfairness. Just because a solution to a particular problem seems “unbiased” doesn’t mean that it’s fair, and vice versa.
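To see how “unbiased” and fair can pull apart, consider a toy Python sketch with invented numbers: two groups of applicants are approved at exactly the same rate, so the decisions look unbiased by one common statistical measure (demographic parity), yet qualified applicants in one group fare far worse than in the other (a violation of what is often called equal opportunity):

```python
def selection_rate(decisions):
    """Fraction of applicants approved."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, qualified):
    """Fraction of qualified applicants who were approved."""
    approved_qualified = sum(d for d, q in zip(decisions, qualified) if q)
    return approved_qualified / sum(qualified)

# Hypothetical data: 1 = qualified/approved, 0 = not.
qualified_a = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]   # 8 of 10 in group A are qualified
decisions_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # 5 approved, all of them qualified

qualified_b = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # 2 of 10 in group B are qualified
decisions_b = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # 5 approved, only 2 qualified

# Demographic parity: identical selection rates (0.5 vs 0.5) -- looks "unbiased".
print(selection_rate(decisions_a), selection_rate(decisions_b))

# Equal opportunity: qualified applicants in group A are approved 62.5% of the
# time versus 100% in group B -- the same decisions now look unfair.
print(true_positive_rate(decisions_a, qualified_a),
      true_positive_rate(decisions_b, qualified_b))
```

Neither number is wrong; the two metrics simply formalize different intuitions about fairness, and in general they cannot both be satisfied when the groups’ underlying qualification rates differ.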

“There really is no mathematical definition of fairness,” says Stoyanovich. “What we talk about in general may or may not apply in practice. Any definition of bias and fairness must be grounded in a particular domain. You have to ask: ‘Who is affected by the AI? What are the harms, and who is harmed? What are the benefits, and who benefits?’”

The current wave of hype around AI, including the ongoing frenzy around ChatGPT, has created unrealistic expectations about AI’s strengths and capabilities. “Senior decision makers are often shocked to learn that AI will fail at trivial tasks,” says Angela Sheffield, an expert in the use of AI for nuclear nonproliferation and national security. “Things that are easy for humans are often really hard for artificial intelligence.”

Aside from lacking basic common sense, Sheffield points out, AI is not inherently neutral. The idea that artificial intelligence will become fair, neutral, helpful, useful, beneficial, responsible, and aligned with human values if we simply eliminate bias is wishful thinking. “The goal is not to create a neutral AI. The goal is to create customizable AI,” she says. “Instead of making assumptions, we need to find ways to measure and correct for bias. If we don’t deal with bias when building AI, it will affect performance in ways we can’t predict.” If a biased database makes it more difficult to reduce the spread of nuclear weapons, then that’s a problem.

Gregor Stühler is the co-founder and CEO of Scoutbee, a company based in Würzburg, Germany, that specializes in AI-powered procurement technology. From his perspective, biased data sets make it harder for AI tools to help companies find good sourcing partners. “Let’s take a scenario where a company wants to buy 100,000 tons of bleach, and they’re looking for the best supplier,” he says. Supplier data can be biased in many ways, and an AI-assisted search will likely reflect the biases or inaccuracies of the supplier database. In the bleach scenario, that could result in a nearby supplier being passed over in favor of a larger or better-known supplier on another continent.
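A tiny Python sketch shows one way that can happen; the suppliers, field names, and weights below are all invented. If the underlying data over-represents large, well-known suppliers, a learned relevance signal behaves like the “mentions” term here and swamps proximity:

```python
# Hypothetical supplier records; "mentions_in_data" stands in for how heavily
# a supplier is represented in the data the ranking model learned from.
suppliers = [
    {"name": "Local Chem GmbH", "distance_km": 50,   "mentions_in_data": 3},
    {"name": "GlobalBleach Co", "distance_km": 9000, "mentions_in_data": 400},
]

def relevance(s):
    # A popularity-skewed score: representation in the data dominates,
    # while distance barely moves the needle.
    return s["mentions_in_data"] - s["distance_km"] / 100

for s in sorted(suppliers, key=relevance, reverse=True):
    print(f'{s["name"]}: {relevance(s):.1f}')
# GlobalBleach Co: 310.0   <- ranked first, despite being a continent away
# Local Chem GmbH: 2.5
```

The nearby supplier never had a chance: not because it is worse, but because the data knows less about it.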

From my perspective, these kinds of examples support the idea of managing AI bias issues at the domain level, rather than trying to develop a universal or comprehensive top-down solution. But is that too simplistic an approach?

For decades, the tech industry has answered complex moral questions by invoking utilitarian philosophy, which posits that we should strive to create the greatest good for the greatest number of people. In The Wrath of Khan, Mr. Spock says, “The needs of the many outweigh the needs of the few.” It’s a simple statement that captures the utilitarian ethos. With all due respect to Mr. Spock, however, it doesn’t take into account that circumstances change over time. Something that seemed wonderful for everyone yesterday may not seem so wonderful tomorrow.

Our current infatuation with artificial intelligence may pass, just as our love of fossil fuels has been tempered by our concerns about climate change. Perhaps the best course of action is to assume that all AI is biased and that we can’t just use it without considering the consequences.

“When we think about building an AI tool, we first have to ask ourselves whether the tool is really needed here, or whether a human should be doing it, especially if we want the AI tool to predict something like a social outcome,” says Stoyanovich. “We have to think about the risks and about how badly someone will be hurt when the AI gets it wrong.”


Author’s note: Julia Stoyanovich is a co-author of a five-volume comic book about AI, which can be downloaded for free from GitHub.
