
Artificial Intelligence algorithms are biased, says researcher



Many researchers have warned about the dangers of placing too much trust in Artificial Intelligence. They argue that even if algorithms can make the decision-making process easier, they do not necessarily make it fairer or better for humans.

In a recent article published on the online news site Vox, tech expert Rebecca Heilweil explains that Artificial Intelligence-based systems “can be biased based on who builds them, how they’re developed, and how they’re ultimately used”.

“This is commonly known as algorithmic bias”, she asserts. “It’s tough to figure out exactly how systems might be susceptible to algorithmic bias, especially since this technology often operates in a corporate black box. We frequently don’t know how a particular artificial intelligence or algorithm was designed, what data helped build it, or how it works”, she adds.

And that is the core of the issue: data. These systems are fed enormous amounts of data, but when that data is incomplete or poorly selected, the results can be biased or simply wrong. “Often, the data on which many of these decision-making systems are trained or checked are not complete, balanced, or selected appropriately, and that can be a major source — although certainly not the only source — of algorithmic bias”, the expert affirms.
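To see how an unbalanced training set can translate into uneven outcomes, consider the following minimal sketch. It is not taken from the Vox article; the data is entirely synthetic and the group names, shifts, and thresholds are illustrative assumptions. A simple model trained mostly on one group ends up noticeably less accurate on the under-represented one.

```python
# Illustrative sketch (synthetic data, hypothetical groups "A" and "B"):
# a model trained on data dominated by group A performs worse on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate two-feature samples; `shift` moves the group's true decision boundary."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + X[:, 1] > shift.sum()).astype(int)
    return X, y

# Group A dominates the training data; group B is badly under-represented.
Xa, ya = make_group(5000, np.array([0.0, 0.0]))
Xb, yb = make_group(100,  np.array([1.5, -1.0]))

X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced, held-out samples drawn separately from each group.
for name, shift in [("A", np.array([0.0, 0.0])), ("B", np.array([1.5, -1.0]))]:
    X_test, y_test = make_group(2000, shift)
    acc = (model.predict(X_test) == y_test).mean()
    print(f"group {name}: accuracy = {acc:.2%}")
```

Because the learned boundary is fitted almost entirely to group A, the printed accuracy for group B comes out several points lower, even though nothing in the model explicitly references group membership. That gap is what “algorithmic bias” from skewed training data looks like in its simplest form.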

“So you might have the data to build an algorithm. But who designs it, and who decides how it’s deployed? Who gets to decide what level of accuracy and inaccuracy for different groups is acceptable? Who gets to decide which applications of AI are ethical and which aren’t?” she wonders. “Just because a technology is accurate doesn’t make it fair or ethical. For instance, the Chinese government has used artificial intelligence to track and racially profile its largely Muslim Uighur minority, about 1 million of whom are believed to be living in internment camps”, she continues.

“There’s no guarantee companies building or using this tech will make sure it’s not discriminatory, especially without a legal mandate to do so. It would seem it’s up to us, collectively, to push the government to rein in the tech and to make sure it helps us more than it might already be harming us”, Ms Heilweil concludes.

Draw your own conclusions…

For more information: https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency

