
Scientists Warn About The Risks of Artificial Intelligence's Ability To Read Emotions



Artificial Intelligence (AI) has become an essential part of our lives over the last decade. From smartphones to cars, autonomous devices are intrinsically connected to our jobs, our leisure time and even our sleep. However, this growing control over our lives seems to have been taken a step further: AI can now read human emotions. Should it be allowed to do so?

An article published on the tech news website Technical.ly reports that researchers from the AI Now Institute warned that Artificial Intelligence “should not be used in decisions that ‘impact people’s lives and access to opportunities,’ such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions”.

The core of the issue is the accuracy of these AI readings and the ethical implications of acting on them. “These types of systems almost always have fairness, accountability, transparency and ethical (‘FATE’) flaws baked into their pattern-matching”, the article explains. “For example, one study found that facial recognition algorithms rated faces of black people as angrier than white faces, even when they were smiling”, it adds.
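To make the kind of disparity described in that study concrete, here is a minimal, purely illustrative sketch in Python of how one might audit an emotion classifier's outputs for group-level differences. The prediction scores and group names below are hypothetical, not taken from the cited study; a real audit would use the model's actual predictions on comparable images.

# Hypothetical illustration: checking an emotion classifier for the kind of
# group-level disparity described above. All numbers are made up.
from statistics import mean

# Predicted "anger" probabilities for smiling faces, grouped by demographic.
predictions = {
    "group_a": [0.12, 0.18, 0.09, 0.15, 0.11],
    "group_b": [0.31, 0.27, 0.35, 0.29, 0.33],
}

# Compare the average predicted anger per group; a large gap on otherwise
# comparable inputs is a red flag for the bias the article describes.
averages = {group: mean(scores) for group, scores in predictions.items()}
gap = max(averages.values()) - min(averages.values())

for group, avg in averages.items():
    print(f"{group}: mean predicted anger = {avg:.2f}")
print(f"disparity between groups: {gap:.2f}")

A check like this only surfaces a symptom; as the article goes on to argue, addressing the underlying FATE issues requires deliberate effort during both design and deployment.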

“Issues regarding FATE in AI will require a continued and concerted effort on the part of those using the technology to be aware of these issues and to address them”, the report asserts. “Greater accuracy and ease in persistent monitoring bring along other concerns beyond ethics. […] With these ethical and privacy concerns, a natural reaction might be to call for a ban on these techniques. Certainly, applying AI to job interview results or criminal sentencing procedures seems dangerous if the systems are learning biases or are otherwise unreliable”, it continues.

“In light of this, we as a society need to carefully consider these systems’ fairness, accountability, transparency and ethics both during design and application, always keeping a human as the final decision-maker”, it emphasises.

Draw your own conclusions…

For more information: https://technical.ly/2020/01/13/artificial-intelligence-ai-can-now-read-emotions-should-it-ethics-machine-learning-hiring/

