Researchers Explain The Ethical Implications Of AI Decision-Making
In this new era, where almost every device we use is labelled as “intelligent”, many specialists have tried to analyse the ethical implications of intelligent machines and how they are affecting our lives.
A recent article published on the World Economic Forum website by Harvard University expert Mark Esposito and fellow researchers Terence Tse, Joshua Entsminger and Aurélie Jean explains that cultural differences are a key element in artificial-intelligence development.
“Over the past few years, the MIT-hosted ‘Moral Machine’ study has surveyed public preferences regarding how artificial-intelligence applications should behave in various settings”, the article states. “One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, how one thinks it should respond depends largely on where one is from, and what one knows about the pedestrians or passengers involved”, it continues.
This means that the behaviour of AI systems ultimately reflects the ethical preferences of their makers. Thus, an action that is legal in one country could be illegal in another. “Consider the following scenario: a car from China has different factory standards than a car from the US, but is shipped to and used in the US. This Chinese-made car and a US-made car are heading for an unavoidable collision. If the Chinese car’s driver has different ethical preferences than the driver of the US car, which system should prevail?”, the researchers wonder. “A Chinese-made car, for example, might have access to social-scoring data, allowing its decision-making algorithm to incorporate additional inputs that are unavailable to US carmakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?”, they add.
“In an age of AI, some components of global value chains will end up being automated as a matter of course, at which point they will no longer be regarded as areas for firms to pursue a competitive edge”, Mr Esposito and his colleagues write. “The process for determining and adjudicating algorithmic accountability should be one such area. One way or another, decisions will be made. It is better that they be settled uniformly, and as democratically as possible”, they conclude.
Draw your own conclusions…
For more information: https://www.weforum.org/agenda/2019/05/who-should-decide-how-algorithms-decide