Over-intelligent AI Machines ‘Can Make Things Worse’, researcher says
“Why asking an AI to explain itself can make things worse” is the title of a recent article published in MIT Technology Review by researcher and writer Douglas Heaven, in which he explains why more transparent and intelligent AI machines could “lead us to over-trust them”.
“People are primed to trust computers. It’s not a new phenomenon”, Mr Heaven asserted. “When it comes to automated systems from aircraft autopilots to spell checkers, studies have shown that humans often accept the choices they make even when they are obviously wrong. But when this happens with tools designed to help us avoid this very phenomenon, we have an even bigger problem”, he explained.
However, for some researchers, the solution could lie in the way AI explains itself. “It is easier to understand what an automated system is doing—and see when it is making a mistake—if it gives reasons for its actions the way a human would”, the author of the report states. Yet he also points out the limits of this approach: “For one thing, it is not clear whether a machine-learning system will always be able to provide a natural-language rationale for its actions. Take DeepMind’s board-game-playing AI AlphaZero. One of the most striking features of the software is its ability to make winning moves that most human players would not think to try at that point in a game. If AlphaZero were able to explain its moves, would they always make sense?”, he wondered.
“Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social-media feeds—and anyone sitting in the backseat of a self-driving car”, Mr Heaven wrote. “Explanations that anyone can understand should help pop that bubble”, he added.
Draw your own conclusions…