Teaching AI To Understand Human Desires Could Be A Solution To Many Problems, Researcher Claims
With recent developments in Artificial Intelligence, machines have reached a higher level of sophistication. However, some experts warn about the dangers of giving underspecified commands to intelligent machines, which could have catastrophic consequences.
In a recent article published in the online science magazine Quanta Magazine, science journalist Natalie Wolchover explains that “the danger of having artificially intelligent machines do our bidding is that we might not be careful enough about what we wish for”.
“The lines of code that animate these machines will inevitably lack nuance, forget to spell out caveats, and end up giving AI systems goals and incentives that don’t align with our true preferences”, she writes. “A major aspect of the problem is that humans often don’t know what goals to give our AI systems, because we don’t know what we really want”, she adds.
However, some scientists have suggested that the solution is to program intelligent machines to understand human preferences rather than to achieve a specific task. “Instead of machines pursuing goals of their own, the new thinking goes, they should seek to satisfy human preferences; their only goal should be to learn more about what our preferences are”, Ms Wolchover says. “With standard inverse reinforcement learning, a machine tries to learn a reward function that a human is pursuing. But in real life, we might be willing to actively help it learn about us”, she adds.
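To make the idea concrete, here is a minimal sketch of preference inference in the spirit of inverse reinforcement learning: instead of being handed a goal, the machine watches a human's choices and infers which hidden reward weights best explain them. The toy options, their features, and the Boltzmann choice model below are illustrative assumptions for this sketch, not any researcher's actual system.

```python
import math

# Toy "world": each option the human can pick is described by two
# features, e.g. (speed, safety). The human's true preference
# weights over these features are hidden from the machine.
options = {
    "fast_risky": (0.9, 0.1),
    "slow_safe":  (0.2, 0.9),
    "balanced":   (0.5, 0.5),
}

def utility(features, weights):
    """Linear reward model: dot product of features and weights."""
    return sum(f * w for f, w in zip(features, weights))

def choice_prob(chosen, weights, beta=5.0):
    """Boltzmann-rational human model: an option is chosen with
    probability proportional to exp(beta * utility)."""
    scores = {name: math.exp(beta * utility(f, weights))
              for name, f in options.items()}
    return scores[chosen] / sum(scores.values())

def infer_weights(demonstrations, grid_step=0.1):
    """Grid-search the weight simplex (w_speed + w_safety = 1) for
    the weights that maximize the likelihood of the observed choices."""
    best_w, best_ll = None, -float("inf")
    for i in range(int(1 / grid_step) + 1):
        w = (i * grid_step, 1.0 - i * grid_step)
        ll = sum(math.log(choice_prob(c, w)) for c in demonstrations)
        if ll > best_ll:
            best_w, best_ll = w, ll
    return best_w

# The human mostly chooses the safe option, so the machine should
# infer that the safety feature carries most of the weight.
demos = ["slow_safe"] * 5 + ["balanced"]
weights = infer_weights(demos)
print(weights)  # the safety weight should dominate the speed weight
```

This is the passive form of the idea: the machine only observes. Wolchover's point about humans "actively helping" would correspond to the human also answering the machine's queries, which this sketch does not model.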
“Like the robots, we’re also trying to figure out our preferences, both what they are and what we want them to be, and how to handle the ambiguities and contradictions”, the journalist writes. “Like the best possible AI, we’re also striving — at least some of us, some of the time — to understand the form of the good, as Plato called the object of knowledge. Like us, AI systems may be stuck forever asking questions — or waiting in the off position, too uncertain to help”, she concludes.
Draw your own conclusions…