Microsoft, Amazon & Intel may destroy humanity with Artificial Intelligence
Microsoft, Amazon, and Intel are among the leading tech companies putting the entire world at risk through the development of killer robots, according to a survey that examined key players from the tech industry on their stance on lethal autonomous weapons.
The Dutch NGO Pax, which carried out the study, questioned why so many high-tech companies are working on developing military-related software that could lead to the death of humans. The NGO ranked 50 companies by three criteria: whether they were developing technology that supports deadly AI, whether they are working on military projects, and whether they will abstain from such projects in the future.
"Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" asked Frank Slijper, lead author of the shocking report.
"Is this the beginning of a Terminator movie?" asked Vicki Batts, writing for Natural News. "The NSA does already have something called 'Skynet,' which is a surveillance program for suspected terrorists. But the development of A.I. weapon systems designed to independently select and attack targets is another ballgame."
The advancement of A.I. and robotics could be the beginning of a whole new era — one in which robots could decide to annihilate the human race in the twinkle of an eye.
The survey found that twenty-two firms were of "medium concern," while twenty-one fell into the "high concern" category.
The idea of employing AI in the military has sparked ethical debates in recent times. Critics warn that it has the potential to put international security in jeopardy and to usher in a third revolution in warfare, after gunpowder and the atomic bomb.
"Autonomous weapons will inevitably become scalable weapons of mass destruction, because if the human is not in the loop, then a single person can launch a million weapons or a hundred million weapons," Stuart Russell, a computer science professor at the University of California, was quoted as saying on Natural News.
"The fact is that autonomous weapons are going to be developed by corporations, and in terms of a campaign to prevent autonomous weapons from becoming widespread, they can play a very big role," he added.
Russell also had this to say about the future: "More worrying still are new categories of autonomous weapons that don't yet exist — these could include armed mini-drones like those featured in the 2017 short film 'Slaughterbots.'"
"With that type of weapon, you could send a million of them in a container or cargo aircraft — so they have the destructive capacity of a nuclear bomb but leave all the buildings behind," said Russell.
The European Union (EU) published guidelines that companies and governments should adhere to when developing AI, including the need for human oversight, working towards societal and environmental wellbeing in a non-discriminatory way, and respecting privacy.
Russell argued it is important that governments agree on an international ban on lethal AI, a law that would clearly stipulate that "machines that can decide to kill humans shall not be developed, deployed, or used."