Should we ban killer robots and let hostile powers develop them?

That’s the real decision, according to computer science prof Robert J. Marks at Mind Matters:

My position counters that of over a thousand AI experts who put their X on a letter demanding a ban on all autonomous weapons in 2015. Celebrity signatories included the late Stephen Hawking, Elon Musk, Apple’s Steve Wozniak, Noam Chomsky, and Skype co-founder Jaan Tallinn.

These luminaries are looking at their feet rather than over the landscape of behavioral and historical reality. …

The problem is, constructing offensive autonomous AI weapons is a lot easier than developing the atomic bomb was. Autonomous AI weapons are potentially within the reach of terrorists, madmen, and hostile regimes like Iran and North Korea. As with nuclear warheads, we need autonomous AI to counteract possible enemy deployment while avoiding its use ourselves. More.

Reality check: Why do people think that North America has never been nuked? Because the world’s dictators would not go that far morally?

See also: Top Ten AI hypes of 2018. The problem with believing the hype and nonsense about AI, Marks warns, is that we won’t know a real threat when we see one.