Baylor computer engineering professor Robert J. Marks asks: How will bans help if the enemy doesn’t ban them? But there are some realistic problems:
The more complicated a system becomes, the more difficult it is to analyze all of its possible actions. There are plenty of roads on which to test and tune self-driving cars, but there are not many wars available in which to test and tune autonomous AI weapons. If we seek military superiority to deter aggression, imaginative and creative minds are needed to assess all the possibilities.
Unanticipated consequences will always be a problem for fully autonomous AI. In technology development generally, there is always a tradeoff in which human life is given a price. For example, cheap cars aren’t safe and safe cars aren’t cheap. Cars can be made very safe indeed, if you don’t mind that the poor can’t afford to drive.
Why we can’t just ban killer robots: Autonomous AI weapons are potentially within the reach of terrorists, madmen, and hostile regimes like Iran and North Korea. As with nuclear warheads, we need autonomous AI to counteract possible enemy deployment while avoiding its use ourselves.
See also: Jay Richards: The way the media cover AI, you’d swear they had invented being hopelessly naïve