There are companies like Lockheed Martin making autonomous killing robots, and there are companies like Google making self-driving cars (which can kill people through accident or poor design). At least cars don’t tend to kill on purpose, and the Google self-driving car hasn’t caused a deadly accident (or, indeed, an accident of any kind). So, what’s worse: intentionally creating machines that can destroy humans, or doing it accidentally? Let’s aim at neither.
Many people have seen the sci-fi movies The Terminator and Terminator 2. They were made before the WWW, and before Skynet seemed like a possibility. Now we have 3D printers, we have walking and flying robots that can shoot, and we have a global intelligence network those machines connect to directly. We need to be very cautious in artificial intelligence development over the coming years, or a small group of people could make a mistake that costs millions (billions?) of lives.