One striking example from a recent open-access paper: mistaking a teapot shape for a golf ball because of its surface features:
The networks did “a poor job of identifying such items as a butterfly, an airplane and a banana,” according to the researchers. The explanation they propose is that “Humans see the entire object, while the artificial intelligence networks identify fragments of the object.” News, “Researchers: Deep Learning vision is very different from human vision” at Mind Matters
You really want this thing driving your car? Seriously, machine learning will get better, but if a lot of what we hear sounds like hype, that's because it is.
See also: Guess what? You already own a self-driving car. Tech hype hits the stratosphere: yes, the car you own today is probably “self-driving,” and you may not know it. But that is because of the creative ways the term can be defined.