Can a computer program be racist?

Yes, no, and not really. The machine has no opinion. It also has no mind:

Software engineer Yonatan Zunger offers the example of high school student Kabir Alli’s Google image searches in 2016 for “three white teenagers” and “three black teenagers.” The request for white teens turned up stock photography for sale; the request for black teens turned up local media stories about arrests. The ensuing anger over deep-seated racism submerged the fact that the results were not a decision someone made. They were an artifact of what people were looking for: “When people said ‘three black teenagers’ in media with high-quality images, they were almost always talking about them as criminals, and when they talked about ‘three white teenagers,’ they were almost always advertising stock photography.”*

Zunger cautions that, of course, “Nowadays, either search mostly turns up news stories about this event.” There is no simple way to automatically remove bias, because much of it comes down to human judgment. Most Nobel Prize winners are men, but a thoughtful human being will not assume that a winner “must be” a man. A machine learning system “knows” nothing other than the data it is given. It certainly doesn’t “know” that it might be creating prejudice or giving offense. If we want to prevent that, we must constantly monitor its output.
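The point can be made concrete with a toy sketch. The captions below are made up for illustration (they are not real search data), but they show how a purely frequency-based system, given skewed source material, will dutifully reproduce the skew: the bias lives in the data, not in any line of the code.

```python
from collections import Counter

# Hypothetical captions standing in for the media a search engine learns from.
captions = [
    "three black teenagers arrested after incident",
    "three black teenagers arrested downtown",
    "three white teenagers smiling stock photo",
    "three white teenagers stock images for sale",
]

def top_association(captions, phrase):
    """Return the word most often co-occurring with `phrase`."""
    counts = Counter()
    for caption in captions:
        if phrase in caption:
            for word in caption.replace(phrase, "").split():
                counts[word] += 1
    return counts.most_common(1)[0][0]

print(top_association(captions, "three black teenagers"))  # arrested
print(top_association(captions, "three white teenagers"))  # stock
```

Nothing in `top_association` mentions race; it only counts words. Yet its output mirrors the skew of its inputs, which is why auditing the output, not just the code, is the only way to catch the problem.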

In short, there will always be a job or a business for a person with good judgment. You can’t automate it.

* Stock photography: The “faceless face” photos that adorn racks of pamphlets on the importance of good nutrition or volunteering are sold by stock photography houses. The fact that the face probably doesn’t look very much like the kid who lives across the road is part of the package that the publisher is buying. If it is a face at all, you naturally look at it. But you aren’t supposed to see anything that distracts you from the message.

See also: Did AI teach itself to “not like” women?


Ethics for (or against) an Information Society