What we talk about when we talk about bias in A.I.

Popular culture often depicts intelligent machines as coldly rational, capable of making “objective” decisions that humans can’t. More recently, however, there has been growing attention to the biases embedded in supposedly objective systems, from image recognition to models of human language. Often these biases instantiate real human prejudices, as Cathy O’Neil describes in Weapons of Math Destruction: statistical models engineered to predict recidivism rates, for example, draw on information that would never be admissible in a courtroom, and in doing so perpetuate cross-generational cycles of incarceration.