Popular culture often depicts intelligent machines as coldly rational, capable of making “objective” decisions that humans can’t. More recently, however, there has been increased attention to the presence of bias in supposedly objective systems, from image recognition to models of human language. Often, these biases instantiate actual human prejudices, as described in Cathy O’Neil’s Weapons of Math …
Bias is real, and often harmful. It has been shown to manifest in hiring decisions, in the training of machine learning algorithms, and most recently, in language itself. Three computer scientists analyzed the co-occurrence patterns of words in naturally occurring text (obtained from Google News) and found that these patterns seem to reflect implicit human biases.
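The kind of analysis described above is often operationalized as an association test over word vectors: a target word is compared, by cosine similarity, to two sets of attribute words, and the difference in mean similarity is taken as a bias score. The sketch below illustrates the idea with toy two-dimensional vectors; the words, vectors, and function names are illustrative assumptions, not the researchers' actual data (real tests use embeddings trained on large corpora such as Google News).

```python
# Minimal sketch of an embedding association score, assuming toy vectors.
# Real analyses use high-dimensional embeddings (e.g. word2vec) instead.
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(w, A, B):
    # Mean similarity of word vector w to attribute set A,
    # minus its mean similarity to attribute set B.
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

# Hypothetical 2-d embeddings, chosen only to illustrate the mechanics.
vectors = {
    "engineer": [0.9, 0.1],
    "nurse":    [0.1, 0.9],
    "he":       [1.0, 0.0],
    "she":      [0.0, 1.0],
}

A = [vectors["he"]]   # attribute set A
B = [vectors["she"]]  # attribute set B

for word in ("engineer", "nurse"):
    print(word, round(association(vectors[word], A, B), 3))
```

A positive score means the word sits closer to set A than to set B in the vector space; aggregating such scores over curated word lists is how studies quantify whether co-occurrence statistics encode human-like associations.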