Technology is becoming smarter. Our phones talk back to us, our Netflix accounts recommend movies tailored to our tastes, and soon enough, our cars will drive themselves. Some of the decisions our machines make will be trivial (such as which movie to watch), but many will have a greater impact on our lives. Self-driving cars, for example, will need to decide when to change lanes, which routes to take, and even how to avoid an accident. As users, we must judge whether these are the right decisions; an essential element of our relationship with machines is therefore the level of trust we place in them.
Human language is a strange phenomenon. Somehow, we’re able to convey complex ideas through a fuzzy communicative channel. Even setting aside the remarkable machinery that transforms sound waves into neural signals, how does meaning emerge from those signals? And how do we talk about abstract concepts like “Justice”, “Truth”, or even “Concepts” themselves?