Ambiguous expressions pervade language. Moreover, it appears that speakers don’t always avoid speaking ambiguously. So how do we manage to communicate at all? And why are we often oblivious to the pervasiveness of this ambiguity?
Reframing the problem
One answer to these questions is to reframe the problem: perhaps, some might say, language is not truly that ambiguous. That is, many expressions (words, sounds, phrases, etc.) appear ambiguous in isolation, but are perfectly clear in context. From one perspective, this is almost trivially true. After all, people do communicate successfully, and rarely even notice the potential for ambiguity; the very existence of puns, and the observation that not all people understand them, suggests that comprehenders don’t always explicitly activate multiple interpretations of an ambiguous expression. Therefore, this argument goes, language is unambiguous. The alleged “pervasiveness” of ambiguity is an illusion, perpetuated by overeager linguists and pun enthusiasts who insist on taking things out of context.
But I think this answer misses the point somewhat. The fact that humans communicate successfully doesn’t entail that our communication signals are therefore unambiguous; it entails that comprehenders possess the capacity to routinely disambiguate signals with multiple interpretations––or “pre-activate” the most likely interpretation––given the context. There’s nothing “intrinsically disambiguating” about any given piece of contextual information. Rather, a piece of contextual information is only potentially disambiguating: it does its work only when considered by a comprehender capable of deploying it. Arguing otherwise conflates the interpretation of a signal (e.g. the cognitive processes a comprehender engages) with the signal itself.
This becomes obvious when comparing the drastically different comprehension abilities of humans and machines. Consider the case of homophones, like bank (e.g. a river-bank vs. a financial institution). The sentence “I went to town to deposit my check at the bank” likely seems perfectly clear to most human comprehenders––but that’s only because humans understand that checks can’t be deposited in the bank of a river. Unless a machine is equipped with this knowledge, that machine will generate at least two “parses” of that sentence, with radically different implications.
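The gap between the signal and the knowledge needed to interpret it can be made concrete with a toy sketch. This is not a real NLP system; the lexicon, the cue sets, and the function names are all invented for illustration. The point is simply that a knowledge-free system retains every listed sense of "bank", while even a crude stand-in for world knowledge (hand-coded cue words) collapses the ambiguity:

```python
# Toy lexicon: "bank" carries two senses, so the signal itself is ambiguous.
SENSES = {
    "bank": ["financial institution", "river edge"],
}

# Hand-coded stand-in for world knowledge: words that support each sense.
# (These cue sets are invented for this example.)
CONTEXT_CUES = {
    "financial institution": {"deposit", "check", "loan", "teller"},
    "river edge": {"fishing", "water", "shore", "canoe"},
}

def possible_senses(word):
    """A knowledge-free system: every listed sense survives."""
    return SENSES.get(word, [word])

def disambiguate(word, sentence):
    """Keep only the senses whose cue words appear in the sentence."""
    tokens = set(sentence.lower().replace(".", "").split())
    candidates = SENSES.get(word, [word])
    supported = [s for s in candidates if CONTEXT_CUES.get(s, set()) & tokens]
    return supported or candidates  # fall back to all senses if no cue matches

sentence = "I went to town to deposit my check at the bank"
print(possible_senses("bank"))         # both senses survive without knowledge
print(disambiguate("bank", sentence))  # ['financial institution']
```

The disambiguation here lives entirely in `CONTEXT_CUES`, not in the sentence: delete that table and both "parses" return, which is exactly the situation a machine without the relevant knowledge is in.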
And even human comprehenders vary in the contextual information they use to solve problems of ambiguity. I recently ran a series of studies with my advisor, Benjamin Bergen, investigating how people understand indirect requests, e.g. when a speaker says “It’s cold in here”, but actually means “Turn on the heater” (Trott & Bergen, 2018). Specifically, we asked whether comprehenders’ decisions about the speaker’s intentions (e.g. whether they were making a request or not) depended on what the speaker knew about the world––for example, whether or not the speaker knew the heater was broken. Overall, participants’ decisions were strongly influenced by what a speaker knew or didn’t know, but there was considerable individual variability across participants in this effect.
Some participants always interpreted the speaker’s intentions in a manner congruent with what the speaker knew: “Turn on the heater” was a request when the speaker didn’t know about the broken heater, and was not a request when the speaker did know. But many participants made incongruent responses at least some of the time, interpreting “Turn on the heater” from their own, egocentric perspective. Critically, we found that participants’ likelihood of making a congruent response was predicted by their mentalizing ability––how skilled they were at reasoning about the mental states of others (which we measured in a separate task). (See Figure 1 for a visualization of this relationship.)
Towards a research program
Both of the examples above illustrate a basic point: the fact that contextual information (the meaning of the rest of a sentence; a speaker’s knowledge states) could disambiguate the meaning of a signal does not imply that:
- The signal is unambiguous.
- Comprehenders actually use that contextual information to disambiguate the signal.
Understanding this allows us to make an important distinction, which I believe is a helpful framing for research on ambiguity in general: 1) Information that could be used, in principle, to disambiguate the meaning of a signal; and 2) Information that is used by comprehenders. Almost certainly, there exist “contexts”––distributional patterns, low-level perceptual cues, discourse histories, interactional settings, etc.––that reliably predict the correct meaning of an ambiguous signal, but which aren’t actually used by comprehenders, or which are only used by some comprehenders some of the time.
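The distinction can be dramatized with a tiny simulation, entirely invented for illustration: a contextual cue perfectly predicts the intended meaning on every trial, so it *could* disambiguate, yet only one of two simulated comprehender strategies actually *uses* it (loosely echoing the congruent vs. egocentric responders above):

```python
import random

random.seed(0)
OPTIONS = ("request", "literal")

def make_trial():
    """Each trial pairs an intended meaning with a perfectly predictive cue."""
    intended = random.choice(OPTIONS)
    cue = intended           # the context reliably signals the answer
    return intended, cue

def cue_user(cue):
    return cue               # deploys the contextual information

def egocentric(cue):
    return "literal"         # ignores the cue; answers from its own perspective

def accuracy(strategy, n=1000):
    trials = [make_trial() for _ in range(n)]
    return sum(strategy(cue) == intended for intended, cue in trials) / n

print(accuracy(cue_user))    # 1.0: the cue could and does disambiguate
print(accuracy(egocentric))  # ~0.5: the very same cue goes unused
```

The cue's predictive power is identical in both cases; what differs is whether the comprehender deploys it, which is precisely why the two questions have to be studied separately.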
Thus, a suitable research program would involve the use of multiple methodologies to first identify potential sources of disambiguating information, then ask whether humans seem to use that information. (And, optionally, ask whether / how easily this information could be integrated into models of machine language comprehension.) This program is compatible with, and in fact encourages, the recognition that comprehenders likely vary in the strategies they employ to resolve a given ambiguity––indeed, that even the same comprehender might recruit different resources across different situations. Understanding when and how comprehenders vary grants us additional insight into the fundamental problem of ambiguity in language.
Ultimately, the goal of this research program is to understand, as best we can, how humans manage to communicate so successfully over such a fuzzy communication channel, which in turn informs our understanding of why ambiguity exists in the first place.
Trott, S., & Bergen, B. (2018). Individual Differences in Mentalizing Capacity Predict Indirect Request Comprehension. Discourse Processes.