People often speak ambiguously. For example, I might say "It's cold in here" to mean "Please turn on the heater."
How do comprehenders make inferences about a speaker's intended meaning? Specifically, do comprehenders adopt a speaker's perspective when interpreting what they said, and to what extent is this process mediated by individual differences in mentalizing capacity?
Finally, can we apply what we learn about pragmatic inference in humans to build smarter, more conversational language interfaces?
Trott, S., & Bergen, B. (2017). A Theoretical Model of Indirect Request Comprehension. In Proceedings of the AAAI Fall Symposium Series on Artificial Intelligence for Human-Robot Interaction (AI-HRI). [Link]
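One way this kind of inference is often formalized (a toy sketch, not the specific model in the paper above) is as Bayesian reasoning over a speaker's possible intentions: the listener weighs how likely each utterance would be under each candidate intention, combined with a prior over intentions. The utterances, intentions, and probabilities below are purely illustrative:

```python
# A minimal Bayesian sketch of pragmatic inference over speaker intentions.
# The intentions, utterances, and probabilities are invented for illustration;
# this is not the model from the cited paper.

# P(utterance | intention): how likely a speaker with a given intention
# is to produce each utterance.
likelihood = {
    "request_heat": {"It's cold in here": 0.7, "Please turn on the heater": 0.3},
    "state_fact":   {"It's cold in here": 0.9, "Please turn on the heater": 0.1},
}

# Prior over intentions (e.g., requests are somewhat less common than statements).
prior = {"request_heat": 0.4, "state_fact": 0.6}

def infer_intention(utterance):
    """Return P(intention | utterance) by Bayes' rule."""
    joint = {i: prior[i] * likelihood[i].get(utterance, 0.0) for i in prior}
    total = sum(joint.values())
    return {i: p / total for i, p in joint.items()}

print(infer_intention("It's cold in here"))
# e.g. {'request_heat': ~0.34, 'state_fact': ~0.66} under these toy numbers
```

Under these made-up numbers the literal reading still dominates, but the request reading receives substantial probability; the empirical question is how (and how well) human comprehenders carry out something like this inference.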
The relationship between words and their meanings is mostly arbitrary, but there are systematic, statistical trends in form-meaning pairings. Do languages vary in their systematicity, and if so, why? Furthermore, to what extent is systematicity in a language represented in a language user's internal model of that language?
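One common way to operationalize systematicity (a sketch, not necessarily the measure used in my own work) is to ask whether words with similar forms tend to have similar meanings: compute pairwise form distances and pairwise meaning distances over a lexicon, then correlate the two. The lexicon and meaning vectors below are invented for illustration:

```python
# Toy sketch: quantify systematicity as the correlation between form distance
# and meaning distance across word pairs.
from itertools import combinations
import numpy as np

def edit_distance(a, b):
    """Standard Levenshtein distance between two strings."""
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,
                           dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[-1][-1]

# Hypothetical lexicon: word form -> meaning vector (e.g., from embeddings).
lexicon = {
    "cat":  np.array([0.9, 0.1, 0.0]),
    "bat":  np.array([0.7, 0.2, 0.1]),
    "dog":  np.array([0.8, 0.1, 0.2]),
    "idea": np.array([0.0, 0.9, 0.8]),
}

form_d, meaning_d = [], []
for w1, w2 in combinations(lexicon, 2):
    form_d.append(edit_distance(w1, w2))
    v1, v2 = lexicon[w1], lexicon[w2]
    # Cosine distance between meaning vectors.
    meaning_d.append(1 - np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Higher correlation = more systematic form-meaning mapping.
print(np.corrcoef(form_d, meaning_d)[0, 1])
```

With a real lexicon one would use many more words, meaning vectors from a distributional model, and a permutation test to assess whether the observed correlation exceeds chance.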
Trust and Human-Robot Interaction
As smart technology becomes more pervasive, it’s important that we develop the right levels of trust in our machines; we should trust them to do what they can do, but we shouldn’t assume that they can do something they can’t. In particular, I’m interested in the problem of habitability: the degree to which a natural language interface’s apparent or inferred capabilities map onto its actual capabilities. Which features lead humans to make incorrect inferences about a machine’s language abilities?
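As a purely illustrative sketch (the capability names below are hypothetical), one could quantify part of the habitability problem by comparing the capabilities users attribute to a system against the capabilities it actually has:

```python
# Illustrative sketch of a "habitability gap": how many of the capabilities
# users infer a system has does it actually support? Sets are hypothetical.
actual_capabilities = {"set_timer", "play_music", "report_weather"}
inferred_capabilities = {"set_timer", "play_music", "book_flight", "answer_trivia"}

overtrusted = inferred_capabilities - actual_capabilities   # expected but unsupported
coverage = len(inferred_capabilities & actual_capabilities) / len(inferred_capabilities)

print(f"Expected but unsupported: {sorted(overtrusted)}")
print(f"Fraction of inferred capabilities actually supported: {coverage:.2f}")
```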
Natural Language Understanding
My previous work at the International Computer Science Institute involved building a modular framework for natural language understanding: producing actions from language.
Trott, S., Appriou, A., Feldman, J., & Janin, A. (2015). Natural Language Understanding and Communication for Multi-Agent Systems. In AAAI Fall Symposium (pp. 137-141). [Download link]
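The framework itself is described in the paper above; as a rough, hypothetical illustration of the general idea (not the actual ICSI system), a modular language-to-action pipeline separates interpretation, which produces a structured semantic specification, from execution, which dispatches that specification to an action:

```python
# A highly simplified, hypothetical sketch of a modular language-to-action
# pipeline. This is not the actual ICSI framework; the patterns and actions
# are invented for illustration.
import re
from dataclasses import dataclass

@dataclass
class SemanticSpec:
    action: str
    arguments: dict

def interpret(utterance: str) -> SemanticSpec:
    """Interpretation module: map an utterance to a semantic specification."""
    match = re.match(r"move to the (\w+)", utterance.lower())
    if match:
        return SemanticSpec(action="move", arguments={"destination": match.group(1)})
    raise ValueError(f"Could not interpret: {utterance!r}")

def execute(spec: SemanticSpec) -> str:
    """Execution module: dispatch a semantic specification to an action."""
    handlers = {
        "move": lambda args: f"Navigating to {args['destination']}...",
    }
    return handlers[spec.action](spec.arguments)

print(execute(interpret("Move to the kitchen")))
# -> "Navigating to kitchen..."
```

The point of a modular design like this is that the interpretation and execution components can be developed, tested, and swapped independently.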