Research Projects

Pragmatic inference

People often speak ambiguously. For example, I might say “It’s cold in here” to mean “Please turn on the heater.”

How do comprehenders make inferences about a speaker’s intended meaning? Specifically, do comprehenders adopt a speaker’s perspective when interpreting what they said, and to what extent is this process mediated by individual differences in mentalizing capacity?

Finally, can we apply what we learn about pragmatic inference in humans to build smarter, more conversant language interfaces?

Relevant papers: 

Trott, S., & Bergen, B. (2018). Individual Differences in Mentalizing Capacity Predict Indirect Request Comprehension. Discourse Processes. [Link] [Experimental materials] [Link to PDF]

Trott, S., & Bergen, B. (2017). A Theoretical Model of Indirect Request Comprehension. In Proceedings of the AAAI Fall Symposium Series on Artificial Intelligence for Human-Robot Interaction (AI-HRI). [Link]

(Non-)Arbitrariness in Language

The morpheme is generally considered the basic unit of meaning in a language; the individual phonemes that compose it are thought to bear an arbitrary relationship to that meaning.

However, there is evidence for sub-morphemic systematicity in form-meaning pairings, such as phonaesthemes (e.g. the onset gl–, as in glow, glitter, and gleam, which tend to relate to light or vision). Which linguistic and non-linguistic factors affect the evolution of systematicity in a language, and how does the presence of non-arbitrariness promote (or hinder) learning and memory?

Relevant papers and resources:


Trust and Human-Robot Interaction

As smart technology becomes more pervasive, it’s important that we develop appropriate levels of trust in our machines: we should trust them to do what they can do, but not assume capabilities they lack. In particular, I’m interested in the problem of habitability: the degree to which a natural language interface’s apparent or inferred capabilities map onto its actual capabilities. Which features lead humans to make incorrect inferences about a machine’s language abilities?

Natural Language Understanding

My previous work at the International Computer Science Institute involved building a modular framework for natural language understanding: producing action from language.

Relevant papers: 
Trott, S., Appriou, A., Feldman, J., & Janin, A. (2015, September). Natural language understanding and communication for multi-agent systems. In AAAI Fall Symposium (pp. 137-141). [Link] [Code]