How do comprehenders recover what a speaker intended when an utterance is ambiguous or admits multiple interpretations? Specifically, what information can they exploit to reduce uncertainty about an utterance’s intended meaning, including how it is said (prosody), nonverbal cues (e.g., eye gaze), and inferences about the speaker’s knowledge state? More broadly, to what extent is pragmatic inference tied to mental state inference?
The relationship between a word’s form and its meaning is largely arbitrary, but form-meaning pairings nonetheless show systematic, statistical trends. Do languages vary in their degree of systematicity, and if so, why? Furthermore, to what extent do speakers internalize the systematicity of their language, and how does this affect word learning and speech production/comprehension?
Trust and Human-Robot Interaction
As smart technology becomes more pervasive, it’s important that we develop appropriate levels of trust in our machines: we should trust them to do what they can do, but we shouldn’t assume they can do what they can’t. I hope to address this problem by answering two questions:
- What properties of a machine (its behaviors, degree of anthropomorphism, history of interactions, etc.) affect the trust we place in it and the expertise judgments we make about it? And what can these findings tell us about how we judge human expertise?
- Can we design machines that efficiently calibrate their users’ trust to an appropriate level?
Natural Language Understanding
My previous work at the International Computer Science Institute involved building a general framework for natural language understanding, i.e., producing action from language. In future work, I’d like to explore learning constructions dynamically, so as to make this system more flexible.