Earning our Trust

Technology is becoming smarter. Our phones talk back to us, our Netflix accounts make custom movie recommendations, and soon enough, our cars will be able to drive themselves. Some of the decisions our machines make will be trivial (such as which movie to watch), but many will have a real impact on our lives. For example, self-driving cars will need to decide when to change lanes, which route to take, or even how to avoid an accident. As users, we must determine whether we think these are the right decisions; thus, an essential element of our relationship with machines is the level of trust we place in them.

There are many ways to define “trust”, but the usage here refers to one’s estimation of a machine’s level of competence, i.e. the set of expectations we have about its abilities. Expecting too much from a machine amounts to over-trust, while expecting too little amounts to under-trust. The ideal scenario is that a user’s trust in a machine is perfectly calibrated to the machine’s actual capabilities (see the figure below); in other words, we correctly predict what the machine can do.
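To make the idea of calibration a bit more concrete, here is a minimal sketch in Python. The 0–1 competence scale and the tolerance are illustrative assumptions for this post, not a measurement anyone has standardized.

```python
# Minimal sketch of trust calibration: compare what a user expects a machine
# to handle against what it can actually handle in one task domain.
# The 0-1 scale and the tolerance are illustrative assumptions.

def calibration(expected: float, actual: float, tol: float = 0.05) -> str:
    """Classify a user's trust given expected vs. actual competence (0-1)."""
    if expected > actual + tol:
        return "over-trust"    # expecting more than the machine can deliver
    if expected < actual - tol:
        return "under-trust"   # expecting less, so the machine goes unused
    return "calibrated"        # expectations roughly match capability

# A driver who assumes the car handles unmarked intersections almost always
# (0.9) when it actually manages them less than half the time (0.4):
print(calibration(expected=0.9, actual=0.4))  # -> over-trust
```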

Under-trust renders our intelligent machines useless, and potentially even a hindrance. For example, it’s not hard to imagine a future in which self-driving cars are actually safer than human-operated vehicles, but they will never be widely accepted if people don’t trust them. Distrust of self-driving cars could also lead to people blaming them for errors, even when the error is not the vehicle’s fault.

Over-trust, on the other hand, can result in minor frustration (such as yelling at Siri when she doesn’t understand what we say) or in more extreme cases, tragedy. For instance, there are still many things that self-driving cars struggle with (and might continue to struggle with even after commercialization), such as understanding unwritten “rules of the road”. Expecting or relying on our self-driving car to understand these unwritten rules could lead to serious accidents.

So how do we convince users to trust machines more, but not too much[1]?

Finding a Balance

[Figure: oped_graphic]

Trust (or estimations of competence) should scale linearly with the machine’s abilities in a given domain.

I think one solution could be well-designed language interfaces.

A well-designed language interface should clearly communicate what a machine is good at, and what it is not good at. This doesn’t mean that the machine should literally list its strengths and weaknesses; people would quickly grow bored, and such a list could never be comprehensive anyway. Rather, the machine should use language that facilitates the correct inferences about its levels of competence. In the AI community, this is sometimes called the habitability problem – and while it’s far from solved, thoughtful design will take us much closer to the solution.

One way to approach this is by considering the content of the machine’s language interface – what it talks about, and what it doesn’t. A machine should obviously be able to fluently discuss issues directly related to its core functionality. For example, an intelligent car should be able to communicate information about the state of the car, describe observations it makes about the road and other road users, suggest actions to the driver, and answer questions about its plans. But if the driver says something ambiguous that the car doesn’t understand, it needs to acknowledge this lack of understanding, and also communicate exactly what was unclear (as best it can). Rather than defaulting to a Google search (as Siri often does), the car will need to ask for clarification. Over time, this will help the driver learn the car’s limitations – in much the same way that people gauge the expertise and limitations of their interlocutors during conversation.
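As a rough sketch of that design choice, the snippet below shows a dialogue handler that admits when it did not understand and points at the unclear fragment instead of falling back to something generic. The intent parser, its confidence scores, and the thresholds are all invented for illustration; they are not any real vehicle’s API.

```python
# Hypothetical sketch: a language interface that acknowledges what it did not
# understand and asks for clarification, rather than falling back to a
# generic response (e.g. a web search). The parser output, intents, and
# thresholds are illustrative assumptions, not a real system's API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ParsedCommand:
    intent: Optional[str]        # e.g. "change_lane", "report_state", or None
    unclear_part: Optional[str]  # the fragment the parser could not resolve
    confidence: float            # parser's confidence in the intent, 0-1

KNOWN_INTENTS = {"change_lane", "report_state", "describe_route"}

def respond(cmd: ParsedCommand) -> str:
    # Unknown or low-confidence request: say so, and say what was unclear.
    if cmd.intent not in KNOWN_INTENTS or cmd.confidence < 0.6:
        detail = f' I didn\'t follow "{cmd.unclear_part}".' if cmd.unclear_part else ""
        return "I'm not sure what you meant." + detail + " Could you rephrase?"
    return f"Okay: {cmd.intent.replace('_', ' ')}."

# Over time, answers like this teach the driver where the car's
# understanding actually ends.
print(respond(ParsedCommand(intent=None, unclear_part="take the scenic way", confidence=0.3)))
# -> I'm not sure what you meant. I didn't follow "take the scenic way". Could you rephrase?
print(respond(ParsedCommand(intent="change_lane", unclear_part=None, confidence=0.92)))
# -> Okay: change lane.
```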

A language interface can also help communicate the machine’s uncertainty about a decision or upcoming problem. Intelligent technology often “thinks” in terms of probability, but humans are notoriously bad at reasoning about probabilities. The use of qualifiers or hedges in language, such as “I think…” or “It seems like…”, can help communicate to the human user that the machine is not entirely confident about its decision – meaning it might be time for the human to step in.
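One way to sketch this is to map the machine’s internal confidence onto increasingly hedged phrasings. The thresholds and wordings below are invented for illustration, not a standard.

```python
# Sketch: turning an internal probability into hedged language.
# The thresholds and wordings are illustrative choices, not a standard.

def hedge(statement: str, confidence: float) -> str:
    """Wrap a statement in a hedge that reflects the machine's confidence (0-1)."""
    lowered = statement[0].lower() + statement[1:]
    if confidence >= 0.9:
        return statement                                    # state it plainly
    if confidence >= 0.6:
        return f"I think {lowered}"                         # mild hedge
    if confidence >= 0.3:
        return f"It seems like {lowered} I'm not certain."  # stronger hedge
    return f"I'm not sure, but {lowered} You may want to take over."

print(hedge("There is a cyclist ahead.", 0.95))  # There is a cyclist ahead.
print(hedge("There is a cyclist ahead.", 0.65))  # I think there is a cyclist ahead.
print(hedge("There is a cyclist ahead.", 0.40))  # It seems like there is a cyclist ahead. I'm not certain.
```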

The Takeaway

Of course, language is not the only feature designers should worry about. There are many factors affecting the trust people place in machines [6], including the machine’s physical appearance, memories of interactions with similar machines, and obviously observations of the machine’s actual performance. But language is a central component of human interaction, and it seems like a promising avenue towards balancing the level of trust that people place in their machines.

In the meantime, we consumers should be careful just how much trust we place in our machines. Part of this comes from understanding exactly what our machines are good at, and what they aren’t good at [5]. Until language interfaces can effectively convey this information, it’s probably safest to err on the side of under-trust; at worst, under-trust results in the disuse of a machine, whereas over-trust can have much more ominous consequences.


References:

  1. Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. http://doi.org/10.1016/j.jesp.2014.01.005
  2. Hu, Z., Halberg, G., Jimenez, C. R., & Walker, M. A. (2014). Entrainment in pedestrian direction giving: How many kinds of entrainment? Proceedings of the 5th International Workshop on Spoken Dialog Systems, 90–101.
  3. Spaulding, S., Zhu, W., & Balicki, D. (n.d.). The effects of robotic agency on trust and decision-making.
  4. Oleson, K. E., Billings, D. R., Kocsis, V., Chen, J. Y. C., & Hancock, P. A. (2011). Antecedents of trust in human-robot collaborations. 2011 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA 2011), 175–178. http://doi.org/10.1109/COGSIMA.2011.5753439
  5. Norman, D. (2017). Technology forces us to do things we’re bad at: Time to change how design is done. FastCoDesign.com. https://www.fastcodesign.com/3067411/technology-forces-us-to-do-things-were-bad-at-time-to-change-how-design-is-done
  6. Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 517–527. http://doi.org/10.1177/0018720811417254
  7. Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. http://doi.org/10.1518/hfes.46.1.50.30392

 

[1] Re: increasing trust, some experimental research suggests that making machines more humanlike will lead us to trust them more. One study, using a driving simulator, showed that anthropomorphizing an autonomous vehicle – giving it a name, a gender, and an expressive voice – led passengers to place more trust in it. Furthermore, when the vehicle was involved in an unavoidable accident (i.e. an accident that visibly was not the vehicle’s fault), passengers were less likely to blame the car when it was anthropomorphized than when it had no humanlike features [1]. Other research suggests that training a machine’s language interface to talk more like the human it interacts with makes it seem more natural, helpful, and trustworthy [2].

 
