In any scientific field, researchers construct theoretical models of the phenomenon under consideration. Often, these models employ some sort of metaphor to ground the phenomenon in more familiar language. For example, biologists sometimes describe the immune system in terms of lock-and-key dynamics, cells in terms of factories, and DNA in terms of a language (Brown, 2003).
In a way, these metaphors serve almost as “mini-paradigms” (Kuhn, 1962), providing a common vocabulary with which a given community of practice can characterize and describe its domain of interest. Metaphors can also shift over time, likely reflecting modes of thought in the culture at large. For example, metaphors for the mind have often invoked the dominant technology of the day, including parchment, clocks, electrical signals, telephones, centrifugal governors, and computers. And currently, Cognitive Science enjoys––for better or for worse––many competing explanatory paradigms, ranging from the “Bayesian brain” to dynamical systems (Kallens & Dale, 2018).
Critically, just as has been demonstrated in other domains, scientific metaphors can systematically influence the way that people think about a scientific concept (Gentner & Gentner, 1983). This provides a more mechanistic explanation for a weak version of the incommensurability of paradigms that Kuhn (1962) described––that is, scientific communities that rely upon different metaphors (or paradigms) sometimes face challenges reconciling two fundamentally different ways of looking at the world.
One particularly frequent metaphor, which pervades practically all scientific fields, is the ascription of agency to entities typically conceived of as non-agentive.
Agentivity in Science
At its core, science is about building explanations. These explanations are often meant to be causal, e.g. “X happens, which causes Y”. The process of characterizing (and inferring) causal relationships obviously varies across domains, but one consistent factor is our reliance on agentive descriptions. For some reason, humans have a tough time talking about causation without resorting to metaphors for agentivity.
Once we start looking for them, we see that these metaphors pervade all scientific fields, no matter how “hard” or “soft”, across all units of analysis:
- Physical objects (or even particles!) are described as obeying certain laws or even trying to accomplish certain goals;
- Entire books have been written––and paradigms constructed––about the selfishness of genes;
- Evolutionary processes are often characterized teleologically, e.g. as optimizing for particular desired outcomes;
- Economic outcomes are described in terms of market forces, or even as guided by an invisible hand;
- Social outcomes (e.g. economic inequality) are attributed to social or systemic forces, which exert a real, causal influence on individuals;
- Much of the Behaviorist debate in early-to-mid-20th-century Psychology was rooted in the desire not to ascribe agency to entities, e.g. to resist describing the innards of “the black box”.
The list goes on.
One argument that I’ve encountered, and to which Riskin (2016) provides an apt retort, is that while we talk about processes in terms of agentivity and intentionality, we don’t really mean it that way. These metaphors are stand-ins, essentially, which we use to simplify our casual conversations––and we take it on faith that a non-agentive explanation will come along at some unspecified later date. And of course, we never use this language in “real” scientific publications.
But I think there are two retorts to this argument. First, what is scientific discourse? Is it just the body of published papers on a topic, or is it the sum total of ways in which a community of practicing scientist-humans communicate about that topic, and even think about that topic? Because I’d argue that these “casual” agentive construals also influence how we think about a domain, and presumably influence how we investigate that domain. Second, as Riskin (2016) notes, scientists do still use agentive language in published papers––it’s just that there’s a hierarchy of acceptability regarding which causal verbs are permitted. So biologists can write about genes regulating or controlling particular processes, which surely implies or at least inherits a sense of agency, but they can’t write about genes wanting processes to go a certain way.
I think part of the challenge comes from language itself: causation and agentivity are intimately connected even in the way we speak, making it difficult to disentangle the two.
Language and Causality
Language is generally used to describe events and their participants. These events are often (but not always) causal, e.g. one participant is making something happen to (or with) another participant. English, for example, has many verbs that canonically express causal relationships, in which an “agent” does something to a “patient” (e.g. kick, shove, push, bite), sometimes resulting in a “change of state” (e.g. John knocked the cup off the table).
The framework of construction grammar (Goldberg, 2006) holds that this information––who did what to whom––is provided not only by verbs, but by particular grammatical constructions. A construction is defined as a consistent pairing of grammatical forms with particular meanings. For example, transitive constructions in English typically convey a causal relationship. This is true both for constructions containing a canonically causal verb:
John knocked the cup off the table.
And also for constructions containing verbs usually not used to express causation:
John sneezed the napkin off the table.
It’s not like the verb “sneeze” implies that the “sneezing” action is being applied to some patient. But it is possible for the expulsion of air to produce observable changes in the world. Thus, we can imagine a situation in which John’s sneeze causes the napkin to fall off the table; this information isn’t encoded in the individual words in the sentence, but rather in the construction.
What does this have to do with agentivity? My point is that agentivity is deeply embedded in the way we talk about causation. In turn, this suggests that when we deploy these same constructions (e.g. a transitive verb phrase) to discuss or posit abstract relationships between theoretical entities, it’s possible that we’re imbuing those entities with some degree of agentivity. For example, we sometimes describe systemic, large-scale “forces” this way:
The market guides business growth with an invisible hand.
Poverty pushes its victims into crime.
And this isn’t just about systemic forces. We deploy the same constructions (and ascribe the same sense of agency) when describing causal interactions at the molecular level:
The immune system captures and neutralizes pathogen threats.
Sufficient input pushes an entire neural network into action.
Genes guide behavior.
Of course, few scientists would imbue constructs like “the market” or “genes” with anything like intentionality. It’s just that it’s very hard––impossible, perhaps––to describe causal relationships without recourse to constructions that convey causality, and these constructions frequently also imply a sense of agency. The extent to which these metaphors influence our way of thinking is a matter of empirical investigation; maybe they affect us in subtle, hard-to-detect ways, or maybe they really are just “stand-ins”. Either way, it seems like agency and causality are intertwined in our language, and as a consequence, may be intertwined in our scientific models.
The lingering question of “why”
Children (and adults) sometimes play a game of “why”. You start with a simple question, like “why are there yellow lines on the middle of the road?” With each subsequent answer, you continue to ask why: Why do we need to separate the lanes? Why might drivers run into each other? Left unchecked, these questions escalate in scope over time: Why do we drive places? Why do we work? Why do we live?
The lesson––if one can be found––is that every answer poses another question. There’s no firm ground, no “base”, on which everything rests. In principle, some scientists and philosophers might argue that explanations of “how” ultimately reduce to physics as their “base” level. But when it comes to “why”, we either:
1. Make explicit reference to agency (e.g. “X wants to do Y in order to Z”);
2. “Offload” the issue of agency onto some other construct (e.g. “The genetic drive for replication causes X to do Y.”);
3. Ignore the question altogether.
In The Restless Clock, Riskin (2016) notes that most modern scientists take the third of these routes; science purports to deliver mechanistic explanations, in which actions and behavior are described in terms of interacting component parts or processes, with no reference to agency. The generally accepted approach is to describe biological organisms in terms of passive machinery. But she argues that, historically at least, this approach was rooted in an appeal to the “Divine” as a source of agency. She locates the origin of this “offloading” in the 17th and 18th centuries: for example, William Coward described humans as “pieces of mechanism” constructed by an Almighty God, Robert Boyle referred to human bodies as “living automatons”, and Henry More argued that the perfection of the human form necessarily implied an intelligent designer. By construing God as the “base” level of explanation for why, all other behavior could be characterized in purely mechanistic terms.
Riskin’s point (it seems to me) isn’t necessarily that scientists are somehow implicitly continuing this paradigm. Rather, it’s that the paradigm in which we operate––that of “passive” mechanism––has its roots in a form of explanation that most scientists would explicitly reject.
The Takeaway (?)
In this post, I’ve tried to make the following points. First, practically all scientific fields attempt to construct causal explanations for phenomena in the world. Second, the way we talk about causality is intimately connected to the way we talk about agency; this might result in ascribing agency to processes or entities we don’t actually think of as genuinely agentive. And third, explanations of phenomena in terms of “passive mechanism” rarely, if ever, answer the question of why––precisely because these mechanistic explanations were made possible, historically, by offloading the question of why onto a Supreme Being.
I’m not really sure whether any of this “means” anything. This post is largely my attempt to connect a few things I’ve been thinking about. That said, here’s my attempt at a takeaway:
Is it problematic that agentive language pervades scientific descriptions of purportedly non-agentive processes? My own view is that it’s not. These metaphors are models, like anything else. But I do think it’s important to recognize and be aware of them, both as practitioners and as public communicators. Agentive metaphors obviously have their place in scientific discourse, so long as we remember that the map is truly just a map.
Brown, T. L. (2003). Making truth: Metaphor in science. University of Illinois Press.
Dale, R., Dietrich, E., & Chemero, A. (2009). Explanatory pluralism in cognitive science. Cognitive Science, 33(5), 739-742.
Gentner, D., & Gentner, D. R. (1983). Flowing waters or teeming crowds: Mental models of electricity. In Mental models (pp. 99-129).
Goldberg, A. E. (2006). Construction grammar. In Encyclopedia of Cognitive Science.
Hauser, D. J., & Schwarz, N. (2015). The war on prevention: Bellicose cancer metaphors hurt (some) prevention intentions. Personality and Social Psychology Bulletin, 41(1), 66-77.
Hendricks, R., & Boroditsky, L. (2015). New space-time metaphors foster new mental representations of time. In CogSci.
Kallens, P. C., & Dale, R. (2018). Exploratory mapping of theoretical landscapes through word use in abstracts. Scientometrics, 116(3), 1641-1674.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Riskin, J. (2016). The restless clock: A history of the centuries-long argument over what makes living things tick. University of Chicago Press.
Thibodeau, P. H., & Boroditsky, L. (2013). Natural language metaphors covertly influence reasoning. PLoS ONE, 8(1), e52961.
 And recent approaches in Artificial Intelligence, in turn, attempt to model particular abstractions of “neural computation” (e.g. Hebbian learning). This leads to an interesting reverberation of construals across domains: artificial neural networks are developed to approximate brain computations; brain computations are then modeled as increasingly complex artificial neural networks.
 A by-no-means comprehensive list includes: crime (Thibodeau & Boroditsky, 2013), time (Hendricks & Boroditsky, 2015), and cancer (Hauser & Schwarz, 2015).
 And this isn’t necessarily the sort of challenge that can always be solved with data, because the problem runs deeper than competing hypotheses. This is why two communities end up “talking past” each other, even when both are considering the same set of evidence. Rather than attempting to “win”, some (Dale et al., 2009) have argued for an “explanatory pluralist” approach, particularly in studies of the mind––in which we allow for multiple, sometimes incompatible perspectives, which shed light on the same “question” in different ways.
 As in the level of detail at which a causal relationship is described––e.g. the “atomic unit” of a domain.
 A construal which in turn has led to misunderstandings and faulty generalizations––e.g. “our genes are selfish, therefore so are we”; or “our genes are selfish, so it’s up to society to teach generosity”. This is a case of the map not only becoming the territory, but spreading insidiously across other territories.
 Of course, there are many other constructions that express causation in English and across languages. I use the transitive construction only as a well-known, relatively intuitive example.
 Though even then, we’ve chosen to posit some theoretical construct as our “base” level––atoms, electrons, quarks, etc.