A recurring question in both scientific and public discourse is whether any given property of an organism is innate or learned. This debate, usually framed in terms of Nature vs. Nurture, often centers around properties of human behavior and cognition: intelligence, language, morality, mathematics, and so on. But while this dichotomous framing perhaps seems obvious to us now, when did the question first arise? And is it really the best way to investigate these properties?
Origins of the Debate
The notion of innateness, as it pertains to intelligence and knowledge, can be traced back at least to Plato (and is probably much older). In the Meno, Plato argues – through the mouthpiece of Socrates – that human knowledge is essentially prewired. Rather than learn, humans remember; that is, because our soul is actually immortal, the process we usually refer to as knowledge acquisition is better characterized as knowledge recollection. Socrates demonstrates this by questioning a young slave on his knowledge of geometry. Through a series of well-designed questions, Socrates elicits the correct answers from the boy, as if the boy had known them all along.
The idea of “learning as remembering” probably seems odd, particularly as Plato’s account is wrapped up in a notion of an eternal soul. But the concept of innateness has persisted through the ages, recurring in the writings of Descartes (who also argued for “innate knowledge”) and Leibniz (who argued that certain logical or mathematical truths are innate), and eventually manifesting in scientific discourse. Noam Chomsky’s Universal Grammar is one particularly well-known example of an argument for innate knowledge. Specifically, Chomsky argues that human language is too complex to learn purely through simple associative mechanisms, considering the sparsity of informative input or corrections to children learning language – this is sometimes called the poverty of the stimulus (POTS) argument – and that at least some part of human language must be innate. That is: 1) infants don’t seem to require explicit instruction to learn language; and 2) they rarely receive examples of “incorrect” input; and yet 3) they seem to acquire a pretty solid grasp of language nonetheless. Thus, Chomsky argues, something must be innate or pre-wired into the human brain – a language-specific faculty.
The alternative approach – sometimes referred to as Empiricist – takes the position that human knowledge arises from our experiences in the world. Perhaps most famously, in An Essay Concerning Human Understanding (1689), John Locke wrote that the human mind is born as a “blank slate”; constructs such as knowledge, personality, intelligence, etc., are shaped via a person’s sensorimotor experiences. Like the idea of innateness, the notion of a “blank slate” is quite old, and has also heavily influenced scientific paradigms. In contrast to Chomsky’s Universal Grammar, various linguists and psychologists have argued that humans acquire language primarily through statistical learning – that, instead of having a language “module” pre-wired with certain grammatical rules, etc., we come to associate different linguistic elements (sounds, words, etc.) with each other and with elements in the “real world” (objects, actions, social situations, etc.). Indeed, infants seem to be able to learn word boundaries through exposure alone (Saffran et al., 1996), and even early artificial neural networks could learn an approximation of simple grammars (Elman, 1993). Under this explanation, humans didn’t necessarily evolve a language-specific faculty per se; rather, language is learned using the same associative mechanisms required for learning other skills and knowledge.
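The statistical-learning account can be made concrete. In Saffran et al.’s experiments, infants heard a continuous syllable stream in which transitional probabilities (TPs) are high within words and low at word boundaries. The toy sketch below – with a hypothetical three-word mini-lexicon that is only loosely inspired by their stimuli, not the actual experimental materials – shows how simple co-occurrence counts alone can expose those boundaries.

```python
# A toy sketch of statistical word segmentation via transitional
# probabilities (TPs), loosely inspired by Saffran et al. (1996).
# The mini-lexicon and syllables here are illustrative only.
import random
from collections import Counter

WORDS = ["tupiro", "golabu", "bidaku"]  # hypothetical three-word lexicon

def syllabify(word):
    """Split a word into two-letter syllables (a toy assumption)."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

# Build a continuous syllable stream by sampling words at random,
# mimicking the unsegmented speech stream infants were exposed to.
random.seed(0)
stream = []
for _ in range(300):
    stream.extend(syllabify(random.choice(WORDS)))

# Count how often each syllable occurs and how often each ordered
# pair of adjacent syllables occurs.
pair_counts = Counter(zip(stream, stream[1:]))
syllable_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """Estimate P(next syllable = b | current syllable = a)."""
    return pair_counts[(a, b)] / syllable_counts[a]

# Within a word, each syllable predicts the next perfectly (TP = 1.0);
# across a word boundary, the next syllable is unpredictable (TP ~ 1/3).
print(transitional_probability("tu", "pi"))  # within "tupiro": 1.0
print(transitional_probability("ro", "go"))  # across a boundary: roughly 0.33
```

Dips in transitional probability mark candidate word boundaries, all from raw co-occurrence statistics – no grammar module required.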
So clearly, there are different camps. Some people think certain things are innate; other people think they are learned. You might reply: “But academia is littered with dichotomous ideologies! Is this really a problem? And even if it’s dichotomously framed, surely at least scientists in the field are aware of the nuance, right?”
One of the biggest problems I see with this debate is its terminological ambiguity.
Ask yourself this question: What does it mean to be “innate”?
Does “innate” mean biological? That is, if something – such as language – seems to have a biological basis, does that make it innate? To me, that seems to miss the point. As biological organisms, everything we do is biological. So it’s hardly a revelation to say that language, or some other human ability, has a biological basis.
Does “innate” mean genetic? Obviously there isn’t a single gene for language (or something even vaguer and hard to pin down like “intelligence”), but to what extent can these abilities or features be correlated with particular genotypes? Finding these correlations is certainly interesting, and will definitely inform our understanding of how particular cognitive abilities arise, but genes don’t exist in a vacuum. During development there’s a complex interplay both within the genome, and between the genome, the developing organism, and the environment – and importantly, this development never really stops.
And the answer isn’t simply “both” (re: Nature vs. Nurture). While that is obviously true, people still want to know what the “contribution” of each side is. But it’s actually very difficult to draw a line in the sand between the two sides in the first place – which makes defining each side’s contribution very challenging as well.
A better approach, it seems to me, is to first carefully define the cognitive ability you’re interested in (e.g. what do we mean by “language”?), then begin identifying which other cognitive abilities seem to be involved or necessary. An example of this approach is Rafael Núñez’s work on numerical cognition. Recently, Núñez published a piece in Trends in Cognitive Sciences (Núñez, 2017) arguing that the current debate on the origins of “Number” in humans is very misleading. Part of this stems from ambiguity and carelessness regarding what we mean when we say “number”; to address this, Núñez establishes a set of criteria for what exactly “number” is. He then identifies which other abilities seem to be necessary, some of which are shared with other animals, some of which are present even in very young infants, and others of which are not even universal among humans (e.g. writing systems). Rather than arguing that “number” is somehow innate, he demonstrates that certain preconditions for number might be evolved, but that others are the product of culture. Note that this isn’t explicitly framed in terms of Nature vs. Nurture; it’s an account of how a given cognitive function (numerical cognition) developed, with a description of how different mechanisms facilitated this development.
Why It’s a Problem
The first reason this dichotomous framing is problematic is that it seeps into the public discourse surrounding an issue. At best, this spreads misinformation and misleads the public. At worst, it mutates into dangerous notions like eugenics, or the idea that our genes somehow determine “who we are”.
But I believe this isn’t just a problem of miscommunication between the scientific community and the public. This framing impacts how even the scientific community – who, by all accounts, should be aware of the nuance – thinks about certain issues. Dichotomous framing of hypotheses isn’t always bad, at least if research can be done to push our models one direction or another (Platt, 1964), but it can also lead to in-group divisions and misinterpretations of particular viewpoints. The Nature vs. Nurture dichotomy is so firmly ingrained that some might have a difficult time extracting themselves from it. This influences the direction of their research and how they react to other research (which they might perceive as “opposing” their camp), as well as the state of the field more generally – potentially causing a discipline to stagnate entirely.
Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48(1), 71-99.
Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274(5294), 1926-1928.
Núñez, R. E. (2017). Is there really an evolved capacity for number? Trends in Cognitive Sciences, 21(6), 409-424.
Platt, J. R. (1964). Strong inference. Science, 146(3642), 347-353.
Notes
Of course, this mechanism may be particularly well-adapted to learning language – though note that this is distinct from being a domain-selective adaptation. This distinction is why the two “sides” of the debate (Nativists vs. Non-nativists) tend to fall into similarly opposing camps on another debate: is the mind/brain modular or distributed? If humans evolved a language-specific faculty in the brain, it suggests a kind of inherent modularity to language in the brain; if they didn’t, and there’s a more general mechanism for learning and forming associations, it suggests a more distributed representation of language and language use.
FOXP2, sometimes inaccurately called the ‘language gene’, seems to be involved in coordinating complex motor movements (such as the movements of the tongue during speech articulation); mutations of the gene are associated with speech disorders.
This is, I believe, a subcase of a more general problem in how scientific questions are sometimes framed.