Nature and Nurture as an Enduring Tension in the History of Psychology

Nature–nurture is a dichotomous way of thinking about the origins of human (and animal) behavior and development, where “nature” refers to native, inborn, causal factors that function independently of, or prior to, the experiences (“nurture”) of the organism. In 19th-century psychology, nature–nurture debates were voiced in the language of instinct versus learning. In the first decades of the 20th century, it was widely assumed that humans and animals entered the world with a fixed set of inborn instincts. But in the 1920s and again in the 1950s, the validity of instinct as a scientific construct was challenged on conceptual and empirical grounds. As a result, most psychologists abandoned the term instinct, but they did not abandon the distinction between nature and nurture. In place of instinct, many psychologists made a semantic shift to terms like innate knowledge, biological maturation, and hereditary or genetic effects on development, all of which extend well into the 21st century. Still, for some psychologists, the earlier critiques of the instinct concept remain just as relevant to these more modern usages.

The tension in nature–nurture debates is commonly eased by claiming that explanations of behavior must involve reference to both nature-based and nurture-based causes. However, for some psychologists there is growing pressure to see the nature–nurture dichotomy as oversimplifying the development of behavior patterns. On this view, the division is both arbitrary and counterproductive. Rather than treat nature and nurture as separable causal factors operating on development, these psychologists treat nature–nurture as a distinction between product (nature) and process (nurture). Thus there has been a longstanding tension about how to define, separate, and balance the effects of nature and nurture.

Nature and Nurture in Development

The oldest and most persistent way to frame explanations of the behavioral and mental development of individuals is to distinguish between two separate sources of developmental causation: (a) intrinsic, preformed, or predetermined causes (“nature”) versus (b) extrinsic, experiential, or environmental causes (“nurture”). Inputs from these two sources are thought to add their own contributions to development (see Figure 1).

Figure 1. The traditional view of nature and nurture as separate causes of development. In the traditional view, nature and nurture are treated as independent causal influences that combine during development to generate outcomes. Note that, during development, the effects of nature and nurture (shown as crossing horizontal lines) remain independent, so their effects on outcomes are theoretically separable.

Because some traits seem to derive more from one source than the other, much of the tension associated with the nature–nurture division deals with disagreements about how to balance the roles of nature and nurture in the development of a trait.

Evidence of Nature in Development

Evidence to support the nature–nurture division usually derives from patterns of behavior that suggest a limited role of environmental causation, thus implying some effect of nature by default. Table 1 depicts some common descriptors and conditions used to infer that some preference, knowledge, or skill is nature based.

Table 1. Common Descriptors and Associated Conditions for Inferring the Effects of Nature on Development

Innate or unlearned: Displayed in the absence of relevant experience

Preparedness for learning: Rapidly or easily learned

Constraints on learning: Difficult or impossible to learn

Species-typical: Found in all like members of a species

Fixed: Difficult to modify following its appearance

Maturational: Emerges in an orderly sequence or at a specific time

Hereditary: Runs in families or with degrees of kinship

It is important to reiterate that nature-based causation (e.g., genetic determination) is inferred from these observations. Such inferences can generate tension because each of the observations listed here can be explained by nurture-based (environmental) factors. Confusion can also arise when evidence of one descriptor (e.g., being hereditary) is erroneously used to justify a different usage (e.g., that the trait is unlearned).

The Origins of Nature Versus Nurture

For much of recorded history, the distinction between nature and nurture was a temporal divide between what a person is innately endowed with at birth, prior to experience (nature), and what happens thereafter (nurture). It was not until the 19th century that the temporal division was transformed into a material division of causal influences (Keller, 2010). New views about heredity and Darwinian evolution justified distinguishing native traits and genetic causes from acquired traits and environmental causes. More so than before, the terms nature and nurture were often juxtaposed in an opposition famously described by Sir Francis Galton (1869) as that of “nature versus nurture.”

Galton began writing about heredity in the mid-1860s. He believed we would discover laws governing the transmission of mental as well as physical qualities. Galton’s take on mental heredity, however, was forged by his desire to improve the human race through a science he would later call “eugenics.” In the mid-19th century, British liberals assumed humans were equivalent at birth. Their social reform efforts were geared to enhancing educational opportunities and improving living conditions. Galton, a political conservative, opposed the notion of natural equality, arguing instead that people were inherently different at birth (Cowan, 2016), and that these inherited mental and behavioral inequalities were transmitted through lineages like physical qualities. Because Galton opposed the widely held Lamarckian idea that the qualities acquired in one’s lifetime could modify the inherited potential of subsequent generations, he believed long-lasting improvement of the human stock would only come by controlling breeding practices.

To explain the biological mechanisms of inheritance, Galton joined a growing trend in the 1870s to understand inheritance as involving the transmission of (hypothetical) determinative, germinal substances across generations. Foreshadowing a view that would later become scientific orthodoxy, Galton believed these germinal substances to be uninfluenced by the experiences of the organism. His theory of inheritance, however, was speculative. Realizing he was not equipped to fully explicate his theory of biological inheritance, Galton abandoned this line of inquiry by the end of that decade and refocused his efforts on identifying statistical laws of heredity of individual differences (Renwick, 2011).

Historians generally agree that Galton was the first to treat nature (as heredity) and nurture (everything else) as separate causal forces (Keller, 2010), but the schism gained biological legitimacy through the work of the German cytologist August Weismann in the 1880s. Whereas Galton’s theory was motivated by his political agenda, Weismann was motivated by a scientific, theoretical agenda. Namely, Weismann opposed Lamarckian inheritance and promoted a view of evolution driven almost entirely by natural selection.

Drawing upon contemporary cytological and embryological research, Weismann made the case that the determinative substances found in the germ cells of plants and animals (the “germ-plasm”) that are transmitted across generations were physically sequestered very early in embryogenesis and remained buffered from the other cells of the body (the “somato-plasm”). This so-called Weismann barrier meant that alterations in the soma that develop in the lifetime of the organism through the use or disuse of body parts would not affect the germinal substances transmitted during reproduction (see Winther, 2001, for review). On this view, Lamarckian-style inheritance of acquired characteristics was not biologically possible.

Galton’s and Weismann’s influence on the life sciences cannot be overstated. Their work convinced many to draw unusually sharp distinctions between the inherited (nature) and the acquired (nurture). Although their theories were met with much resistance and generated significant tension in the life sciences, from cytology to psychology, their efforts helped stage a new epistemic space through which to appreciate Mendel’s soon-to-be-rediscovered breeding studies and usher in genetics (Müller-Wille & Rheinberger, 2012).

Ever since, psychology has teetered between nature-biased and nurture-biased positions. With the rise of genetics, the wedge between nature and nurture was driven deeper in the early to mid-20th century, creating fields of study that focused exclusively on the effects of either nature or nurture.

The “Middle Ground” Perspective on Nature–Nurture

Twenty-first-century psychology textbooks often state that the nature–nurture debates have been resolved, and the tension relaxed, because we have moved on from emphasizing nature or nurture to appreciating that development necessarily involves both nature and nurture. In this middle-ground position, one asks how nature and nurture interact. For example, how do biological (or genetic) predispositions for behaviors or innate knowledge bias early learning experiences? Or how might environmental factors influence the biologically determined (maturational) unfolding of bodily form and behaviors?

Rejection of the Nature–Nurture Divide

For some, the “middle-ground” resolution is as problematic as “either/or” views and does not resolve a deeper source of tension inherent in the dichotomy. On this view, the nature–nurture divide is neither a legitimate nor a constructive way of thinking about development. Instead, developmental analysis reveals that the terms commonly associated with nature (e.g., innate, genetic, hereditary, or instinctual) and nurture (environmental or learned) are so entwined and confounded (and often arbitrary) that their independent effects cannot be meaningfully discussed. The nature–nurture division oversimplifies developmental processes, takes too much for granted, and ultimately hinders scientific progress. Thus not only is there a lingering tension about how to balance the effects of nature and nurture in the middle-ground view, but there is also growing pressure to move beyond the dichotomous nature–nurture framework.

Nativism in Behavior: Instincts

Definitions of instinct can vary tremendously, but many contrast (a) instinct with reason (or intellect, thought, will), which is related to but separable from contrasting (b) instinct with learning (or experience or habit).

Instinct in the Age of Enlightenment

Early usages of the instinct concept, following Aristotle, treated instinct as a mental, estimative faculty (vis aestimativa or aestimativa naturalis) in humans and animals that allowed objects in the world (e.g., a glimpsed predator) to be judged beneficial or harmful in a way that transcends immediate sensory experience but does not involve the use of reason (Diamond, 1971). In many of the early usages, the “natural instinct” of animals even included subrational forms of learning.

The modern usage of instincts as unlearned behaviors took shape in the 17th century. By that point it was widely believed that nature or God had implanted in animals and humans innate behaviors and predispositions (“instincts”) to promote the survival of the individual and the propagation of the species. Disagreements arose as to whether instincts derived from innate mental images or were mindlessly and mechanically (physiologically) generated from innately specified bodily organization (Richards, 1987).

Anti-Instinct Movement in the Age of Enlightenment

Challenges to the instinct concept can be found in the 16th century (see Diamond, 1971), but they were most fully developed by empiricist philosophers of the French Sensationalist tradition in the 18th century (Richards, 1987). Sensationalists asserted that animals behaved rationally and all of the so-called instincts displayed by animals could be seen as intelligently acquired habits.

For Sensationalists, instincts, as traditionally understood, did not exist. Species-specificity in behavior patterns could be explained by commonalities in physiological organization, needs, and environmental conditions. Even those instinctual behaviors seen at birth (e.g., that newly hatched chicks peck and eat grain) might eventually be explained by the animal’s prenatal experiences. Erasmus Darwin (1731–1802), for example, speculated that movement and swallowing experiences in ovo could account for the pecking and eating of grain by young chicks. The anti-instinct sentiment was clearly expressed by the Sensationalist Jean Antoine Guer (1713–1764), who warned that instinct was an “infantile idea” that could only be held by those ignorant of philosophy, that traditional appeals to instincts in animals not only explained nothing but served to hinder scientific explanations, and that nothing could be more superficial than to explain behavior by appealing to so-called instincts (Richards, 1987).

The traditional instinct concept survived. For most people, the complex, adaptive, species-specific behaviors displayed by naïve animals (e.g., caterpillars building cocoons; infant suckling behaviors) appeared to be predetermined and unlearned. Arguably as important, however, was the resistance to the theological implications of Sensationalist philosophy.

One of the strongest reactions to Sensationalism was put forward in Germany by Hermann Samuel Reimarus (1694–1768). As a natural theologian, Reimarus sought evidence of God in the natural world, and the species-specific, complex, and adaptive instincts of animals seemed to stand as the best evidence of God’s work. More so than anyone before him, Reimarus extensively catalogued instincts in humans and animals. Rather than treat instincts as behaviors, he defined instincts as natural impulses (inner drives) to act that were expressed perfectly, without reflection or practice, and served adaptive goals (Richards, 1987). He even proposed instincts for learning, a proposal that would resurface in the mid-20th century, as would his drive theory of instinct (Jaynes & Woodward, 1974).

Partly as a result of Reimarus’ efforts, the instinct concept survived going into the 19th century. But many issues surrounding the instinct concept were left unsettled. How do instincts differ from reflexive behaviors? What role does learning play in the expression of instincts, if any? Do humans have more or fewer instincts than animals? These questions would persist well into the first decades of the 20th century and ultimately fuel another anti-instinct movement.

Instinct in the 19th Century

In the 19th century, the tension about the nature and nurture of instincts in the lifetime of animals led to debates about the nature and nurture of instincts across generations. These debates dealt with whether instincts should be viewed as “inherited habits” from previous generations or whether they result from natural selection. Debating the relative roles of neo-Lamarckian use-inheritance versus neo-Darwinian natural selection in the transmutation of species became a significant source of tension in the latter half of the 19th century. Although the neo-Lamarckian notion of instincts as inherited habits was rejected in the 20th century, it has resurged in recent years (e.g., see Robinson & Barron, 2017).

Darwinian evolutionary theory required drawing distinctions between native and acquired behaviors, and, perhaps more so than before, behaviors were categorized along a continuum from the purely instinctive (unlearned), to the partially instinctive (requiring some learning), to the purely learned. Still, it was widely assumed that a purely instinctive response would be modified by experience after its first occurrence. As a result, instinct and habit were very much entangled in the lifetime of the organism. The notion of instincts as fixed and unmodifiable would not be widely advanced until after the rise of Weismann’s germ-plasm theory in the late 19th century.

Given the importance of instincts in evolutionary theory, there was greater interest in identifying pure instincts more objectively, beyond anecdotal reports. Some of the most compelling evidence was reported by Douglas Spalding (1844–1877) in the early 1870s (see Gray, 1967). Spalding documented numerous instances of naïve animals showing coordinated, seemingly adaptive responses (e.g., hiding) to objects (e.g., the sight of predators) upon their first encounter, and he helped pioneer the use of the deprivation experiment to identify instinctive behaviors. This technique involved selectively depriving young animals of seemingly critical learning experiences or sensory stimulation. Should animals display some species-typical action following deprivation, then, presumably, the behavior could be labeled as unlearned or innate. In all, these studies seemed to show that animals displayed numerous adaptive responses at the very start, prior to any relevant experience. In a variety of ways, Spalding’s work anticipated 20th-century studies of innate behavior. Not only would the deprivation experiment become the primary means by which European zoologists and ethologists detected native tendencies, but Spalding also reported evidence of what would later be called imprinting and critical period effects, as well as evidence of behavioral maturation.

Reports of pure instinct did not go unchallenged. Lloyd Morgan (1896) questioned the accuracy of these reports in his own experimental work with young animals. In some cases, he failed to replicate the results, and in other cases he found that instinctive behaviors were not as finely tuned to objects in the environment as had been claimed. Morgan’s research pointed to the need for greater precision in identifying learned and instinctive components of behavior, but, like most at the turn of the 20th century, he did not question that animal behavior involved both learned and instinctive elements.

A focus on instinctive behaviors intensified in the 1890s as Weismann’s germ-plasm theory grew in popularity. More so than before, a sharp distinction was drawn between native and acquired characteristics, including behavior (Johnston, 1995). Although some psychologists continued to maintain neo-Lamarckian notions, most German (Burnham, 1972) and American (Cravens & Burnham, 1971) psychologists were quick to adopt Weismann’s theory. They envisioned a new natural science of psychology that would experimentally identify the germinally determined, invariable set of native psychological traits in species and their underlying physiological (neural) basis. However, whereas English-speaking psychologists tended to focus on the social implications of this view for understanding social institutions, German psychologists were more interested in the longstanding philosophical implications of Weismann’s doctrine as it related to the differences (if any) between man and beast (Burnham, 1972).

Some anthropologists and sociologists, however, interpreted Weismann’s theory quite differently and used it to elevate sociology as its own scientific discipline. In the 1890s, the French sociologist Émile Durkheim, for example, interpreted Weismann’s germinal determinants as a generic force on human behavior that influenced the development of general predispositions, which were then molded by the circumstances of life (Meloni, 2016). American anthropologists reached similar conclusions in the early 20th century (Cravens & Burnham, 1971). Because Weismann’s theory divorced biological inheritance from social inheritance, and because heredity was treated as a generic force, sociologists felt free to study social (eventually, “cultural”) phenomena without reference to biological or psychological concerns.

Anti-Instinct Movement in the 1920s

Despite their differences, in the first two decades of the 20th century both psychologists and sociologists generally assumed that humans and animals had some native tendencies or instincts. Concerns were even voiced that instinct had not received enough attention in psychology. Disagreements about instincts continued to focus on the now centuries-old question of how to conceptualize them. Were they complex reflexes, impulses, or motives to act, or was instinct a mental faculty (like intuition), separate from reasoning and reflex (Herrnstein, 1972)?

In America, the instinct concept came under fire following a brief paper in 1919 by Knight Dunlap titled “Are There Any Instincts?” His primary concern dealt with teleological definitions of instincts in which an instinct referred to all the activities involved in obtaining some end-state (e.g., instincts of crying, playing, feeding, reproduction, war, curiosity, or pugnacity). Defined in this way, human instincts were simply labels for human activities, but how these activities were defined was arbitrarily imposed by the researchers. Is feeding, for instance, an instinct, or is it composed of more basic instincts (like chewing and swallowing)? The arbitrariness of classifying human behavior had led to tremendous inconsistencies and confusion among psychologists.

Not all of the challenges to instinct dealt with its teleological usage. Some of the strongest criticisms were voiced by Zing-Yang Kuo throughout the 1920s. Kuo was a Chinese animal psychologist who studied under Edward C. Tolman at the University of California, Berkeley. Although Kuo’s attacks on instinct changed throughout the 1920s (see Honeycutt, 2011), he ultimately argued that all behaviors develop in experience-dependent ways and that appeals to instinct were statements of ignorance about how behaviors develop. Like Dunlap, he warned that instincts were labels with no explanatory value. To illustrate, after returning to China, he showed that the so-called rodent-killing instinct in cats often cited by instinct theorists is not found in kittens reared with rodents (Kuo, 1930). These kittens, instead, became attached to the rodents, and they resisted attempts to train rodent-killing. Echoing the point made by Guer, Kuo claimed that appeals to instinct served to stunt scientific inquiry into the developmental origins of behavior.

But Kuo did not just challenge the instinct concept. He also argued against labeling behaviors as “learned.” After all, whether an animal “learns” depends on the surrounding environmental conditions, the physiological and developmental status of the animal, and, especially, the developmental (or experiential) history of that animal. Understanding learning also required developmental analysis. Thus Kuo targeted the basic distinction between nature and nurture, and he was not alone in doing so (e.g., see Carmichael, 1925), but his call to reject it did not spread to mainstream American psychologists.

By the 1930s, the term instinct had fallen into disrepute in psychology, but experimental psychologists (including behaviorists) remained committed to a separation of native from acquired traits. If anything, the dividing line between native and acquired behaviors became more sharply drawn than before (Logan & Johnston, 2007). For some psychologists, instinct was simply rebranded in the less contentious (but still problematic) language of biological drives or motives (Herrnstein, 1972). Many other psychologists simply turned to describing native traits as due to “maturation” and/or “heredity” rather than “instinct.”

Fixed Action Patterns

The hereditarian instinct concept received a reboot in Europe in the 1930s with the rise of ethology, led by Konrad Lorenz, Niko Tinbergen, and others. Just as animals inherit organs that perform specific functions, ethologists believed animals inherit behaviors that evolved to serve adaptive functions as well. Instincts were described as unlearned (inherited), blind, stereotyped, and adaptive fixed action patterns, impervious to change, that are initiated (released) by specific stimuli in the environment.

Ethologists in the 1930s and 1940s were united under the banner of innateness. They were increasingly critical of the trend among American psychologists (i.e., behaviorists) to focus on how a limited number of domesticated species (e.g., the white rat) responded to training in artificial settings (Burkhardt, 2005). Ethologists instead began with rich descriptions of animal behavior in more natural environments along with detailed analyses of the stimulus conditions that released the fixed action patterns. To test whether behavioral components were innate, ethologists relied primarily on the deprivation experiment popularized by Spalding in the 19th century. Using these methods (and others), ethologists identified numerous fascinating examples of instinctive behaviors, which captured mainstream attention.

In the early 1950s, shortly after ethology had gained professional status (Burkhardt, 2005), a series of challenges regarding instinct and innateness was put forth by a small cadre of North American behavioral scientists (e.g., T. C. Schneirla, Donald Hebb, Frank Beach). Arguably the most influential critique was voiced by the comparative psychologist Daniel Lehrman (1953), who presented a detailed and damning critique of deprivation experiments on empirical and logical grounds. Lehrman explained that deprivation experiments isolate the animal from some but not all experiences. Thus deprivation experiments simply change what an animal experiences rather than eliminating experience altogether, and so they cannot possibly determine whether a behavior is innate (independent of experience). Instead, these experiments show what environmental conditions do not matter in the development of a behavior but do not speak to what conditions do matter.

Lehrman went on to argue that the whole endeavor to identify instinctive or innate behavior was misguided from the start. All behavior, according to Lehrman, develops from a history of interactions between an organism and its environment. If a behavior is found to develop in the absence of certain experiences, the researcher should not stop and label it as innate. Rather, research should continue to identify the conditions under which the behavior comes about. In line with Kuo, Lehrman repeated the warning that to label something as instinctive (or inherited or maturational) is a statement of ignorance about how that behavior develops and does more to stunt than promote research.

Lehrman’s critique created significant turmoil among ethologists. As a result, ethologists took greater care in using the term innate, and the critique led to new attempts to synthesize or re-envision learning and instinct.

Some of these attempts focused on an increased role for learning and experience in the ontogeny of species-typical behaviors. These efforts spawned significant cross-talk between ethologists and comparative psychologists to more thoroughly investigate behavioral development under natural conditions. Traditional appeals to instinct and learning (as classical and operant conditioning) were both found to be inadequate for explaining animal behavior. In their stead, these researchers focused more closely on how anatomical, physiological, experiential, and environmental conditions influenced the development of species-typical behaviors.

Tinbergen (1963) was among those ethologists who urged greater developmental analysis of species-typical behaviors, and he included development as one of his four problems in the biological study of organisms, along with causation (mechanism), survival value (function), and evolution. Of these four problems, Tinbergen believed ethologists were especially well suited to study survival value, which he felt had been seriously neglected (Burkhardt, 2005).

The questions of survival value coupled with models of population genetics would gain significant momentum in the 1960s and 1970s in England and the United States with the rise of behavioral ecology and sociobiology (Griffiths, 2008). But because these new fields seemed to promote some kind of genetic determinism in behavioral development, they were met with much resistance and reignited a new round of nature–nurture debates in the 1970s (see Segerstrale, 2000).

However, not all ethologists abandoned the instinct concept. Lorenz, in particular, continued to defend the division between nature and nurture. Rather than speaking of native and acquired behaviors, Lorenz later spoke of two different sources of information for behavior (innate/genetic vs. acquired/environmental), which was more a subtle shift in language than it was an actual change in theory, as Lehrman later pointed out.

Some ethologists followed Lorenz’s lead and continued to maintain more of a traditional delineation between instinct and learning. Their alternative synthesis viewed learning as instinctive (Gould & Marler, 1987). They proposed that animals have evolved domain-specific “instincts to learn” that result from the animal’s genetic predispositions and innate knowledge. To support the idea of instincts for learning, ethologists pointed to traditional ethological findings (on imprinting and birdsong learning), but they also drew from the growing body of work in experimental psychology that seemed to indicate certain types of biological effects on learning.

Biological Constraints and Preparedness

While ethology was spreading in Europe in the 1930s–1950s, behaviorism reigned in the United States. Just as ethologists were confronted with including a greater role of nurture in their studies, behaviorists were challenged to consider a greater role of nature.

Behaviorists assumed there to be some behavioral innateness (e.g., fixed action patterns, unconditioned reflexes, primary reinforcers and drives). But because behaviorists focused on learning, they tended to study animals in laboratory settings using biologically (or ecologically) irrelevant stimuli and responses to minimize any role of instinct (Johnston, 1981). It was widely assumed that these studies would identify general laws of learning that applied to all species regardless of the specific cues, reinforcers, and responses involved.

Challenges to the generality assumption began to accumulate in the 1960s. Some studies pointed to failures that occurred during conditioning procedures. Breland and Breland (1961), for example, reported that some complex behaviors formed through operant conditioning would eventually become “displaced” by conditioned fixed action patterns in a phenomenon they called “instinctive drift.” Studies of taste-aversion learning (e.g., Garcia & Koelling, 1966) also reported the failure of rats to associate certain events (e.g., flavors with shock or audiovisual stimuli with toxicosis).

Other studies pointed to enhanced learning. In particular, it was found that rats could form strong conditioned taste aversions after only a single pairing of a novel flavor with illness, even when the illness followed the flavor by hours. Animals, it seemed, had evolved innate predispositions to form (or not form) certain associations.

In humans, studies of biological constraints on learning were mostly limited to fear conditioning. Evidence indicated that humans conditioned differently to (biologically or evolutionarily) fear-relevant stimuli like pictures of spiders or snakes than to fear-irrelevant stimuli like pictures of mushrooms or flowers (Ohman, Fredrikson, Hugdahl, & Rimmö, 1976).

These findings and others were treated as a major problem in learning theory and led to calls for a new framework to study learning from a more biologically oriented perspective that integrated the evolutionary history and innate predispositions of the species. These predispositions were described as biological “constraints” on, “preparedness,” or “adaptive specializations” for learning, all of which were consistent with the “instincts to learn” framework proposed by ethologists.

By the 1980s it was becoming clear that the biological preparedness/constraint view of learning suffered some limitations. For example, what counts as a “biological” constraint was questioned. It was well established that there were general constraints on learning associated with the intensity, novelty, and timing of stimuli, but, arbitrarily it seemed, these constraints were not classified as “biological” (Domjan & Galef, 1983). Other studies of “biological constraints” found that 5- and 10-day-old rats readily learned to associate a flavor with shock (unlike adults), but (like adults) such conditioning was not found in 15-day-old rats (Hoffman & Spear, 1988). In other words, the constraint on learning was not present in young rats but developed later in life, suggesting a possible role of experience in bringing about the adult-like pattern.

Attempts to address these limitations led to numerous calls for more ecologically oriented approaches to learning, not unlike the synthesis between ethology and comparative psychology in the 1960s. All ecological approaches to learning proposed that learning should be studied in the context of “natural” (recurrent and species-typical) problems that animals encounter (and have evolved to encounter), using ecologically meaningful stimuli and responses. Some argued (e.g., Johnston, 1981) that studies of learning should take place within the larger context of studying how animals develop and adapt to their surroundings. Others (Domjan & Galef, 1983) pointed to more of a comparative approach to studying animal learning, in line with behavioral ecology, that takes into account how learning can be influenced by the selective pressures faced by each species. Still, how to synthesize biological constraints on learning (and their evolutionary explanations) with a general-process approach remains a source of tension in experimental psychology.

Nativism in Mind: Innate Ideas

Nativism and Empiricism in Philosophy

In the philosophy of mind, nature–nurture debates are voiced as debates between nativists and empiricists. Nativism is a philosophical position that holds that our minds have some innate (a priori to experience) knowledge, concepts, or structure at the very start of life. Empiricism, in contrast, holds that all knowledge derives from our experiences in the world.

However, rarely (if ever) were there pure nativist or empiricist positions, but the positions bespeak a persistent tension. Empiricists tended to eschew innateness and promote a view of mental content as built by general mechanisms (e.g., association) operating on sensory experiences, whereas nativists tended to promote a view of mind that contains domain-specific, innate processes and/or content (Simpson, Carruthers, Laurence, & Stich, 2005). Although the tension about mental innateness would loosen as empiricism gained prominence in philosophy and science, the strain never went away and would intensify again in the 20th century.

Nativism in 20th Century Psychology: The Case of Language Development

In the first half of the 20th century, psychologists generally assumed that knowledge was gained or constructed through experience with the world. This is not to say that psychologists did not assume some innate knowledge. The Swiss psychologist Jean Piaget, for example, believed infants enter the world with some innate knowledge structures, particularly as they relate to early sensory and motor functioning (see Piaget, 1971). But the bulk of his work dealt with the construction of conceptual knowledge as children adapt to their worlds. By and large, there were no research programs in psychology that sought to identify innate factors in human knowledge and cognition until the 1950s (Samet & Zaitchick, 2017).

An interest in psychological nativism was instigated in large part by Noam Chomsky’s (1959) critique of B. F. Skinner’s book on language. To explain the complexity of language, Chomsky argued, we must view language as the knowledge and application of grammatical rules. He went on to claim that the acquisition of these rules could not be attributed to any general-purpose learning process (e.g., reinforcement). Indeed, language acquisition occurs despite very little explicit instruction. Moreover, language is special in terms of its complexity, the ease and speed of its acquisition by children, and its uniqueness to humans. Instead, Chomsky claimed that our minds innately contain some language-specific knowledge that kick-starts and promotes language acquisition. He later claimed this knowledge could be considered a specialized mental faculty or module he called the “language acquisition device” (Chomsky, 1965), or what Pinker (1995) later called the “language instinct.”

To support the idea of linguistic nativism, Chomsky and others appealed to the poverty of the stimulus argument. In short, this argument holds that our experiences in life are insufficient to explain our knowledge and abilities. When applied to language acquisition, the argument holds that children’s knowledge of language (grammar) goes far beyond the limited, and sometimes broken, linguistic events that children directly encounter. Additional evidence for nativism drew upon the apparent maturational quality of language development. Despite wide variations in languages and child-rearing practices across the world, the major milestones in language development appear to unfold in children in a universal sequence and timeline, and some evidence suggested a critical period for language acquisition.

Nativist claims about language sparked intense rebuttals by empiricist-minded psychologists and philosophers. Some of these retorts tackled the logical limitations of the poverty of stimulus argument. Others pointed to the importance of learning and social interaction in driving language development, and still others showed that language (grammatical knowledge) may not be uniquely human (see Tomasello, 1995, for review). Nativists, in due course, provided their own rebuttals to these challenges, creating a persistent tension in psychology.

Extending Nativism Beyond Language Development

In the decades that followed, nativist arguments expanded beyond language to include cognitive domains that dealt with understanding the physical, psychological, and social worlds. Developmental psychologists were finding that infants appeared to be much more knowledgeable in cognitive tasks (e.g., in understanding object permanence) and more skillful (e.g., in imitating others) than had previously been thought, and at much younger ages. Infants also showed a variety of perceptual biases (e.g., a preference for face-like stimuli over equally complex non-face-like stimuli) from very early on. Following the standard poverty of the stimulus argument, these findings were taken as evidence that infants enter the world with some sort of primitive, innate, representational knowledge (or domain-specific neural mechanisms) that constrains and promotes subsequent cognitive development. The nature of this knowledge (e.g., as theories or as core knowledge), however, continues to be debated (Spelke & Kinzler, 2007).

Empiricist-minded developmental psychologists responded by demonstrating shortcomings in the research used to support nativist claims. For example, in studies of infants’ object knowledge, the behavior of infants (looking time) in nativist studies could be attributed to relatively simple perceptual processes rather than to the infants’ conceptual knowledge (Heyes, 2014). Likewise, reports of human neonatal imitation not only suffered from failures to replicate but could be explained by simpler mechanisms (e.g., arousal) than true imitation (Jones, 2017). Finally, studies of perceptual preferences found in young infants, like newborn preferences for face-like stimuli, may not be specific preferences for faces per se but instead may reflect simpler, nonspecific perceptual biases (e.g., preferences for top-heavy visual configurations and congruency; Simion & Di Giorgio, 2015).

Other arguments from empiricist-minded developmental psychologists focused on the larger rationale for inferring innateness. Even if it is conceded that young infants, like two-month-olds, or even two-day-olds, display signs of conceptual knowledge, there is no good evidence to presume the knowledge is innate. Their knowledgeable behaviors could still be seen as resulting from their experiences (many of which may be nonobvious to researchers) leading up to the age of testing (Spencer et al., 2009).

In the 21st century, there is still no consensus about the reality, extensiveness, or quality of mental innateness. If there is innate knowledge, can experience add new knowledge or only expand the initial knowledge? Can the doctrine of innate knowledge be falsified? There are no agreed-upon answers to these questions. The recurring arguments for and against mental nativism continue to confound developmental psychologists.

Maturation Theory

The emergence of bodily changes and basic behavioral skills sometimes occurs in an invariant, predictable, and orderly sequence in a species despite wide variations in rearing conditions. These observations are often attributed to the operation of an inferred, internally driven, maturational process. Indeed, 21st-century textbooks in psychology commonly associate “nature” with “maturation,” where maturation is defined as the predetermined unfolding of the individual from a biological or genetic blueprint. Environmental factors play a necessary, but fundamentally supportive, role in the unfolding of form.

Preformationism Versus Epigenesis in the Generation of Form

The embryological generation of bodily form was debated in antiquity but received renewed interest in the 17th century. Following Aristotle, some claimed that embryological development involved “epigenesis,” defined as the successive emergence of form from a formless state. Epigenesists, however, struggled to explain what orchestrated development without appealing to Aristotelean souls. Attempts were made to invoke natural causes like physical and chemical forces, but, despite their best efforts, the epigenesists were forced to appeal to the power of presumed, quasi-mystical, vitalistic forces (entelechies) that directed development.

The primary alternative to epigenesis was “preformationism,” which held that development involved the growth of pre-existing form from a tiny miniature (homunculus) that formed immediately after conception or was preformed in the egg or sperm. Although it seems reasonable to guess that the invention and widespread use of the microscope would immediately lay to rest any claim of homuncular preformationism, this was not the case. To the contrary, some early microscopists claimed to see signs of miniature organisms in sperm or eggs, and failures to find these miniatures were explained away (e.g., the homunculus was transparent or deflated to the point of being unrecognizable). But as microscopes improved and more detailed observations of embryological development were reported in the late 18th and 19th centuries, homuncular preformationism was finally refuted.

From Preformationism to Predeterminism

Despite the rejection of homuncular preformationism, preformationist appeals can be found throughout the 19th century. One of the most popular preformationist theories of embryological development was put forth by Ernst Haeckel in the 1860s (Gottlieb, 1992). He promoted a recapitulation theory (not original to Haeckel) that maintained that the development of the individual embryo passes through all the ancestral forms of its species. Ontogeny was thought to be a rapid, condensed replay of phylogeny. Indeed, for Haeckel, phylogenesis was the mechanical cause of ontogenesis: the phylogenetic evolution of the species created the maturational unfolding of embryonic form. Exactly how this unfolding took place was less important than its phylogenetic basis.

Most embryologists were not impressed with recapitulation theory. After all, the great embryologist Karl Ernst von Baer (1792–1876) had refuted strict recapitulation decades earlier. Instead, there was greater interest in how best to explain the mechanical causes of development, ushering in a new “experimental embryology.” Many experimental embryologists followed the earlier epigenesists by discussing vitalistic forces operating on the unorganized zygote. But it soon became clear that the zygote was structured, and many came to believe the zygote contained special (unknown) substances that specified development. Epigenesis-minded experimental embryologists soon warned that the old homuncular preformationism was being transformed into a new predetermined preformationism.

As a result, the debates between preformationism and epigenesis were reignited in experimental embryology, but the focus of these debates shifted to the various roles of nature and nurture during development. More specifically, research focused on the extent to which early cellular differentiation was predetermined by factors internal to cells like chromosomes or cytoplasm (preformationism, nature) or involved factors (e.g., location) outside of the cell (epigenesis, nurture). The former emphasized reductionism and developmental programming, whereas the latter emphasized some sort of holistic, regulatory system responsive to internal and external conditions. The tension between viewing development as predetermined or “epigenetic” persists into the 21st century.

Preformationism gained momentum in the 20th century following the rediscovery of Mendel’s studies of heredity and the rapid rise of genetics, but not because of embryological research on the causes of early differentiation. Instead, preformationism prevailed because it seemed embryological research on the mechanisms of development could be ignored in studies of hereditary patterns.

The initial split between heredity and development can be found in Galton’s speculations but is usually attributed to Weismann’s germ-plasm theory. Weismann’s barrier seemed to posit that the germinal determinants present at conception would be the same, unaltered determinants transmitted during reproduction. This position, later dubbed “Weismannism,” was ironically not one promoted by Weismann. Like nearly all theorists in the 19th century, he viewed the origins of variation and heredity as developmental phenomena (Amundson, 2005), and he claimed that the germ-plasm could be directly modified in the lifetime of the organism by environmental (e.g., climatic and dietary) conditions (Winther, 2001). Still, Weismann’s theory treated development as a largely predetermined affair driven by inherited, germinal determinants buffered from most developmental events. As such, it helped set the stage for a more formal divorce between heredity and development with the rise of Mendelism in the early 20th century.

Mendel’s theory of heredity was exceptional in how it split development from heredity (Amundson, 2005). More so than in Weismann’s theory, Mendel’s theory assumed that the internal factors that determine form and are transmitted across generations remain unaltered in the lifetime of the organism. To predict offspring outcomes, one need only know the combination of internal factors present at conception and their dominance relations. Exactly how these internal factors determined form could be disregarded. The laws of hereditary transmission of the internal factors (e.g., segregation) did not depend on the development or experiences of the organism or the experiences of the organism’s ancestors. Thus the experimental study of heredity (i.e., breeding) could proceed without reference to ancestral records or embryological concerns (Amundson, 2000). By the mid-1920s, the Mendelian factors (now commonly called “genes”) were found to be structurally arranged on chromosomes, and the empirical study of heredity (transmission genetics) was officially divorced from studies of development.

The splitting of heredity and development found in Mendel’s and Weismann’s work met with much resistance. Neo-Lamarckian scientists, especially in the United States (Cook, 1999) and France (Loison, 2011), sought unsuccessfully to experimentally demonstrate the inheritance of acquired characteristics into the 1930s.

In Germany during the 1920s and 1930s, resistance to Mendelism dealt with the chromosomal view of Mendelian heredity championed by American geneticists who were narrowly focused on studying transmission genetics at the expense of developmental genetics. German biologists, in contrast, were much more interested in the broader roles of genes in development (and evolution). In trying to understand how genes influence development, particularly of traits of interest to embryologists, they found the Mendelian theory to be lacking. In the decades between the world wars, German biologists proposed various expanded views of heredity that included some form of cytoplasmic inheritance (Harwood, 1985).

Embryologists resisted the preformationist view of development throughout the early to mid-20th century, often maintaining no divide between heredity and development, but their objections were overshadowed by genetics and its eventual synthesis with evolutionary theory. Consequently, embryological development was treated by geneticists and evolutionary biologists as a predetermined, maturational process driven by internal, “genetic” factors buffered from environmental influence.

Maturation Theory in Psychology

Maturation theory was first applied to behavioral development in the 19th century through Haeckel’s recapitulation theory. Some psychologists believed that the mental growth of children recapitulated the history of the human race (from savage brute to civilized human). With this in mind, many people began to more carefully document child development. Recapitulationist notions were found in the ideas of many notable psychologists in the 19th and early 20th centuries (e.g., G. S. Hall), and, as such, the concept played an important role in the origins of developmental psychology (Koops, 2015). But for present purposes what is most important is that children’s mental and behavioral development was thought to unfold via a predetermined, maturational process.

With the growth of genetics, maturational explanations were increasingly invoked to explain nearly all native and hereditary traits. As the instinct concept lost value in the 1920s, maturation theory gained currency, although the shift was largely a matter of semantics. For many psychologists, the language simply shifted from “instinct versus learning” to “maturation versus practice/experience” (Witty & Lehman, 1933).

Initial lines of evidence for maturational explanations of behavior were often the same as those that justified instinct and native traits, but new embryological research presented in the mid-1920s converged in support of strictly maturational explanations of behavioral development. In these experiments (see Wyman, 2005, for review), spanning multiple laboratories, amphibians (salamanders and frogs) were exposed to drugs that acted as anesthetics and/or paralytics throughout the early stages of development, thus reducing sensory experience and/or motor practice. Despite the reduced sensory experience and their inability to move, these animals showed no delays in the onset of motor development once the drugs wore off.

This maturational account of motor development in amphibians fit well with contemporaneous studies of motor development in humans. The orderly, invariant, and predictable (age-related) sequential appearance of motor skills documented in infants reared under different circumstances (in different countries and across different decades) was seen as strong evidence for a maturational account. Additional evidence was reported by Arnold Gesell and Myrtle McGraw, who independently presented evidence in the 1920s to show that the pace and sequence of motor development in infancy were not altered by special training experiences. Although the theories of these maturation theorists were more sophisticated when applied to cognitive development, their work promoted a view in which development was primarily driven by neural maturation rather than experience (Thelen, 2000).

Critical and Sensitive Periods

As the maturation account of behavioral development gained ground, it became clear that environmental input played a more informative role than had previously been thought. Environmental factors were found to either disrupt or induce maturational changes at specific times during development. Embryological research suggested that there were well-delineated time periods of heightened sensitivity in which specific experimental manipulations (e.g., tissue transplantations) could induce irreversible developmental changes, but the same manipulation would have no effect outside of that critical period.

In the 1950s–1960s a flurry of critical period effects were reported in birds and mammals across a range of behaviors including imprinting, attachment, socialization, sensory development, bird song learning, and language development (Michel & Tyler, 2005). Even though these findings highlighted an important role of experience in behavioral development, evidence of critical periods was usually taken to imply some rigid form of biological determinism (Oyama, 1979).

As additional studies were conducted on critical period effects, it became clear that many of the reported effects were more gradual, variable, and experience-dependent, and not necessarily as irreversible as had previously been assumed. In light of these reports, there was a push in the 1970s (e.g., Connolly, 1972) to substitute “sensitive period” for “critical period” to avoid the predeterminist connotations associated with the latter and to better appreciate that these periods simply describe (not explain) certain temporal aspects of behavioral development. As a result, a consensus emerged that behaviors should not be attributed to “time” or “age” but to the developmental history and status of the animal under investigation (Michel & Tyler, 2005).

Heredity and Genetics

In the decades leading up to and following the start of the 20th century, it was widely assumed that many psychological traits (not just instincts) were inherited or “due to heredity,” although the underlying mechanisms were unknown. Differences in intelligence, personality, and criminality within and between races and sexes were largely assumed to be hereditary and unalterable by environmental intervention (Gould, 1996). The evidence to support these views in humans was often derived from statistical analyses of how various traits tended to run in families. But all too frequently, explanations of data were clouded by pre-existing, hereditarian assumptions.

Human Behavioral Genetics

The statistical study of inherited human (physical, mental, and behavioral) differences was pioneered by Galton (1869). Although at times Galton wrote that nature and nurture were so intertwined as to be inseparable, he nevertheless devised statistical methods to separate their effects. In the 1860s and 1870s, Galton published reports purporting to show how similarities in intellect (genius, talent, character, and eminence) in European lineages appeared to be a function of degree of relatedness. Galton considered, but dismissed, environmental explanations of his data, leading him to confirm his belief that nature was stronger than nurture.

Galton also introduced the use of twin studies to tease apart the relative impact of nature versus nurture, but the twin method he used was markedly different from later twin studies used by behavioral geneticists. Galton tracked the life history of twins who were judged to be very similar or very dissimilar near birth (i.e., by nature) to test the power of various postnatal environments (nurture) that might make them more or less similar over time. Here again, Galton concluded that nature overpowers nurture.

Similar pedigree (e.g., the Kallikak study; see Zenderland, 2001) and twin studies appeared in the early 1900s, but the first adoption study and the modern twin method (which compares monozygotic to dizygotic twin pairs) did not appear until the 1920s (Rende, Plomin, & Vandenberg, 1990). These reports led to a flurry of additional work on the inheritance of mental and behavioral traits over the next decade.

Behavioral genetic research peaked in the 1930s but rapidly lost prominence, owing in large part to its association with the eugenics movement (spearheaded by Galton) and to the rise and eventual hegemony of behaviorism and the social sciences in the United States. Behavioral genetics resurged in the 1960s with the rising tide of nativism in psychology and returned to its 1930s-level prominence in the 1970s (McGue & Gottesman, 2015).

The resurgence brought with it a new statistical tool: the heritability statistic. The origins of heritability trace back to early attempts by Ronald Fisher and others to synthesize Mendelian genetics with biometrics. This synthesis ushered in the new field of quantitative genetics, and it marked a new way of thinking about nature and nurture: no longer as causes of traits in individuals but as causes of trait variation among individuals in a population. Eventually, heritability came to refer to the amount of variance in a population sample that could be statistically attributed to genetic variation in that sample. Kinship (especially twin) studies provided seemingly straightforward ways of partitioning trait variation in a population into genetic versus environmental sources.
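
In its quantitative-genetic form the idea can be stated exactly. As a minimal sketch (assuming the simplest possible model, with no shared-environment, interaction, or covariance terms), phenotypic variance $V_P$ is partitioned into genetic and environmental components, and heritability is the genetic share:

$$
h^2 = \frac{V_G}{V_P}, \qquad V_P = V_G + V_E .
$$

The classic twin-based estimate (Falconer’s formula) doubles the difference between the trait correlations of monozygotic ($r_{MZ}$) and dizygotic ($r_{DZ}$) twin pairs:

$$
\hat{h}^2 = 2\,(r_{MZ} - r_{DZ}) .
$$

With hypothetical correlations of $r_{MZ} = 0.70$ and $r_{DZ} = 0.45$, for example, $\hat{h}^2 = 2(0.70 - 0.45) = 0.50$; half of the sample’s trait variance is statistically attributed to genetic variation in that sample.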

Into the early 21st century, hundreds of behavioral genetic studies of personality, intelligence, and psychopathology were reported. With rare exceptions, these studies converge on a pervasive influence of genetics on human psychological variation.

These studies have also fueled much controversy. Citing in part behavioral genetic research, the educational psychologist Arthur Jensen (1969) claimed that the differences in intelligence and educational achievement in the United States between black and white students appeared to have a strong genetic basis. He went on to assume that because these racial differences appeared hereditary, they were likely impervious to environmental (educational) intervention. His article fanned the embers of past eugenics practices and ignited fiery responses (e.g., Hirsch, 1975). The ensuing debates not only spawned a rethinking of intelligence and how to measure it, but they ushered in a more critical look at the methods and assumptions of behavioral genetics.

Challenges to Behavioral Genetics

Many of the early critiques of behavioral genetics centered on interpreting the heritability statistic commonly calculated in kinship (family, twin, and adoption) studies. Perhaps more than any other statistic, heritability has been persistently misinterpreted by academics and laypersons alike (Lerner, 2002). Contrary to popular belief, heritability tells us nothing about the relative impact of genetic and environmental factors on the development of traits in individuals. It accounts for trait variation between people, not the causes of traits within people. As a result, a high heritability does not indicate anything about the fixity of traits or their imperviousness to environmental influence (contra Jensen), and a low heritability does not indicate an absence of genetic influence on trait development. Worse still, because its value is bound to the particular population and range of environments sampled, heritability does not even indicate anything about the role of genetics in generating the differences between people.

Other challenges to heritability focused not on its interpretation but on its underlying computational assumptions. Most notably, heritability analyses assume that genetic and environmental contributions to trait differences are independent and additive; the interaction between genetic and environmental factors was dismissed a priori in these analyses. Studies of development, however, show that no factor (genes, hormones, parenting, schooling) operates independently, making it impossible to quantify how much of a given trait in a person is due to any single causal factor. Thus heritability analyses are bound to be misleading because they are based on biologically implausible and logically indefensible assumptions about development (Gottlieb, 2003).
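
The objection can be stated compactly. A fuller decomposition of phenotypic variance would include interaction and covariance terms,

$$
V_P = V_G + V_E + V_{G \times E} + 2\,\mathrm{Cov}(G, E),
$$

whereas the kinship analyses criticized here proceed as if $V_{G \times E} = \mathrm{Cov}(G, E) = 0$, reducing the model to the additive form $V_P = V_G + V_E$. On the Developmentalist view, gene–environment interdependence is ubiquitous, so the discarded terms are not negligible and the resulting $h^2$ misrepresents how traits are built.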

Aside from heritability, kinship studies have been criticized for being unable to disentangle genetic and environmental effects on variation. It had long been known that in family (pedigree) studies, environmental and genetic factors are confounded. Twin and adoption studies seemed to provide unique opportunities to statistically disentangle these effects, but they too are deeply problematic in their assumptions and methodology. There are numerous plausible environmental reasons why monozygotic twin pairs could resemble each other more than dizygotic twin pairs, or why adopted children might more closely resemble their biological than their adoptive parents (Joseph & Ratner, 2013).

A more recent challenge to behavioral genetics came from an unlikely source. Advances in genomic scanning in the 21st century made it possible in a single study to correlate thousands of genetic polymorphisms with variation in the psychological profiles (e.g., intelligence, memory, temperament, psychopathology) of thousands of people. These “genome-wide association” studies seemed to have the power and precision to finally identify genetic contributions to heritability at the level of single nucleotides. Yet, these studies consistently found only very small effects.
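
The logic of such a study can be sketched in a few lines of code. The following is an illustrative toy, not a reconstruction of any published pipeline: it simulates genotypes and a trait with no genetic signal built in, regresses the trait on each polymorphism separately, and corrects for the number of tests (real studies add covariates, quality control, and corrections for population structure).

```python
# Toy sketch of the per-variant test at the heart of a genome-wide
# association study (GWAS); all data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_snps = 1_000, 5_000

# Genotypes coded as minor-allele counts (0, 1, or 2) per person per SNP.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))

# A continuous trait (e.g., a test score); no genetic effect is simulated.
trait = rng.normal(size=n_people)

# Regress the trait on each SNP separately, keeping each test's p-value.
p_values = np.array(
    [stats.linregress(genotypes[:, j], trait).pvalue for j in range(n_snps)]
)

# Bonferroni-style threshold: with thousands of tests, raw p < .05 means little.
threshold = 0.05 / n_snps
print(f"SNPs passing correction: {(p_values < threshold).sum()} of {n_snps}")
```

Run on null data like this, the expected number of “hits” is near zero. The small effects described in the text correspond to real studies in which individual polymorphisms do pass such thresholds but each explains only a tiny fraction of trait variance.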

The failure to find large effects came to be known as the “missing heritability” problem (Maher, 2008). To account for the missing heritability, some behavioral geneticists and molecular biologists asserted that important genetic polymorphisms remain undiscovered, that some variants may be too rare to detect, and/or that current studies are not well equipped to handle gene–gene interactions. These studies were also insensitive to epigenetic profiles (see the section on Behavioral Epigenetics), which concern differences in gene expression: even when people share genes, they may differ in whether those genes are expressed during their lifetimes.
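
The size of the gap can be illustrated with hypothetical (though era-typical) magnitudes. If twin studies attribute about half of a trait’s variance to genetics while all genome-wide significant variants jointly account for only about a twentieth,

$$
h^2_{\text{twin}} \approx 0.50, \qquad h^2_{\text{SNP}} \approx 0.05, \qquad 0.50 - 0.05 = 0.45,
$$

then roughly 90% of the twin-based heritability is “missing,” which is the discrepancy the explanations above were proposed to absorb.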

But genome-wide association studies faced an even more problematic issue: Many of these studies failed to replicate (Lickliter & Honeycutt, 2015). For those who viewed heritability analyses as biologically implausible, the small effect sizes and failures to replicate in genome-wide association studies were not that surprising. The search for independent genetic effects was bound to fail, because genes simply do not operate independently during development.

Behavioral Epigenetics

Epigenetics was a term coined in the 1940s by the developmental biologist Conrad Waddington to refer to a new field of study that would examine how genetic factors interact with local environmental conditions to bring about the embryological development of traits. By the end of the 20th century, epigenetics came to refer to the study of how nongenetic, molecular mechanisms physically regulate gene expression patterns in cells and across cell lineages. The most-studied mechanisms involve organic compounds (e.g., methyl groups) that physically bind to DNA or to the surrounding proteins that package DNA. The addition or removal of these compounds can activate or silence gene transcription. Different cell types have different, stable epigenetic markings, and these markings are recreated during cell division so that marked cells give rise to similar types of cells. Epigenetic changes were known to occur during developmental periods of cellular differentiation (e.g., during embryogenesis), but not until 2004 was it discovered that these changes can also occur at other periods in the lifespan, including after birth (Roth, 2013).

Of interest to psychologists were reports that different behavioral and physiological profiles (e.g., stress reactivity) of animals were associated with different epigenetic patterns in the nervous system (Moore, 2015). Furthermore, these different epigenetic patterns could be established or modified by environmental factors (e.g., caregiving practices, training regimes, or environmental enrichment), and, under certain conditions, they remain stable over long periods of time (from infancy to adulthood).

Because epigenetic research investigates the physical interface between genes and environment, it represents an exciting advance in understanding the interaction of nature and nurture. Despite some warnings that the excitement over behavioral epigenetic research may be premature (e.g., Miller, 2010), for many psychologists, epigenetics underscores how development involves both nature and nurture.

For others, what is equally exciting is the additional evidence epigenetics provides that the genome is an interactive and regulated system. Once viewed as the static director of development, buffered from environmental influence, the genome is better described as a developing resource of the cell (Moore, 2015). More broadly, epigenetics also shows that development is not a genetically (or biologically) predetermined affair. Instead, epigenetics provides additional evidence that development is a probabilistic process, contingent on factors internal and external to the organism. In this sense, epigenetics is well positioned to help dissolve the nature–nurture dichotomy.

Beyond Nature–Nurture

In the final decades of the 20th century, a position was articulated to move beyond the dichotomous nature–nurture framework. The middle-ground position on nature–nurture did not seem up to the task of explaining the origins of form, and it brought about more confusion than clarity. The historical swings of the pendulum between nature- and nurture-based positions had produced no lasting resolution. Moving forward would require moving beyond such dichotomous thinking (Johnston, 1987).

The anti-dichotomy position, referred to as the Developmentalist tradition, was expressed in a variety of systems-based, metatheoretical approaches to studying development, all of which extended the arguments against nature–nurture expressed earlier by Kuo and Lehrman. The central problem with all nativist claims, according to Developmentalists, is a reliance on preformationism (or predeterminism).

The problem with preformationism, they argue, besides issues of evidence, is that it is an anti-developmental mindset. It presumes the existence of the very thing(s) one wishes to explain and, consequently, discourages developmental analyses. To claim that some knowledge is innate effectively shuts down research on the developmental origins of that knowledge. After all, why look for the origins of conceptual knowledge if that knowledge is there all along? Or why search for any experiential contributions to innate behaviors if those behaviors by definition develop independently of experience? In the words of Developmentalists Thelen and Adolph (1992), nativism “leads to a static science, with no principles for understanding change or for confronting the ultimate challenge of development, the source of new forms in structure and function” (p. 378).

A commitment to maturational theory is likely one of the reasons why the study of motor development remained relatively dormant for decades following its heyday in the 1930s–1940s (Thelen, 2000). Likewise, a commitment to maturational theory helps explain why neuroscience was slow to examine how the brain physically changes in response to environmental conditions, a line of inquiry that began only in the 1960s.

In addition to the theoretical pitfalls of nativism, Developmentalists point to numerous studies showing that some seemingly native behaviors and innate constraints on learning are driven by the experiences of animals. For example, the comparative psychologist Gilbert Gottlieb (1971) showed that newly hatched ducklings display a naïve preference for a duck maternal call over a (similarly novel) chicken maternal call, even when duck embryos were repeatedly exposed to the chicken call prior to hatching (Gottlieb, 1991). It would be easy to conclude that ducklings have an innate preference to approach their own species’ call and that they are biologically constrained (contraprepared) in learning a chicken call. However, Gottlieb found that the naïve preference for the duck call stemmed from exposure to the duck embryos’ own (or other) vocalizations in the days before hatching (Gottlieb, 1971). Exposure to these vocalizations not only made duck maternal calls more attractive, but it also hindered the establishment of a preference for heterospecific calls. When duck embryos were reared in the absence of the embryonic vocalizations (by devocalizing embryos in ovo) and exposed instead to chicken maternal calls, the newly hatched ducklings preferred chicken over duck calls (Gottlieb, 1991). These studies showed clearly how seemingly innate, biologically based preferences and constraints on learning derived from prenatal sensory experiences.

For Developmentalists, findings like these suggest that nativist explanations of any given behavior are statements of ignorance about how that behavior actually develops. As Kuo and Lehrman made clear, nativist terms are labels, not explanations. Although such appeals are couched in respectable scientific language (e.g., “X is due to maturation, genes, or heredity”), Developmentalists argue it would be more accurate simply to say that “We don’t know what causes X” or that “X is not due to A, B, or C.” Indeed, for Developmentalists, the more we unpack the complex dynamics of how traits develop, the less likely we are to use labels like nature or nurture (Blumberg, 2005).

On the other hand, Developmentalists recognize that labeling a behavior as “learned” also falls short as an explanatory construct. The empiricist position that knowledge or behavior is learned does not adequately take into account that what is learned, and how easily it is learned, depends on (a) the physiological and developmental status of the person, (b) the nature of the surrounding physical and social context in which learning takes place, and (c) the experiential history of the person. The empiricist tendency to say “X is learned or acquired through experience” can short-circuit developmental analyses in the same way as nativist claims.

Still, Developmentalists appreciate that classifying behaviors can be useful. For example, the development of some behaviors may be more robust, reliably emerging across a range of environments and/or remaining relatively resistant to change, whereas others are more context-specific and malleable. Some preferences for stimuli require direct experience with those stimuli. Other preferences require less obvious (indirect) types of experiences. Likewise, it can still be useful to describe some behaviors in the ways shown in Table 1. Developmentalists simply urge psychologists to resist the temptation to treat these behavioral classifications as implying different kinds of explanations (Johnston, 1987).

Rather than treat nature and nurture as separate developmental sources of causation (see Figure 1), Developmentalists argue that a more productive way of thinking about nature–nurture is to reframe the division as that between product and process (Lickliter & Honeycutt, 2015). The phenotype or structure (one’s genetic, epigenetic, anatomical, physiological, behavioral, and mental profile) of an individual at any given time can be considered one’s “nature.” “Nurture” then refers to the set of processes that generate, maintain, and transform one’s nature (Figure 2). These processes involve the dynamic interplay between phenotypes and environments.

Figure 2. The developmentalist alternative view of nature–nurture as product–process. Developmentalists view nature and nurture not as separate sources of causation in development (see Figure 1) but as a distinction between process (nurture) and product (nature).

Conclusion

It is hard to imagine any set of findings that will end debates about the roles of nature and nurture in human development. Why? First, more so than other assumptions about human development, the nature–nurture dichotomy is deeply entrenched in popular culture and the life sciences. Second, throughout history, the differing positions on nature and nurture were often driven by other ideological, philosophical, and sociopolitical commitments. Thus the essential source of tension in debates about nature–nurture is not as much about research agendas or evidence as about basic differences in metatheoretical positions (epistemological and ontological assumptions) about human behavior and development (Overton, 2006).

References

Amundson, R. (2000). Embryology and evolution 1920–1960: Worlds apart? History and Philosophy of the Life Sciences, 22, 335–352.

Amundson, R. (2005). The changing role of the embryo in evolutionary thought: Roots of evo-devo. New York, NY: Cambridge University Press.

Blumberg, M. S. (2005). Basic instinct: The genesis of novel behavior. New York, NY: Thunder’s Mouth Press.

Breland, K., & Breland, M. (1961). The misbehavior of organisms. American Psychologist, 16, 681–684.

Burkhardt, R. (2005). Patterns of behavior: Konrad Lorenz, Niko Tinbergen and the founding of ethology. Chicago, IL: University of Chicago Press.

Burnham, J. C. (1972). Instinct theory and the German reaction to Weismannism. Journal of the History of Biology, 5, 321–326.

Carmichael, L. (1925). Heredity and environment: Are they antithetical? The Journal of Abnormal and Social Psychology, 20(3), 245–260.

Chomsky, N. (1959). A review of B. F. Skinner’s Verbal behavior. Language, 35, 26–57.

Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Connolly, K. (1972). Learning and the concept of critical periods in infancy. Developmental Medicine & Child Neurology, 14(6), 705–714.

Cook, G. M. (1999). Neo-Lamarckian experimentalism in America: Origins and consequences. Quarterly Review of Biology, 74, 417–437.

Cravens, H., & Burnham, J. C. (1971). Psychology and evolutionary naturalism in American thought, 1890–1940. American Quarterly, 23, 635–657.

Diamond, S. (1971). Gestation of the instinct concept. Journal of the History of the Behavioral Sciences, 7(4), 323–336.

Domjan, M., & Galef, B. G. (1983). Biological constraints on instrumental and classical conditioning: Retrospect and prospect. Animal Learning & Behavior, 11(2), 151–161.

Dunlap, K. (1919). Are there any instincts? Journal of Abnormal Psychology, 14, 307–311.

Galton, F. (1869). Hereditary genius. London, U.K.: Macmillan.

Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4(1), 123–124.

Gottlieb, G. (1971). Development of species identification in birds. Chicago, IL: University of Chicago Press.

Gottlieb, G. (1991). Experiential canalization of behavioral development: Results. Developmental Psychology, 27(1), 35–39.

Gottlieb, G. (1992). Individual development and evolution: The genesis of novel behavior. New York, NY: Oxford University Press.

Gottlieb, G. (2003). On making behavioral genetics truly developmental. Human Development, 46, 337–355.

Gould, J. L., & Marler, P. (1987). Learning by instinct. Scientific American, 256(1), 74–85.

Gould, S. J. (1996). The mismeasure of man (2nd ed.). New York, NY: Norton.

Gray, P. H. (1967). Spalding and his influence on research in developmental behavior. Journal of the History of the Behavioral Sciences, 3, 168–179.

Griffiths, P. E. (2008). Ethology, sociobiology, and evolutionary psychology. In S. Sarkar & A. Plutynski (Eds.), A companion to the philosophy of biology (pp. 393–414). New York, NY: Blackwell.

Harwood, J. (1985). Geneticists and the evolutionary synthesis in interwar Germany. Annals of Science, 42, 279–301.

Herrnstein, R. J. (1972). Nature as nurture: Behaviorism and the instinct doctrine. Behaviorism, 1(1), 23–52.

Heyes, C. (2014). False belief in infancy: A fresh look. Developmental Science, 17(5), 647–659.

Hirsch, J. (1975). Jensenism: The bankruptcy of “science” without scholarship. Educational Theory, 25, 3–27.

Hoffman, H., & Spear, N. E. (1988). Ontogenetic differences in conditioning of an aversion to a gustatory CS with a peripheral US. Behavioral and Neural Biology, 50, 16–23.

Honeycutt, H. (2011). The “enduring mission” of Zing-Yang Kuo to eliminate the nature–nurture dichotomy in psychology. Developmental Psychobiology, 53(4), 331–342.

Jaynes, J., & Woodward, W. (1974). In the shadow of the Enlightenment. II. Reimarus and his theory of drives. Journal of the History of the Behavioral Sciences, 10, 144–159.

Jensen, A. (1969). How much can we boost IQ and scholastic achievement? Harvard Educational Review, 39, 1–123.

Johnston, T. (1981). Contrasting approaches to a theory of learning. Behavioral and Brain Sciences, 4, 125–173.

Johnston, T. (1987). The persistence of dichotomies in the study of behavior. Developmental Review, 7, 149–172.

Johnston, T. (1995). The influence of Weismann’s germ-plasm theory on the distinction between learned and innate behavior. Journal of the History of the Behavioral Sciences, 31, 115–128.

Jones, S. (2017). Can newborn infants imitate? Wiley Interdisciplinary Reviews: Cognitive Science, 8, e1410.

Joseph, J., & Ratner, C. (2013). The fruitless search for genes in psychiatry and psychology: Time to reexamine a paradigm. In S. Krimsky & J. Gruber (Eds.), Genetic explanations: Sense and nonsense (pp. 94–106). Cambridge, MA: Harvard University Press.

Keller, E. F. (2010). The mirage of space between nature and nurture. Durham, NC: Duke University Press.

Kuo, Z. Y. (1930). The genesis of the cat’s response to the rat. Journal of Comparative Psychology, 11, 1–36.

Lehrman, D. S. (1953). A critique of Konrad Lorenz’s theory of instinctive behavior. Quarterly Review of Biology, 28, 337–363.

Lerner, R. (2002). Concepts and theories of human development (3rd ed.). Mahwah, NJ: Erlbaum.

Lickliter, R., & Honeycutt, H. (2015). Biology, development and human systems. In W. Overton & P. C. M. Molenaar (Eds.), Handbook of child psychology and developmental science. Vol. 1: Theory and method (7th ed., pp. 162–207). Hoboken, NJ: Wiley.

Logan, C. A., & Johnston, T. D. (2007). Synthesis and separation in the history of “nature” and “nurture.” Developmental Psychobiology, 49(8), 758–769.

Maher, B. (2008). Personal genomes: The case of the missing heritability. Nature, 456, 18–21.

McGue, M., & Gottesman, I. I. (2015). Behavior genetics. In R. L. Cautin & S. O. Lilienfeld (Eds.), The encyclopedia of clinical psychology (Vol. 1). Chichester, U.K.: Wiley Blackwell.

Miller, G. (2010). The seductive allure of behavioral epigenetics. Science, 329(5987), 24–27.

Moore, D. S. (2015). The developing genome: An introduction to behavioral epigenetics. New York, NY: Oxford University Press.

Morgan, C. L. (1896). Habit and instinct. New York, NY: Edward Arnold.

Müller-Wille, S., & Rheinberger, H.-J. (2012). A cultural history of heredity. Chicago, IL: University of Chicago Press.

Öhman, A., Fredrikson, M., Hugdahl, K., & Rimmö, P. A. (1976). The premise of equipotentiality in human classical conditioning: Conditioned electrodermal responses to potentially phobic stimuli. Journal of Experimental Psychology: General, 105(4), 313–337.

Overton, W. F. (2006). Developmental psychology: Philosophy, concepts, methodology. In R. Lerner (Ed.), Handbook of child psychology: Vol. 1. Theoretical models of human development (pp. 18–88). New York, NY: Wiley.

Oyama, S. (1979). The concept of the sensitive period in developmental studies. Merrill-Palmer Quarterly, 25(2), 83–103.

Piaget, J. (1971). Biology and knowledge: An essay on the relation between organic regulations and cognitive processes. Chicago, IL: University of Chicago Press.

Pinker, S. (1995). The language instinct: How the mind creates language. London, U.K.: Penguin.

Rende, R. D., Plomin, R., & Vandenberg, S. G. (1990). Who discovered the twin method? Behavior Genetics, 20(2), 277–285.

Richards, R. J. (1987). Darwin and the emergence of evolutionary theories of mind and behavior. Chicago, IL: University of Chicago Press.

Robinson, G. E. , & Barron, A. B. (2017). Epigenetics and the evolution of instincts. Science, 356(6333), 26–27.

Samet, J. , & Zaitchick, D. (2017). Innateness and contemporary theories of cognition. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Stanford, CA: Stanford University.

Segerstrale, U. (2000). Defenders of the truth: The battle for science in the sociobiology debate and beyond. New York, NY: Oxford University Press.

Simpson, T., Carruthers, P., Laurence, S., & Stich, S. (2005). Introduction: Nativism past and present. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind: Structure and contents (pp. 3–19). New York, NY: Oxford University Press.

Spelke, E., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10(1), 89–96.

Spencer, J. P., Samuelson, L. K., Blumberg, M. S., McMurray, R., Robinson, S. R., & Tomblin, J. B. (2009). Seeing the world through a third eye: Developmental systems theory looks beyond the nativist-empiricist debate. Child Development Perspectives, 3, 103–105.

Thelen, E. (2000). Motor development as foundation and future of developmental psychology. International Journal of Behavioral Development, 24(4), 385–397.

Thelen, E., & Adolph, K. E. (1992). Arnold L. Gesell: The paradox of nature and nurture. Developmental Psychology, 28(3), 368–380.

Tinbergen, N. (1963). On the aims and methods of ethology. Zeitschrift für Tierpsychologie, 20, 410–433.

Tomasello, M. T. (1995). Language is not an instinct. Cognitive Development, 10, 131–156.

Winther, R. G. (2001). Weismann on germ-plasm variation. Journal of the History of Biology, 34, 517–555.

Witty, P. A., & Lehman, H. C. (1933). The instinct hypothesis versus the maturation hypothesis. Psychological Review, 40(1), 33–59.

Wyman, R. J. (2005). Experimental analysis of nature–nurture. Journal of Experimental Zoology, 303, 415–421.

Zenderland, L. (2001). Measuring minds: Henry Herbert Goddard and the origins of American intelligence testing. New York, NY: Cambridge University Press.