In his 1932 lecture, “The Bearing of Genetics on Theories of Evolution,” R. A. Fisher compared the fissures between different scientific techniques to God’s confounding of languages in the Biblical legend of the Tower of Babel. If the fissures in scientific method were assumed to hold back the construction of an “edifice” of scientific knowledge, much as the division of languages prevented the construction of the Tower of Babel, then the obvious question was how method could be reunited. According to Fisher,
If we were to ask … what universal language could enable men of science to understand each other sufficiently well for effective co-operation, I submit that there can be only one answer. If we could select a group of men of science, completely purge their minds of all knowledge of language, and allow them time to develop the means of conveying to one another their scientific ideas, I have no doubt whatever that the only successful medium they could devise would be that ancient system of logic and deductive reasoning first perfected by the Greeks, and which we know as Mathematics.
As we saw in Part 1, the bulk of Fisher’s statistical theorization was dedicated to the problem of inductive reasoning, that is, the development of defined conclusions from well-structured observations. But it is clear that Fisher also valued deductive uses of mathematics, because they permitted different observational conclusions to be related to one another through a fully coherent language. It is just not clear what he understood the epistemological status or function of deductive knowledge to be.
For Fisher, genetics provided a clear logic of the process of inheritance, which could guide experimentation on the inheritance of traits in plants and livestock. However, as he had shown in The Genetical Theory of Natural Selection (1930), mathematical reasoning could also be used to demonstrate that gradual processes of evolution were a logical consequence of the principles of Mendelian inheritance and of natural selection operating together.
Natural selection, however, operated on natural populations and therefore could not be subjected to the same sorts of experimental tests as Mendelian inheritance. This raises the question of what deductive reasoning could and could not say about natural populations, and how that might relate to empirical observations of those populations.
It turns out there is a decent amount of scholarship on this subject. Fisher’s approach to genetic theory was notoriously abstract. He himself compared it to the kinetic theory of gases in physics. Whether this approach is ultimately valid is a question that has engendered debate from American geneticist Sewall Wright’s (1889-1988) criticisms of Fisher to the work of present-day biologists and philosophers of biology.* We turn now to the work of the philosophers.
University of Bristol postdoc David Crawford has recently suggested that Fisher misappropriated ideas from Ludwig Boltzmann’s (1844-1906) statistical mechanics. However, other recent studies of the long-running “beanbag genetics” controversy (originating with the Wright-Fisher dispute and continuing with criticisms by Ernst Mayr and others) have tended to worry less about the physics link, viewing it as a limited metaphor, and have instead emphasized a history of conceptual confusion concerning the content and intent of Fisher’s argument. Willem De Winter’s “The Beanbag Genetics Controversy: Towards a Synthesis of Opposing Views of Natural Selection,” Biology and Philosophy 12 (1997): 149-184, outlines the history of the controversy, and defends Fisher’s theory on the grounds that critics have read his understanding of the action of genes overly narrowly, and that, like any theory, it is necessarily limited to a certain range of validity and application.
Probably the most pertinent article in this genre for our purposes is Anya Plutynski’s “What was Fisher’s fundamental theorem of natural selection and what was it for?” Studies in History and Philosophy of Biological and Biomedical Sciences 37 (2006): 59-82. Like De Winter, she argues that Fisher’s deductions need to be understood in terms of their epistemic status, which she elucidates. She quotes from a letter Fisher wrote to biologist Julian Huxley (1887-1975) in 1930, which hints at his views of that status, and which is (in a version slightly extended from her excerpt) nicely evocative:
The importance which you and [J. B. S.] Haldane attach to [The Genetical Theory of Natural Selection] — and there are no two opinions in this country to which I would attach more weight — gives me much pleasure, but not a little embarrassment, for if I had so large an aim as to write an important book on Evolution, I should have had to attempt an account of very much work about which I am not really qualified to give a useful opinion. As it is there is surprisingly little in the whole book that would not stand if the world had been created in 4004 B.C., and my primary job is to try to give an account of what Natural Selection must be doing, even if it had never done anything of much account until now.
An important question is what Fisher meant by his underlined “must”. His book (as he also urged in his preface) was about natural selection abstracted from evolution. Plutynski clarifies that his specific aim “to bring the kind of generality and rigor he found in statistical thermodynamics to the biological sciences, was … importantly constrained by the contingency of the biological world.” His abstract theorem was an “exact statement, even if only of what must be the case for evolution to go forward—a condition of adequacy, if not a universal law” (my emphasis). Thus, she notes:
Fisher’s derivation requires that we abstract away from the details of actual populations. His theorem assumes no mutation, fixed fitness values, no fertility differences, and no geographic structure to populations. Of course, no population meets these conditions. Nevertheless, Fisher’s abstraction enables one to understand the fundamental relationship between additive variance in fitness and rate of increase in fitness.
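For readers who want the bare statement being abstracted here, the fundamental theorem is nowadays usually rendered in something like the form

\[ \frac{d\bar{m}}{dt} = \mathrm{Var}_A(m), \]

that is, the rate of increase of mean fitness (Fisher’s “Malthusian parameter” \( m \)) equals the additive genetic variance in fitness. The notation is a modern textbook convention rather than Fisher’s own, but it expresses the “fundamental relationship” Plutynski refers to, and it holds only under the idealizing assumptions she lists.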
In her article, “Modelling Populations: Pearson and Fisher on Mendelism and Biometry,” British Journal for the Philosophy of Science 53 (2002): 39-68, Margaret Morrison offers a similar perspective in examining Fisher’s methodological departure from Karl Pearson (1857-1936), which allowed him to accept Mendelism. Specifically, Fisher posited that biometrically measured quantities could be explained in terms of the action of an indefinite number of Mendelian factors:
…for Fisher this degree of idealisation was essential to guarantee his method, and hence the legitimacy of his conclusions. In other words, he could escape the difficulties associated with detailed Mendelian analyses [of relations between genes and inherited traits] by focusing on general principles. But in order to do this it was necessary that he assume a large number of factors in order to establish statistically the generality and validity of the principles. In the way that one can have knowledge about the properties of gases without detailed knowledge of the molecules and atoms that make up the gas, one could have knowledge of how a population would evolve without knowing the details of the heredity of all individual characters.
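The idealisation Morrison describes can be glossed, in modern notation that is emphatically not Fisher’s own, roughly as follows. Suppose a biometrically measured trait \( y \) is built up from a very large number \( n \) of Mendelian factors, each of small effect, plus a non-genetic residual:

\[ y = \mu + \sum_{i=1}^{n} a_i x_i + e, \]

where \( x_i \) counts the alleles an individual carries at factor \( i \) and \( a_i \) is that factor’s small contribution. As \( n \) grows large, the sum becomes approximately normally distributed, so a discrete Mendelian mechanism can underwrite the continuous, bell-shaped distributions of biometry without any detailed knowledge of the individual factors.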
Effectively, this idealization allowed Fisher to reconcile continuous evolution with Mendelian inheritance in principle. According to Plutynski, recognizing that Fisher’s scientific contributions hinged on the deductive development of principles requires meditation on what it means to make scientific progress:
Progress in science, on most philosophical accounts, is concerned primarily with the generation and testing of novel hypotheses… Fisher was not a good Popperian in the sense that his significant contributions were not experimental. Rather, his genius was in developing mathematical models and following through with the consequences of these models. My view is that this lack of fit of Fisher’s contributions with the standard picture of science is a failure of the standard view of science, rather than a failure of Fisher’s contributions.
Plutynski might also have noted, given Fisher’s ideologically tinged stress on induction as the only legitimate path to new knowledge, that Fisher was violating not only Karl Popper’s later conception of valid science, but (evidently) his own.
How, then, did deduction contribute to empirical science? Apparently, deduced principles constituted schemes for verifying the plausibility and conceptual features of hereditary and evolutionary phenomena, as well as for describing the necessary conditions for their existence. Fisher’s emphasis on genetic variety in a population as lending itself to selective advantage, the evolutionary importance of sex,** and the importance he ascribed to the selection of dominant genes all constitute examples of such phenomena, which could be investigated empirically.
So ostensibly does the “Fisherian runaway”. But, as Mary Bartley observes in her article, “Conflicts in Human Progress: Sexual Selection and the Fisherian ‘Runaway’,” British Journal for the History of Science 27 (1994): 177-196, Fisher’s invocation of sexual selection as a means by which certain features propagate rapidly through a population was not statistically derived. Indeed, she points out that runaway phenomena (specifically, the selection of low fertility in leading classes) were elemental to his account of civilizational decay. This notion, she notes, was present in Fisher’s eugenic thinking well before he developed his statistical reconciliation of genetics with natural selection. Subsequently, the concept of the runaway disappeared, only to be revived and formalized in the 1970s with the rise of animal behavior studies and sociobiology. According to Bartley, the “runaway is undoubtedly more popular today than it ever was in Fisher’s lifetime,” though she notes that, unsurprisingly, “the eugenic content of the theory is removed.”
Much more generally, Francisco Louçã (an economist and politician who came in fifth in the 2006 Portuguese presidential election—you really can’t make this stuff up) has argued in his “Emancipation through Interaction—How Eugenics and Statistics Converged and Diverged,” Journal of the History of Biology 42 (2009): 649-684, that the eugenic project was a major driver of statistical methodology from Francis Galton through Fisher. Louçã does not note the differences between Fisher’s inductive and deductive statistics, and so does not make a direct contribution to the methodology issue we are concerned about here. But his point is worth noting since he does discuss the transfer of Fisher’s statistical methods into econometrics through Harold Hotelling (1895-1973), who was one of Fisher’s many visitors at Rothamsted, and Tjalling Koopmans (1910-1985).
The history of economics stemming from this development is very much about the status deductive reasoning can have in developing knowledge about the real economy. Much as economists began to use regression analysis while simultaneously devoting an unprecedentedly large effort to deductive economics, Fisher developed methods, and concepts such as “variance”, for undertaking empirical studies of uncontrolled populations at the same time as he was developing his deductive theories. As James Tabery has shown in his paper, “R. A. Fisher, Lancelot Hogben, and the Origin(s) of Genotype-Environment Interaction,” Journal of the History of Biology 41 (2008): 717-761, in the 1930s Fisher became deeply concerned with developing means of disentangling and establishing the relations between genetic and environmental contributions to biometrically measured traits. This was, of course, elemental to the nature vs. nurture problem in eugenics, but Tabery also links Fisher’s engagement with this problem to the problems of experimentation at Rothamsted, where environmental factors could not be assumed to average out statistically.
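To fix ideas (the notation here is the later textbook convention, not a formula drawn from Tabery or quoted from Fisher), the problem can be stated in the analysis-of-variance terms Fisher pioneered: the phenotypic variance of a trait is partitioned into genetic and environmental components plus a possible interaction term,

\[ V_P = V_G + V_E + V_{G \times E}, \]

and whether that interaction term can be isolated, estimated, or safely neglected is precisely what was at stake in the genotype-environment interaction debate.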
But that still doesn’t explain what Fisher himself thought about the validity and function of purely deductive results.
—
Clearly there is quite a lot going on here, and I’m not even going to try to offer a conclusion. Historians and philosophers have both done a lot of work, especially in recent years, on various pieces of the methodological puzzle without putting them all together. What does seem clear, even from this whirlwind review, is that the historical actors themselves didn’t resolve the Tower-of-Babel problem in any complete or well-articulated way. Thus, the challenge for the historian is to figure out how the actors grappled with it in their separate ways.
This post mainly deals with recent contributions, but one also shouldn’t neglect the work of figures such as William Provine, who did a lot of the basic spadework in this area. Nor should one neglect important contexts, such as that developed in Sharon Kingsland’s 1985 book (2nd ed. 1995), Modeling Nature: Episodes in the History of Population Ecology, which discusses the broader history of the deductive modeling of natural populations. I would also point to the two-volume 1987 collection, The Probabilistic Revolution, as an important step in attempting to gain a broad, synthetic understanding of these issues. Vol. 2 contains essays dealing with the present subject matter by M. J. S. (Jonathan) Hodge and John R. G. Turner, both of the University of Leeds, and both, like Provine, longstanding scholars of this area.
As always with a quick overview undertaken by an outsider to a body of scholarship, there is a likelihood of misrepresenting or neglecting an important scholar or work. For me, the appeal of doing something like this is to attempt to develop a personal first-order understanding of an incredibly rich area, and hopefully to help others do so as well. Naturally, please offer any corrections or additions in the comments.
—
*See Robert A. Skipper, Jr., “The Persistence of the R. A. Fisher-Sewall Wright Controversy,” Biology and Philosophy 17 (2002): 341-367.
**See Susan M. Mooney, “H. J. Muller and R. A. Fisher on the Evolutionary Significance of Sex,” Journal of the History of Biology 28 (1995): 133-149.