When I first looked at the articles on the sciences in North America that BJHS has made available for free, one exercise that occurred to me was to draw some links between the pieces. Perrin Selcer’s piece on “standardizing wounds” in World War I (“the scientific management of life in the First World War”), my piece on World War II operations research, Jeff Hughes’ review of recent a-bomb literature, and Jamie Cohen-Cole’s piece on Harvard’s Center for Cognitive Studies could conceivably be linked fairly easily.
On closer inspection, this exercise seemed less useful than I had initially imagined. While it might be possible to connect World War I-era wound treatment standardization to OR’s contributions to military planning, or OR to cognitive science (via decision theory), or, to go down another branch, to connect the treatment standardization issue to Christer Nordlund’s piece on hormone therapies creating improved people, it might not be wise. It would be easy to wave one’s hands around about standardization, flexibility, knowledge, and large-scale practice, but the historical picture produced would, it seems to me, be more misleading than helpful.
Though these papers might all work their way into an edited volume with a sufficiently vague theme, none of them was written with the others in mind (for obvious reasons). Importantly, though, even if the papers were more closely related, it would still be difficult to forge connections between them, because each paper’s sense of its raison d’être produces a writing style that does not lend itself to making strong historical connections.
Cohen-Cole’s paper is a nice example. In it, there are hints of links with the broader history of cognitive science—reaction to behaviorism, plus a little discussion of Chomsky’s linguistics and psycho-physiological experimentation—but the main concern is clearly to understand the historical social and intellectual dynamics underlying the Center’s history, interpreted as a move from an “interdisciplinarity” model of “cross-disciplinary” work (where novel tools are created through disciplinary exchange) to a more stagnant “multidisciplinarity” model (where multiple disciplines work in parallel on a common problem). As an institutional history, it should be read with Hunter Heyck’s very nice (and, I believe, prize-winning) 2006 Isis article, “Patrons of the Revolution: Ideals and Institutions in Postwar Behavioral Science”. In the end, though, Cohen-Cole is more interested in satisfying the epistemic imperative than in really forging strong connections with other bits of history, i.e. the operative problematic is sociological rather than historiographical (or chronological, if you will).
As I say, this is true of all the papers mentioned here, and, in fact, all share a concern with how the authority of their subjects’ claims came to be accepted, and with the creation of spheres of practice. Cohen-Cole does this by depicting interdisciplinarity as a means the Center for Cognitive Studies could use to foster a significant sphere for the exchange of ideas about the mind. Selcer and Nordlund both offer “scientists promised to revolutionize the world with their proposed techniques, but it didn’t happen” narratives of the failure to create overly ambitious spheres of practice. My piece denies that OR had much to do with the question of science-military spheres of practice, but is concerned with the issue all the same.
My intent in offering a null argument on the subject, incidentally, was to use the “dog that didn’t bark” principle to try to recharacterize and recontextualize wartime OR as something pretty akin to military doctrine-building (a topic, I later learned, championed for 50 years by the military historian of technology I. B. Holley), and to detach it somewhat from more obviously “scientific” things like the design engineering of fire control devices and later command-and-control systems. The paper gestured in this direction in the conclusion, but didn’t do much with it (though this idea did pay dividends when I hit up some British archives the spring after the paper was accepted in January ’06).
One further exercise: looking at the last sentence or two of each paper.
Selcer: “Carrel’s claim to the technical expertise to govern society was founded upon a metaphor. In mistaking a powerful analogy for concrete knowledge, he claimed an impossible power: the power to see an atom and a star and everything in between in a single glance.”
Nordlund: “The new physiology’s hormone therapy never became the agent of positive social reform which Berman and many other researchers had anticipated. Whether or not the new biology’s gene therapy will become such an agent remains to be seen.”
Cohen-Cole: “Because of the ethos that equated interdisciplinary tool exchange with creativity, the Center’s eventual multidisciplinary research culture represented a failure that was produced in part by the Center’s own productivity.”
Thomas: “If we question what we mean by science and scientific method in relation to the military, what has been seen as the rise of a new monolithic paradigm suddenly fractures and becomes part of a long-standing string of debates in both the military and science about how good our metaphors and models really are. These are debates about the strength and integrity of bodies of knowledge and about what is well understood and what has remained hitherto unknown.”
My ending here is embarrassingly grandiose (and, as my book manuscript exhibits, I haven’t yet kicked the habit!), but I will obnoxiously allow myself to toot my own horn on a couple of points, because I think they are important (this is, after all, the purpose of publishing in journals, is it not?).
First, my piece is the only one of these (excepting, obviously, Hughes’ review) that lectures historians rather than the scientists who are its subject. Where the other pieces (and they are in the professional mainstream here) deploy the wisdom of the science studies professions to decode their subjects’ historically naive understandings of how spheres of scientific practice form and collapse (via failure to understand the limits of “metaphors”, or the fragility of their defining “ethos”, or the complexity of the social change they sought), I am concerned that preoccupation with spheres of practice has in some cases led us to get history wrong.
Second, though I don’t really do it in the piece, I do want to refocus attention from the contents of the scientific work onto the contents of the “long-standing string of debates”, rather than onto whatever sociological phenomenon we might take the work to illustrate. It is a strange feature of the historiography that historians identify historically interesting topics by the controversies that develop around them, and then proceed to demonstrate how the scientists promoting their work were naive because they failed to avoid controversy, without really granting the historical critics credit for setting the terms on which the controversy developed; their criticisms are treated as merely presaging our own theoretical ideas.
On bad days it feels like the history of science profession can be boiled down to the catch-phrase: “Science: Useful, But Don’t Believe the Hype!” The historical case example becomes just a tool used to make this point. It may be that we feel that, in any audience larger than four or five people, no one is interested in hearing a more localized message, which is why the best conversations always seem to take place somewhere other than in public fora.