History and Historiography of Science

Sociology, Science Indexing, and Science Indicators in the ’60s and ’70s

Simultaneously reading a recent Guardian article on open-access scientific publication and Robert K. Merton’s “The Sociology of Science: An Episodic Memoir” (in The Sociology of Science in Europe, 1977, pp. 3–141) spurred me to wonder whether science studies could help scientists transition to a new model of scientific publication that is up to date with technology, but that also retains the intellectual and institutional virtues of present models. My answer to this question is: probably not.

The thought occurred to me because of Merton’s consideration of whether his 1940s-era understanding of how and why scientific credit is assigned could have led to the establishment of something like the Science Citation Index (SCI) prior to its actual appearance in 1963.  Merton speculated on why it didn’t, but he also noted a growing contact in the 1960s and ’70s between historians and sociologists of science, the publication-indexing enterprise, and the rising tide of “science indicators”.  He reckoned this contact would grow as both the sociology of science and science metrics matured.  Unfortunately, the 1970s actually seems to have been its high-water mark.

As early as his 1942 article* on the “norms” of science, Merton had observed that the “communal character of science” was “institutionalized” in the “pressure” to publish work, and in the “correlative obligation” to reference the sources to which one is intellectually indebted.  The incentives and sanctions for conformity to these practices “were required for science to work as ‘part of the public domain…'”  According to the Merton of 1977, these points supplied the “substantive characteristics of science required for the invention and application of a research tool that is largely specific to the history and sociology of science: the tool of the citation index and the correlative method of citation analysis” (48–49, his emphasis).

For Merton, “citation analysis” was one means (alongside, for instance, “content analysis” and prosopography) of investigating how the more intricate social and institutional structures of science bore upon the patterns of its cognitive development.  Yet sociologists did not invent the citation index: “Absent was the specific idea of devising a method for systematically identifying the observable outputs of scientists who were obliged to specify the sources of knowledge they drew upon…” Also, remarkably, “Absent was the basic perception … that there had evolved, long ago, a device for publicly and routinely acknowledging such intellectual debts in the form of reference and citation.”  More prosaically, the computer technology necessary for “generating and processing large batches of [then-rapidly-proliferating] citation data,” and for “working with complex models of linkages, both contemporaneous and genealogical,” had not been available (50–51).
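(For anyone who has not thought about the mechanics: the device Merton describes is, at bottom, an inversion of reference lists.  Instead of asking what a given paper cites, you ask what cites a given paper, and you need machinery to perform that inversion, and to count and trace linkages, at scale.  The short Python sketch below, with invented paper identifiers and toy data, is only meant to illustrate the idea, not anything Garfield actually built.)

    # Toy illustration of the inversion at the heart of a citation index:
    # reference lists (paper -> works it cites) are flipped into an index
    # (work -> papers that cite it).  All identifiers here are invented.
    from collections import defaultdict

    references = {
        "Smith 1961": ["Merton 1942", "Price 1951"],
        "Jones 1962": ["Merton 1942", "Smith 1961"],
        "Brown 1963": ["Smith 1961"],
    }

    # Invert: for each cited work, collect the papers that cite it.
    citation_index = defaultdict(list)
    for citing_paper, cited_works in references.items():
        for cited in cited_works:
            citation_index[cited].append(citing_paper)

    # Query the index the way the SCI was meant to be queried: given a work,
    # find the later literature that refers back to it.
    print(citation_index["Merton 1942"])   # ['Smith 1961', 'Jones 1962']

    # The crudest form of "citation analysis": ranking works by raw citation
    # counts, exactly the sort of number Garfield warned (below) should not be
    # mistaken for importance.
    counts = {work: len(citers) for work, citers in citation_index.items()}
    print(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))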


The actual motivation for and path to the SCI were sketched out in 1955 articles by William Adair and “documentation consultant” Eugene Garfield, who ran with the idea.  (Garfield’s website has links to many articles about his work — here’s the pdf of “Citation Indexes for Science” (Science, 1955).)  The SCI was ultimately realized because of its utility for bibliographic research in science, and, Merton noted, could never have developed merely to serve the needs of a small body of sociologists and historians.

Nevertheless, its potential for sociological and historical study was seized upon by sociologists, notably Merton, as well as others.  Computer scientist and cognitive psychologist Allen Newell (of all people) wrote to Garfield in 1962, “Citation indexing will generate a spate of empirical work on the sociology of science….”  He then went on:

It is rather easy to predict, I think, that the publication and wide availability of an extensive citation index will have strong social consequences along the line of becoming a controlling variable for the advancement and employment of scientific personnel….  It makes little difference whether one likes this or not.

In the 1963 article† in which Garfield reported this correspondence, he himself remarked that quantitative distillations of citation data, such as the establishment of “critical impact factors”, could easily be turned to “promiscuous and careless” uses in tasks such as those Newell pointed to, an “abuse” of the resource.  He warned (Garfield 1963, pp. 44–45, his emphasis):

Impact is not the same as importance or significance…. Such quantitative data can have significance for the historian who can carefully evaluate all the data available. Surely the history of science must record the controversial as well as the non-controversial figure. However, the mere ranking by numbers of citations or the number of papers published is no way to arrive at objective criteria of importance.

Merton agreed, also noting the need for further assessment of the SCI’s coverage of international science, and for study of how different fields abide by different traditions of publication, such as the use of monographs not indexed in the SCI.

He then went on to discuss the rise of quantitative measures of science.  He highlighted Derek Price’s advocacy, to historians of science in the early 1950s, of quantitative studies of scientific work; how the “deafening silence of colleagues in the history of science [at that time] was scarcely calculated to reinforce Price’s continued efforts along the same lines”; and how, allied with sociologists, the work gained momentum in the 1960s and ’70s in areas such as the “demography of scientific research” (56).  (One of the best things about Merton ’77, by the way, is his barely restrained annoyance with historians’ intolerance of methodological innovation, which he permits himself to vent in small bursts of snark.)

Finally, Merton discussed the National Science Board’s early-1970s production of “science indicators,” which could have major consequences.  As Science Indicators 1972 explained (57):

The ultimate goal of this effort is a set of indices which would reveal the strengths and weaknesses of U.S. science and technology, in terms of the capacity and performance of the enterprise in contributing to national objectives.  If such indicators can be developed over the coming years, they should assist in improving the allocation and management of resources for science and technology, and in guiding the Nation’s research and development along paths most rewarding for our society.

Merton noted that, unusually, the report flagged “the backward state of the art” in science indicators, thus undercutting its own immediate utility.  In view of its limitations, in 1974 the Social Science Research Council and the Center for Advanced Study in the Behavioral Sciences at Stanford convened a conference to address problems in the creation of science indicators, which “was attended by scholars in the history, politics, economics, philosophy, and sociology of science”.

That conference produced a volume, Toward a Metric of Science: The Advent of Science Indicators (1978), edited by Merton, historian and philosopher Yehuda Elkana, molecular biologist Joshua Lederberg, historian Arnold Thackray, and sociologist Harriet Zuckerman, and including authors such as Price, Garfield, and historian Gerald Holton (whose chapter was entitled “Can Science Be Measured?” — see the full table of contents and the highly reflective introduction here (pdf)).  It also led to the establishment of a group dedicated to the further study of science indicators.

(Thackray, by the way, seems to have had an interesting role in all this; there is an interview he conducted with Garfield in 1987 on Garfield’s website.)

This brought events up to Merton’s present, and he seemed fairly optimistic that things would continue to cohere going forward, if only because of their inevitability (59):

All apart from considerations of the applicability of science indicators for science policy, such indicators, usually tacit though occasionally explicit, find their way into every perspective on science, whether it be historical, sociological, economic, political, or eclectically humanistic.  In the end, these diverse perspectives, caught up in the social studies of science, are all variously needed to form the kind of science policy that can engage our respect.  Although apt development of these perspectives will of course not induce a consensus of policy, it can clarify the implications of clashes over policy.

So, what the hell happened?

In the early 1960s the founder of the Institute for Scientific Information (ISI) explicitly assigned to historians of science the task of establishing the relationship between admittedly naive indicators and more nuanced perspectives on scientific “importance”.  Yet historians and their colleagues seem to have passed on this opportunity to grapple with the difficult question of how one might properly analyze the scale and complexity of recent science.  Meanwhile, ISI has morphed into the Thomson Reuters science information behemoth, which now peddles science indicators as a product you can buy.

In the 1970s, the conceptual and logistical resources of historians and sociologists of science and of the science citation and indexing communities seem to have been at least close to parity.  But even then, Merton’s aspirations for contributing to citation indexing and the production of science indicators appear to have been too much to handle.  Since that time, the population of the historical and sociological communities has expanded a great deal, but I don’t know that we have kept hold of the conceptual and technical tools necessary to engage with an industry as complex and mammoth as science information and science publishing are today.  The path away from the margins is, I think, a hard and steep one.

*Robert K. Merton, “Science and Technology in a Democratic Order,” Journal of Legal and Political Sociology 1 (1942): 115–126, later republished as “The Normative Structure of Science,” in The Sociology of Science: Theoretical and Empirical Investigations (1973).

†Eugene Garfield, “Citation Indexes in Sociological and Historical Research,” American Documentation 14 (1963): 289–291 (pdf).