When I first started EWP in 2008, I labored under the misapprehension not only that historians of science were interested in the details of arguments taking place in other areas of science studies, but that those details actually played a large part in setting historiographical priorities. In that spirit, I did an eight-part Q&A with Harry Collins and Rob Evans about their new “third wave of science studies” project, Studies of Expertise and Experience (SEE).
I first encountered the early SEE papers when I was finishing up work on my dissertation. Then and now, I have found traditional science studies models to be unhelpful in untangling the methods and sources of legitimacy in the “sciences of policy” that I study. SEE seemed to be closer to the mark. I would not go so far as to say that I have ever been particularly invested in the SEE program, or that I use its ideas actively. It has a lot of components — such as debating the use of “hawk-eye” technology in tennis, or playing “imitation games”, or developing a deep well of analytical concepts — that are beyond the scope of what I do as a historian. However, I do view SEE as compatible with my historical work, and was therefore eager to do a bit of promotion, since I still thought that even if historians were not united by interlinking their research projects, they were at least united by the conceptual concerns that informed their research and writing.
I now believe that historians’ interest in conceptual debates is not actually very deep either. These debates seem to play a vaguely inspirational role, more a matter of footnotes and casual conversation than real engagement. To my mind this isn’t a huge tragedy, because I believe it is a lack of synthetic work rather than a lack of conceptual resources that most constrains historical work today. Still, I remain intrigued by the lack of any real historical component to the SEE program. And so I am currently typing up a paper on this subject to present at the Fifth International Workshop on Studies of Expertise and Experience (SEESHOP5) in Cardiff, the weekend of June 10.
I would argue that history was at the very center of the well-known conceptual programs of the 1970s and ’80s. The inadequacy of prior philosophy and sociology of science to explain the historical record of the actual practice of science, and how it fit into society, was understood to be a serious flaw. A common response, therefore, was to develop theories that could account for the historical record in all of its messy detail, a project which would entail an abandonment of philosophy of science in favor of a reduction to sociological constructivism.
There were two approaches to the historical record that made such a reduction seem possible and desirable. The first was a kind of reduction to a phenomenology of historical action. Since the major talking point of constructivism was that correctness does not speak for itself, historical phenomenology reduced history to the acceptance or rejection of claims, leaving it opaque (and irrelevant) what precise role correctness played vis-à-vis purely social factors in leading to acceptance or rejection. The advantage of this approach was that it acknowledged that correctness had some role to play in the unfolding of history. Bruno Latour’s “Actor-Network Theory” and Andrew Pickering’s “mangle” both espouse some version of this view.
The other approach was a reduction to a historical regression, which is less interested in explaining the course of history itself than in explaining historical actors’ beliefs and actions. Since acceptance or rejection of every claim must, in some sense, be based on confidence or lack thereof in the methods of persuasion used, one can regress through time to discover what prior persuasive means were used to establish that confidence in the first place, and how that confidence was transmitted through time. Historical accounts based on regression are thus purely sociological, and have the advantage of heuristically leading to the discovery of new sociological phenomena (such as gaining confidence in standardized instrumentation) necessary for the successful construction of knowledge. Such accounts tend to be more interested in the origins of ideas and practices than in their use. Harry Collins was a key proponent of sociological regression.
(Most of this follows from my discussion of Schaffer’s 1991 criticism of Latour.)
The key historiographical legacy of the constructivist program is an abiding interest in the foundation of cultures of trust, and the origins of ideas and practices. In the case of the sciences of policy that I study, this has created an expectation that these sciences’ success has hinged on their ability to gain trust in scientific methodology as a basis of political decision by aligning their work with policymakers’ interests, after which point these sciences became politically powerful and academically influential.
This expectation is perhaps 1/4 trivially correct and 3/4 false, or at least misleading. It is trivially correct in the sense that scientist advisers have never claimed to do anything other than further the goals of policymakers, and have relied on policymakers to set legitimate goals. Most advisers worked in private consultation with policymakers, and in general do not seem to have been especially instrumental in maintaining public confidence in policies in any period.
The science studies view is false or misleading in that public facets of these sciences were primarily academic and theoretical (game theory is a good example), wherein purported objectivity or universality of conclusions was based on the idea that conclusions followed from stated axioms. However, such purported qualities would not be taken to imply that theoretical conclusions had normative implications separate from the values informing a policy, nor an automatic applicability to the intricacies (or simplicities!) of real policy problems.
Public testimony and advocacy (whether by advisers or theoreticians), when it occurred, was based on the rule governing all debate, which is that successful arguments make better use of available evidence than competing arguments. Advocates’ arguments were vulnerable to embarrassment through countervailing arguments — and indeed they have been repeatedly embarrassed by journalists and independent experts. Unsurprisingly, though, public debate also often ends in a morass of competing claims. However, this fate for some debates should not distract attention from the way expertise ordinarily works within organizations.
Far from being based on trust in scientific proclamations, most policymaking that I have studied tends to occur in continuous dialogue, wherein policymakers are able to gather expert opinions (including non-technical experts, such as local perspectives) and reconcile them into policies, while experts learn how their conception fits into larger policy frameworks.
Over time, even though policymakers are not necessarily conversant in the finer methodologies of the experts who surround them, they are able to non-arbitrarily judge the nature and utility of expert contributions, even on a case-by-case basis. It seems uncommon for experts to gain an initial trust and then simply be allowed to dictate policy thereafter.
In this way, policymaking and management can themselves be seen as an intellectual pursuit, if not exactly a specialist skill, which historians need to take seriously. Major questions become how different kinds of expertise are identified, called upon, and organized, not whether they have achieved long-term confidence or even dominance. (This bears on my interest in how minor bureaucratic adjustments become cloaked in grand intellectual narratives.)
This vision of expertise in organizations can, I think, be reconciled with the problem of public confidence in that (as near as I can tell) public confidence accrues mainly to policymakers and organizations, who are or are not considered preferable to possible replacements, not to the experts who work for those organizations. If indeed organizations are interested in maintaining public confidence, they will use their experts to achieve results consistent with their claims to operate in the public interest; otherwise they will be vulnerable to embarrassment.
These historical processes of dialogue, ongoing assessment of expert contributions, and organization of expertise fit rather well into SEE frameworks, and clumsily into science studies frameworks emphasizing trust and authority. SEE strikes me not so much as a new way of thinking about expertise as a new language for discussing how expertise has been thought about and dealt with. As such, it should be seen as an aid to thinking about historical ideas concerning expertise rather than as a way of escaping them.
As I say, I’m not heavily invested in SEE, but I’m interested to see what’s going on with the motley crowd that seems to be attracted to it, and whether it is a fluke that history played the role it did in earlier science studies, or whether it’s now a fluke that it seems to play very little role at all.