This post is more a meditation than a proper review of economists’ and historians’ interest in how to evaluate the benefits of technology. It is prompted mainly by recent articles by Paul Krugman (in the New York Times) and Matthew Yglesias (at vox.com) about a current state of stagnating or falling productivity in spite of new technologies being produced. The measurement of the benefits of technology in terms of productivity may seem somewhat odd, narrow, or sinister to some, so I thought it would be useful to look at some of the history of thought surrounding these issues to show why progressive economists and policy wonks regard it as a useful measure.
The benefits of medical technologies—ethical and cost-benefit quandaries in certain cases aside—are relatively easy to measure in terms of their ability to improve health and the efficacy of treatments. However, the benefits of other technologies, marvellous or disruptive as they may be, are harder to evaluate. Because new technologies tend to replace older ones and to alter prior modes of working and living, technological change creates both winners and losers, making it difficult to determine whether, on balance, the process is beneficial at all.
Historians often content themselves with this observation. Wishing mainly to complicate vulgar narratives of invention—and especially optimistic, hyperbolic claims about the “world changing” effects that inventions have—historians typically aim to introduce a “social” dimension by annotating histories of technological change with accounts of the positions of opponents as well as proponents, of crucial complementary or inhibitory contexts, and of what economists call negative externalities. These histories typically suggest the possibility of alternative paths not taken, which might include no change at all. The key message is that the course of technological change is strongly influenced by circumstance, by political and economic power, and by cultural values. In fact, spotlighting these contexts, not the technology itself, is often the point of the history. Unfortunately, historians often make little effort to actually assess whether society was in some general sense “better off” after technological change.
A related but distinct position is offered by historian David Edgerton in his 2006 book The Shock of the Old. In Edgerton’s view, far too much attention is paid to invention and its effects, and not enough to the medley of technologies, old and new, that are used in everyday life. I would characterize Edgerton’s position as “economic” insofar as he is interested in assessing technological significance. The trick is that significance tends to inhabit times and places far removed from the circumstances of invention and widespread cultural interest in the technology. Corrugated iron is a highly significant construction material in poor countries. Jet engines are much more significant today than they were in the years when they made a cultural mark—yet there are few accounts of the recent history of the jet engine. Generally, Edgerton wields such examples to one of two ends: to play down the grand-scheme significance of purportedly radical technological shifts, or to play up remarkable historical shifts hidden within the mundanities of patterns of technological use.
Many economic treatments of technology are in fact rather complementary to historians’ treatments. Of course, the historians’ interests more or less descend from Karl Marx’s (1818–1883) analyses of class and capital, which emphasized that the benefits of machinery accrued to its owners, even as those owners used it to deprive labor of its value. In the 1980s and 1990s, this observation inspired a raft of post-Marxist critical accounts of “deskilling.”
A crucial distinction between Marxist and post-Marxist criticism is the former’s economic concern with materialism and the latter’s humanistic concern with personal autonomy. For Marx and his followers, the erosion of worker skill and autonomy was not so much the problem with capitalism as the fact that the (clear) material benefits of capital improvement were not communally distributed.
One of the great pre-World War II thinkers interested in the relations between technology and social benefit was the British communist crystallographer J. D. Bernal (1901–1971). His 1939 book The Social Function of Science was dominated by a survey of technical labor in British society. Although most interest in science at that time focused on its academic manifestations, Bernal drew no essential boundaries between academic and industrial research or between science and technology. In his assessment, most scientific labor was dominated by capitalist and militaristic interests, preventing its (again, clear) benefits from flowing to society. In his view, only a communist society could fully realize the benefits of scientific inquiry and technological change, and, therefore, communist societies were most likely to continue supporting scientific research in the long run. Absent the bits about the necessity of a communist society, this is, of course, very similar to Dwight Eisenhower’s critique of the “military-industrial complex” in his 1961 farewell speech.
Meanwhile, among capitalist economists in the 1920s and especially the 1930s, the benefits of capital improvement were not so clear, and were hotly debated since the introduction of new technologies often seemed to lead to low wages and unemployment. In his 1938 book Full Recovery or Stagnation, Harvard economist Alvin Hansen (1887–1975) argued that, independent of government investment, capital efficiency and low population growth might make a return to full employment and an escape from the Depression impossible.
The idea of “stagnation” contrasted with Hansen’s Harvard colleague Joseph Schumpeter’s (1883–1950) interest in what he called “creative destruction” in his 1942 book Capitalism, Socialism, and Democracy. In Schumpeter’s view, it was true that some skills were made obsolete by technological improvement, but it was also true that improvement created the need for new skills. It was true that people suffered from technological change, but it was also true that many people benefited. In all, Schumpeter shared Marx’s optimism that, on balance, the process of technological change brought material benefit to society, though, for him, the course of innovation was unpredictable, to be discovered through individual initiative, and liable to be stifled by the domination of the economy by entrenched interests such as labor elites.
The reason, then, that labor productivity became an important means of measuring the benefits of technology is that it offers a reasonable way of telling whether material benefits are indeed accruing to society through the implementation of various new technologies.
These economic measures of technological benefit are actually charmingly unromantic. In this conception, “technology” is basically any change in the nature of capital that spontaneously alters a “production function,” i.e., all the various inputs of raw materials, capital, and labor that go into the making of some product that people want to buy. A change in technology/capital may be some mundane improvement that makes it possible to produce one unit of output for $0.37 instead of $0.41 using the same amount of raw material and labor, thereby rendering “labor” more “productive.”
Ostensibly, such technological improvement can benefit society in various ways: the business could use its extra four cents per unit to make further investments; it could pass all or part of the savings on to consumers to undercut competitors (and consumers can spend that extra up-to-four cents on additional goods, thereby building up other industries); it could increase worker wages; or it could retain the savings as profit and pass it on to investors in the form of dividends (who might make additional investments, spend it on goods, or pocket it).
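To make the arithmetic concrete, here is a minimal sketch of the calculation using the hypothetical unit costs from the example above; the labor input per unit (0.01 hours) is an assumption added purely for illustration:

```python
# Worked version of the unit-cost example in the text. The $0.41 -> $0.37
# figures come from the text; the labor input per unit (0.01 hours) is a
# hypothetical assumption added for illustration.

old_cost = 0.41              # cost to produce one unit before the improvement
new_cost = 0.37              # cost after, with labor and raw material unchanged
labor_hours_per_unit = 0.01  # assumed labor input, unchanged by the improvement

savings_per_unit = old_cost - new_cost            # $0.04 per unit
units_per_labor_hour = 1 / labor_hours_per_unit   # 100 units per hour

# The same hour of labor now generates more value: this margin is what can
# be reinvested, passed to consumers, paid out as wages, or kept as profit.
extra_value_per_labor_hour = savings_per_unit * units_per_labor_hour

print(f"${extra_value_per_labor_hour:.2f} extra per labor-hour")
```

The same output from the same labor input, now produced at lower cost, is what registers in the statistics as higher labor productivity.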
Obviously, not all of these outcomes are of equally broad benefit to “society.” The presumption that labor productivity will actually benefit laborers hinges on the assumption that laborers are in a position to demand that the fruits of their increased “productivity” be passed on to them. Recent concern, articulated by economists such as Larry Summers, that we may now be in a state of “secular stagnation,” à la Hansen, has implications for whether laborers will be in a position to benefit from increased productivity. And, of course, if technology is not actually making labor more productive at all, that likewise suppresses the ability of labor to demand that gains in profits be passed on to them.
At this point we are in a position to pull a number of threads together. Generally, our most prominent discourse surrounding contemporary innovation focuses on “Silicon Valley”—iPhones! Everyone’s buying them! Technology! Future! However much we may like this shiny, well-designed technology, however much we may prefer it (to use economists’ terminology) to our old phones, however much businesses may be adopting it, it doesn’t actually seem to be making workers more “productive” in any way measured by dollars and cents.
This is an economic observation, in that the real productivity benefits from information technology seem mainly to have accrued in the 1990s. It is also an Edgertonian observation, in that the disruptive Schumpeterian adoption of “new” technologies—whatever its cultural imprint and whatever its effect on particular businesses—may not actually represent such a radical departure from our old ways of doing things. It is also a Bernalist point, in that innovation seems to be concentrating on the satisfaction of bourgeois amusement, superficial shifts in business practice, and tech company profits rather than making a material difference in the ability of people to be more productive and live better.
However, we might also introduce another Edgertonian point, which is that, for all the cultural concentration on Silicon Valley, that region’s products still only represent a fraction of innovation. Historians may gnash their teeth and rend their garments over utopian West Coast rhetoric for its supposed baleful effect on “the way we think about technology,” but that doesn’t mean that that rhetoric reflects actual business and government policy, or the actual distribution of innovative effort. Lots of technological improvements are being made in other areas that leave far less of a cultural mark, e.g. in materials engineering. We would have to be subscribers to obscure trade magazines to notice them. This is why the point from the Wells Fargo study cited in Yglesias’s piece, that there seems to be no increase in productivity across various economic sectors, is important.
So, when we discuss labor productivity as a means of measuring the benefit of technology, it certainly excludes many of the measures that we might use to assess technology’s ability to increase our satisfaction with our lives, or to exert change for good or ill on our culture and on the way we work. At the same time, it is important not to be dismissive, and to realize that discussions of technology and productivity stem from very old and difficult debates about whether and how “technology” can actually concretely make people’s lives better in terms of their material well being.