
The proliferation and efflorescence of indicators


from Ken Zimmerman (originally a comment)

In the early 20th century, the emergence of the United States as a global force, unparalleled and nearly unrivaled in material might, catapulted obscure economic indicators into central, abiding elements of national life. No “man on the street” would have given a thought to gross national product or national income in the 1920s or at any point before then, and not simply because those numbers didn’t exist. People wouldn’t have thought of their nation or their society or their own lives in terms of the collective material production of their country. And they would not have marked success or failure by a series of indicators.

That changed markedly after World War II, for two reasons. Though the basic contours of unemployment, GDP, and inflation were formed in the 1930s, not until after the war did they coalesce into simple, straightforward statistics that could be tracked, issued, and debated on a regular, ongoing basis. In essence, the numbers were invented in the 1930s but marketed only after 1945. And with a few leading indicators in hand, people went about doing what they always do: they invented more.

Marketing was crucial; without it, the numbers might have remained useful but obscure. The proliferation of indicators after the war was driven in part by government, in part by industry, and in part by media outlets such as Luce’s Time and Fortune, along with Business Week and newspapers across the country. Private indicators were developed by professional associations such as the Institute for Supply Management, academic institutions such as the University of Michigan, and nonprofit organizations such as the Conference Board. Meanwhile, the Census Bureau, various Federal Reserve bank branches, the Bureau of Labor Statistics, the Bureau of Economic Analysis, and others continued their statistical work, and the indicators evolved in scope, scale, and complexity. The final element was the vast proliferation of information collected, collated, and analyzed by newly created global agencies such as the United Nations and the World Bank, all of which developed a hunger verging on a fetish for data and statistics that has lasted to this day.

But it wasn’t simply the efflorescence of indicators that mattered. It was their movement to the center of American culture and of societies throughout the world. The years after the war saw these indicators go global, thanks to the adamant efforts of the United States to demonstrate its superiority. They went global as well because the emerging international community needed common standards and common metrics to assess whether the world was on a constructive economic trajectory. For many in these years, the world war that had consumed so many lives was seen as a product of stagnant, bankrupt economic philosophies and systems. Preventing a new collapse and ensuring that people everywhere were each year better able to meet their material needs was understood as central to peace and security.

These forces combined to create what we now know as “the economy.” Before there were metrics and indicators, “the economy” didn’t exist. No, economic activity wasn’t invented in the middle of the twentieth century. But this thing called “the economy” was. Until the 1940s, there was no “the economy.” People did not use the term, and they had only just begun to think of the material affairs of a nation as a coherent and cohesive subject that could be defined, measured, and tracked over time. Look up “the economy” on Google’s Ngram Viewer, which charts how often a word or phrase appears across the millions of books Google has digitized. Until the mid-1930s, the term “the economy” appeared hardly at all; after that, its usage soared. In short, the leading indicators invented “the economy.” That invention includes inflation.
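
For the curious, the chart is easy to reproduce. The sketch below queries the JSON endpoint that the Ngram Viewer page itself calls; the endpoint is undocumented and unsupported, so the URL, parameter names, and corpus label here are assumptions inferred from the viewer’s behavior and may change without notice.

    # A minimal sketch: fetch the frequency of "the economy" from the
    # unofficial JSON endpoint behind Google's Ngram Viewer. This is not
    # a published API, so treat every parameter here as an assumption.
    import requests

    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": "the economy",
            "year_start": 1900,
            "year_end": 2000,
            "corpus": "en-2019",  # assumed corpus label
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()

    for series in resp.json():
        for year, freq in zip(range(1900, 2001), series["timeseries"]):
            if year % 10 == 0:  # one reading per decade shows the jump
                print(f"{series['ngram']!r} {year}: {freq:.3e}")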

Evolving as it did when it did, the economy as a concept is inseparable from the nation-state. Prior to the later decades of the twentieth century, you could make a strong argument that the preponderance of economic activity was contained within national boundaries, even if those were porous, and that while the nation-state as the primary economic unit raised issues, it was better than no unit. Economics as a discipline emerged in the nineteenth century just as the nation-state did; hence, economists took the state as a closed system that defined this thing called “the economy.” Well into the late twentieth century, economic activity was deeply constrained by geographic borders and by the efforts of governments to control what went on inside those borders. Currency rates were zealously guarded by governments, and gold acted as a common denominator to value those currencies against one another. Tariffs and taxes, even if they were relaxed in the name of free trade, were barriers that states erected to defend themselves or keep out competition. Central banks were (and in most respects still are) national entities established by national governments to manage national economic affairs and the printing of national money.

So for economists and statisticians in the first half of the twentieth century, it made some sense to define “the economy” as a closed system that began and ended with national borders. Yes, the state wasn’t the only economic unit, and while it suited economists to model reality as if states were hermetically sealed units, there were also trade and flows of money and people. Nonetheless, economists developed theories that treated a national economy as a closed system that would always, in the end, settle into equilibrium. Classical economic theory dictated that the forces of supply and demand govern economic life and that the two eventually find their balance. Prices, wages, production: all would balance out in the end. That is why the system of accounts developed by Kuznets and others had two sides of a ledger, production and income, inputs and outputs, each of which must precisely match the other. It was a purely scientific approach to an economy, like a sealed vacuum chamber in which no matter can be destroyed and none created.
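
In stylized textbook form (not Kuznets’s own notation), the two sides of that ledger are the familiar accounting identity: every dollar of expenditure on final output is simultaneously a dollar of income to someone.

    \[
    \underbrace{C + I + G + (X - M)}_{\text{expenditure side}}
    \;=\; \text{GDP} \;=\;
    \underbrace{\text{wages} + \text{profits} + \text{rents} + \text{interest} + \text{taxes less subsidies}}_{\text{income side}}
    \]

Here C is household consumption, I investment, G government purchases, and X - M net exports. In practice the two sides are measured from different data sources and reconciled with a statistical discrepancy, but by construction they describe the same total.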

The invention of “the economy,” with numbers to measure it, was a vast improvement over what prevailed before. It gave governments a way to measure and assess what was happening and a means to test whether policies were doing good or causing harm. It gave society a greater degree of confidence (sometimes false) that people need not be at the mercy of mysterious and dangerous economic storms. The overdependence on indicators that evolved over the final decades of the twentieth century and into the twenty-first is a core issue, but that isn’t an indictment of the creation of these indicators in the first place. To repeat an earlier analogy, the tools of the Renaissance navigators, the astrolabe and rudimentary compasses, were a far cry from the high-tech systems of today and in many cases were woefully inaccurate and even fatal to the brave seamen who relied on them. But compared with navigating by the stars, sun, and moon as the only reference points across thousands of miles of uncharted ocean, they were invaluable. The same is true of the national indicators invented in the early and mid-twentieth century; they were an unequivocal improvement over what came before.

Few questioned that there was high inflation in the 1970s, but many wondered both what was generating it and how high it actually was. Similarly, many today believe inflation is rising, and the same two questions remain. The first question was one of economic theory; the second, one of how the statistic called “inflation” was calculated. And if those issues weren’t intractable enough, there was an additional twist: throughout the 1970s, the agency responsible for calculating prices, the Bureau of Labor Statistics (BLS), was engaged in a multiyear internal debate about whether it was overstating inflation. The result, ultimately, was a new formula which showed, not surprisingly, that inflation wasn’t quite as high as had been thought, nor as severe as people experienced it in their everyday lives.
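
The BLS’s actual revisions were more involved than this, but a stylized example shows how the choice of formula alone can move the number. The geometric mean of price relatives is never higher than the arithmetic mean, so switching averaging formulas mechanically lowers measured inflation; the prices below are invented purely for illustration.

    # Stylized illustration (invented data): how the averaging formula
    # alone changes a measured inflation rate. The geometric mean of
    # price relatives is always <= the arithmetic mean.
    from math import prod

    # Year-over-year price relatives (new price / old price) for five items.
    relatives = [1.12, 1.08, 0.97, 1.15, 1.05]

    arithmetic = sum(relatives) / len(relatives)
    geometric = prod(relatives) ** (1 / len(relatives))

    print(f"arithmetic-mean inflation: {arithmetic - 1:.2%}")  # ~7.40%
    print(f"geometric-mean inflation:  {geometric - 1:.2%}")   # ~7.22%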

There were two forces at work: one was what that proverbial man on the street experienced when he bought groceries, or a car, or filled his tank with gas. The other was what the consumer price index said each month. The former was the lived experience of prices; the latter was a statistic, an indicator, that we call “inflation.”

Like the other leading metrics discussed so far, “inflation” was a product of the early twentieth century. It emerged somewhat earlier than national accounts and just a tad after unemployment statistics came to the fore in the days of Ethelbert Stewart. The modern concept of inflation was an outgrowth of government efforts to measure prices, which stemmed from the same Progressive impulse to assess whether the industrial system was allowing most citizens to meet their basic needs. There had been a few initial efforts to measure prices in the late nineteenth century, and the BLS had done some preliminary surveys of prices in a few cities in 1907 and again in 1912. Then in 1916 the reform-minded BLS commissioner Royal Meeker authorized a survey of the expenditures of more than two thousand families in the District of Columbia in order to answer a simple question: “What does it cost the American family to live?” That, in turn, led to the first official “cost of living” index, published in 1918. Refinements to the index, however, were slow to evolve.

By the mid-1930s, the BLS was still using the basic methodology of 1917: quarterly surveys of a basket of consumer goods taken in cities across the country. Though the surveys are now more complex and cover more areas of life, they remain the basic methodology for computing inflation today, alongside dozens of unofficial inflation estimates based on no particular methodology.
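
In stylized form, the basket methodology is a fixed-weight (Laspeyres-type) price index: cost out the same basket at base-period prices and at today’s prices, then take the ratio. The basket and prices below are invented for illustration, not BLS data, and the real CPI adds layers of sampling, weighting, and quality adjustment on top of this core ratio.

    # Stylized fixed-basket (Laspeyres-type) price index with invented data.

    # Quantities fixed at base-period consumption.
    basket = {"bread": 120, "gasoline": 500, "rent": 12}

    base_prices = {"bread": 1.50, "gasoline": 0.60, "rent": 250.0}
    current_prices = {"bread": 1.80, "gasoline": 0.75, "rent": 280.0}

    def basket_cost(prices, basket):
        """Cost of the fixed basket at the given prices."""
        return sum(prices[item] * qty for item, qty in basket.items())

    index = 100 * basket_cost(current_prices, basket) / basket_cost(base_prices, basket)

    print(f"CPI (base = 100): {index:.1f}")          # 113.5
    print(f"Inflation since base: {index - 100:.1f}%")  # 13.5%

A cost-of-living adjustment pegged to this index would then scale a wage or benefit by the same 13.5 percent, which is why small methodological choices in the index ripple so widely.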

No number has been the subject of greater controversy and antagonism than the consumer price index (CPI), which is the source of the official inflation statistic. The CPI was the direct successor to prewar cost-of-living indices and was christened in 1945 as the “consumer price index for moderate income families in large cities.” Ever since, it has engendered dark conspiracy theories about government officials purposely understating the cost of living in order to pay citizens lower Social Security benefits and allow corporations to underpay workers. By the early twenty-first century, the CPI affected the government benefits of nearly eighty million people. And given that cost-of-living adjustments in wages and benefits are often pegged to inflation, the CPI may be the leading indicator that most directly affects our everyday lives.

It was never intended to carry such weight. Speaking in 1952, the deputy commissioner of the BLS responded to criticisms of the agency’s work and placed them in the context of a vastly changed landscape. People were beginning to use indicators in ways that few professional statisticians or economists had anticipated, and, as we saw, the sudden ubiquity of these numbers and the way they were being disseminated in popular culture transformed what had been modest indices for use by government or academics into social, political, and cultural touchstones. He warned that the statistical profession was “scarcely prepared, and certainly not organized, to meet the serious responsibilities placed upon us by the new use of statistics.” If both government and private compilers were to retain credibility, they had to be rigorous about methods and responsive to critics.

Based on:

Zachary Karabell, The Leading Indicators: A Short History of the Numbers That Rule Our World.

Andrew L. Yarrow, Measuring America: How Economic Growth Came to Define American Greatness in the Late Twentieth Century.
