Its inclusion might have been awkward, because it would have suggested that rates should go as high as 9% when the Fed still had them close to 0%. In subsequent hearings, at least three members of Congress pressed Fed Chairman Jerome Powell to explain the omission. Powell promised that the section would return in his next report. And so it did when the summer edition was published on June 17, though only after the Federal Reserve had begun to catch up with the rules' prescriptions by rapidly raising rates.
As controversies go, the disappearance of a three-page section from a lengthy policy report was a minor one. It got little media coverage. Still, it mattered. It brought to light a decades-old question that is asked most insistently amid runaway inflation: should central banks limit their discretion and set interest rates according to impartial rules?
The search for rules to guide and constrain central banks has a long history. It dates back to the 1930s, when Henry Simons, an American economist, argued that policymakers should try to hold a predetermined price index stable, a novel idea at the time. In the 1960s Milton Friedman called on central banks to increase the money supply by a fixed amount each year. That monetarist rule was influential until the 1980s, when the relationship between the money supply and GDP broke down.
Any discussion of rules today evokes a seminal article written in 1993 by John Taylor, an economist at Stanford University.
In it he presented a simple equation that came to be known as the "Taylor rule." Its only variables were the pace of inflation and the deviation of GDP from its trend path. Plugging them in produced a recommended policy-rate path that, in the late 1980s and early 1990s, was nearly identical to the actual federal funds rate, the Fed's target overnight lending rate. The rule thus seemed to have great explanatory power. Taylor argued that it could help guide central banks toward the right path for rates going forward.
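As a sketch, the original 1993 rule can be written in a few lines of Python. The response coefficients of 0.5 and the 2% values for the equilibrium real rate and the inflation target are Taylor's illustrative choices from his paper, not figures given in this article:

```python
def taylor_rule(inflation, output_gap,
                neutral_real_rate=2.0, inflation_target=2.0):
    """Recommended nominal policy rate under Taylor's 1993 rule.

    inflation and output_gap are in percentage points; the defaults
    (2% neutral real rate, 2% inflation target) are Taylor's
    illustrative values, and both response coefficients are 0.5.
    """
    return (inflation + neutral_real_rate
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# With inflation at target and no output gap, the rule returns
# the neutral nominal rate of 4%:
print(taylor_rule(2.0, 0.0))  # 4.0
```

High inflation pushes the prescription up more than one-for-one: with 8% inflation and a closed output gap, the same function returns 13%, which is how a rule of this family can prescribe rates far above a near-zero policy rate.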
However, just as the Taylor rule began to attract the attention of economists and investors alike, its explanatory power weakened. In the late 1990s the rate it recommended was consistently lower than the federal funds rate. That sparked a cottage industry of academic research on alternative rules, much of it building on Taylor's original ideas. Some put more weight on the GDP gap. Others added inertia, since central banks take time to adjust rates. Another group swapped actual inflation for forecasts, trying to account for the lag between policy actions and economic outcomes. In its reports the Fed usually cites five separate rules.
The appeal of the rules lies in their cold neutrality: they are swayed only by numbers, not by fallible judgments about the economy. Central bankers love to say that their policy decisions depend on data. In practice, they sometimes have a hard time hearing the data when its message is unpleasant, as happened with inflation over the past year. Central bankers found numerous reasons, from the supposedly transitory nature of inflation to the incomplete recovery in the labor market, to delay raising rates. But throughout that time, the set of rules cited by the Fed was unequivocal in its verdict: policy needed to be tightened.
However, the rules are not perfectly neutral. Someone first has to build them, deciding which elements to include and what weights to give them. Nor are they as neat as the convention of calling them "simple monetary-policy rules" implies. They are simple in the sense that they contain relatively few elements. But just as a bunch of simple threads can make a messy knot, the proliferation of simple rules has created a bewildering array of possibilities. For example, the Federal Reserve Bank of Cleveland publishes a quarterly report based on a set of seven rules. Its most recent report indicated that interest rates should be somewhere between 0.6% (according to a rule centered on inflation forecasts) and 8.7% (according to the original Taylor rule), an uncomfortably wide range.
Furthermore, each rule is built on a base of assumptions. These typically include estimates of the long-run unemployment rate and the natural rate of interest (the theoretical rate that supports maximum output without stoking inflation). Modelers must also decide which of a range of inflation indicators to use. Slight changes in any of these elements, common during periods of economic flux, can produce large swings in the rates the rules prescribe. For example, an adjusted version of the Taylor rule based on core inflation would have recommended raising interest rates by 22 percentage points over the past two years (from minus 15%). Slavishly following such guidance would generate extreme volatility.
One possible solution is to combine multiple rules into a single result. The Cleveland Fed does exactly this, taking a simple median of the seven rules it tracks. Using this as a benchmark, Powell and his colleagues should have cautiously started raising rates in the first quarter of 2021 and brought them to about 4% today, more than double their actual level. As a recommendation, that is far more sensible than the conclusion drawn from any individual policy rule.
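A minimal sketch of that median approach: only the 0.6% and 8.7% endpoints below come from the Cleveland Fed report cited above; the five values in between are made up for illustration.

```python
from statistics import median

# Illustrative rate prescriptions (percent) from seven rules.
# 0.6 and 8.7 are the extremes reported in the article; the
# values in between are hypothetical.
prescriptions = [0.6, 2.1, 3.5, 4.0, 4.4, 6.2, 8.7]

# A simple median damps the outliers at either end of the range.
benchmark = median(prescriptions)
print(f"benchmark rate: {benchmark:.1f}%")  # benchmark rate: 4.0%
```

The design choice is deliberate: unlike a mean, the median ignores how extreme the outlying rules are, so one rule prescribing 8.7% cannot drag the benchmark far from the pack.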
Such a median could never replace central banks' analysis of a wide range of data. But there is a big difference between taking the rules seriously and treating them as holy scripture. After the past year's inflation blunders, the rules' good showing earns them a closer look in policy debates. And they certainly deserve more prominence than they currently have: a brief section in monetary-policy reports that the Fed can choose to skip when inconvenient.