Thursday, January 18, 2018

The Global Output Gap Has Closed: What Next?

A decade after the global financial crisis of 2008, the global economy has finally recovered. The Global Economic Prospects 2018 report just published by the World Bank, subtitled "Broad-Based Upturn, but for How Long?", tells the story. 
"The global financial crisis tipped the global economy into a deep recession that affected first the advanced economies but spread—especially with the subsequent collapse of commodity prices—to emerging market and developing economies (EMDEs). Recoveries have been slow, but by 2018 the global economy is expected to return to its potential for the first time in a decade as the global output gap is expected to be closed. This in turn could mean a continued withdrawal by advanced economies of the extraordinary policy accommodation that was provided during the crisis, with important spillovers to EMDEs through trade and financial linkages. ...

"A broad-based cyclical global recovery is underway, aided by a rebound in investment and trade, against the backdrop of benign financing conditions, generally accommodative policies, improved confidence, and the dissipating impact of the earlier commodity price collapse. Global growth is expected to be sustained over the next couple of years—and even accelerate somewhat in emerging market and developing economies (EMDEs) thanks to a rebound in commodity exporters. Although near-term growth could surprise on the upside, the global outlook is still subject to substantial downside risks, including the possibility of financial stress, increased protectionism, and rising geopolitical tensions. Particularly worrying are longer-term risks and challenges associated with subdued productivity and potential growth. With output gaps closing or closed in many countries, supporting aggregate demand with the use of cyclical policies is becoming less of a priority. Focus should now turn to the structural policies needed to boost longer-term productivity and living standards. A combination of improvements in education and health systems; high-quality investment; and labor market, governance, and business climate reforms could yield substantial long-run growth dividends and thus contribute to poverty reduction." 
The report offers considerable detail across countries and regions, for the reader who wants to delve further. But at a time when the global economy is again, at long last, producing near its potential output, it's worth emphasizing that the formula for long-run growth is fairly clear: gains in education and human capital, gains in capital investment, and research and development for gains in technology, all interacting in an economic environment flexible enough to offer meaningful incentives for innovation. The report puts it this way:
"Global productivity growth has slowed over the past two decades. Some of the underlying drivers of this slowdown may fade over time, such as policy uncertainty and crisis legacies. Others, however, are likely to persist: the decline in labor force growth and population aging; a levelling-off of productivity-enhancing innovations in information and communication technologies; and maturing global supply chains. Policies to address these persistent factors include better education for improved learning in aging populations and initiatives to stimulate investment in physical capital and research and development. Other measures, such as regulatory reform and trade liberalization, could raise productivity by reducing informality and increasing competition."
It's also worth remembering that one reason for the current health of the American economy is that US economic growth is being bolstered by growth from the rest of the world. 

Wednesday, January 17, 2018

Snapshots of Economic Inequality Around the World

Compiling data on economic inequality from countries all around the world is a hefty task, which has been shouldered by a group of more than 100 researchers around the world who contribute to the efforts of the World Inequality Lab and the World Wealth and Income Database. The World Inequality Report 2018, written and coordinated by Facundo Alvaredo, Lucas Chancel, Thomas Piketty, Emmanuel Saez, and Gabriel Zucman, provides an overview of their findings. Here are a few of the figures that jumped out at me.

This figure shows the share of income going to the top 10% of the income distribution in a number of prominent countries and regions. Inequality in the US-Canada area (blue line) is clearly rising, but so is inequality across all of these areas. In particular, economic development in China and India has made some parts of those economies much better-off than others, so inequality is on the rise. The rise of inequality in Russia during the 1990s is also apparent.

This is a similar graph, but with a different set of comparison regions. The blue line for the US-Canada area remains the same. But as you can see, inequality in the Middle East, sub-Saharan Africa, and Brazil has long been above US-Canada levels, and by this measure, India has now passed the US level of inequality.
This figure is known as the "elephant graph," because if you squint a little, the bump on the left would trace out the elephant's head, and the upward movement on the right would be the top edge of the elephant's trunk. As the text explains: "On the horizontal axis, the world population is divided into a hundred groups of equal population size and sorted in ascending order from left to right, according to each group's income level. The Top 1% group is divided into ten groups, the richest of these groups is also divided into ten groups, and the very top group is again divided into ten groups of equal population size. The vertical axis shows the total income growth of an average individual in each group between 1980 and 2016."

For example, the figure shows that an adult who was in the 20th percentile of the world income distribution in 2016 had an income about 120% higher than an adult who was in the 20th percentile of the world income distribution in 1980. The "head" of the elephant shows that the gains to those in the 20th-40th percentiles of the world income distribution were substantial. The dip in the middle shows that gains were smaller for those in the 50th-80th percentiles. And on the far right, the top percentile is divided up into smaller slices. The gains for the top percentile as a whole were substantial, but comparable to those in the 20th-40th percentiles. However, the gains for the top 0.01% and the 0.001% were substantially larger. Of course, these groups at the very top are also much smaller, and thus harder to measure, and probably involve more turnover from year to year.
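For readers curious about the mechanics, the construction described in the quoted passage can be mimicked with a few lines of code. Everything below is illustrative: the incomes are simulated log-normal draws, not the actual WID data, and the function simply compares equal-population groups of the sorted distributions from the two years.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic incomes (NOT the WID data): log-normal draws,
# with the later year both richer on average and more unequal.
inc_1980 = np.sort(rng.lognormal(mean=8.0, sigma=1.0, size=100_000))
inc_2016 = np.sort(rng.lognormal(mean=8.5, sigma=1.3, size=100_000))

def growth_by_percentile(old, new, n_groups=100):
    """Average income growth (%) for each of n_groups equal-population
    groups, with each year's distribution sorted in ascending order."""
    old_groups = np.array_split(old, n_groups)
    new_groups = np.array_split(new, n_groups)
    return [100 * (n.mean() / o.mean() - 1)
            for o, n in zip(old_groups, new_groups)]

growth = growth_by_percentile(inc_1980, inc_2016)
# growth[19] is the growth of average income for the 20th percentile group;
# growth[99] is the growth for the top 1% (which the report subdivides further)
print(f"20th percentile group: {growth[19]:.0f}%  top 1% group: {growth[99]:.0f}%")
```

Plotting this list against the group index would trace out a growth-incidence curve of the same general type as the elephant graph.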

Underlying these overall patterns are some shifts in regional economic patterns that are fairly well-known, but remain striking. For example, this figure looks at average incomes in Africa and across Asia, and how they compare to the average world income. In 1950, Africa was well ahead of Asia relative to average world income, but that pattern has dramatically reversed.
As a similar exercise, China lagged far behind Latin America relative to world income back in 1950. But Latin America has underperformed the world economy, and China has outperformed it, and China appears to be on its way to outstripping Latin America in average incomes in the next few years.
This volume is a rich resource, with lots of information on inequality of incomes by country and by region, inequality of wealth, shifts in public wealth, and other topics. The policy discussion is relatively brief (better education, progressive taxation, rethinking labor institutions), but that was fine with me. The fundamental point of this exercise is to generate a common fact base, and then let the policy discussion build upon it.

Monday, January 15, 2018

Some Economics for Martin Luther King Day

On November 2, 1983, President Ronald Reagan signed a law establishing a federal holiday for the birthday of Martin Luther King Jr., to be celebrated each year on the third Monday in January. As the legislation that passed Congress said: "such holiday should serve as a time for Americans to reflect on the principles of racial equality and nonviolent social change espoused by Martin Luther King, Jr." Of course, the case for racial equality stands fundamentally upon principles of justice, not economics. But here are a few economics-related thoughts for the day from the archives:

1) Inequalities of race and gender impose large economic costs on society as a whole, because one consequence of discrimination is that it hinders people in developing and using their talents. In "Equal Opportunity and Economic Growth" (August 20, 2012), I wrote:

A half-century ago, white men dominated the high-skilled occupations in the U.S. economy, while women and minority groups were often barely seen. Unless one holds the antediluvian belief that, say, 95% of all the people who are well-suited to become doctors or lawyers are white men, this situation was an obvious misallocation of social talents. Thus, one might predict that as other groups had more equal opportunities to participate, it would provide a boost to economic growth. Pete Klenow reports the results of some calculations about these connections in "The Allocation of Talent and U.S. Economic Growth," a Policy Brief for the Stanford Institute for Economic Policy Research.

Here's a table that illustrates some of the movement to greater equality of opportunity in the U.S. economy. White men are no longer 85% or more of the managers, doctors, and lawyers, as they were back in 1960. High-skill occupations are defined in the table as "lawyers, doctors, engineers, scientists, architects, mathematicians and executives/managers." The share of white men working in these fields is up by about one-fourth. But the share of white women working in these occupations has more than tripled; of black men, more than quadrupled; of black women, more than octupled.

Moreover, wage gaps for those working in the same occupations have diminished as well. "Over the same time frame, wage gaps within occupations narrowed. Whereas working white women earned 58% less on average than white men in the same occupations in 1960, by 2008 they earned 26% less. Black men earned 38% less than white men in the typical occupation in 1960, but had closed the gap to 15% by 2008. For black women the gap fell from 88% in 1960 to 31% in 2008."

Much can be said about the causes behind these changes, but here, I want to focus on the effect on economic growth. For the purposes of developing a back-of-the-envelope estimate, Klenow builds up a model with some of these assumptions: "Each person possesses general ability (common to all occupations) and ability specific to each occupation (and independent across occupations). All groups (men, women, blacks, whites) have the same distribution of abilities. Each young person knows how much discrimination they would face in any occupation, and the resulting wage they would get in each occupation. When young, people choose an occupation and decide how much to augment their natural ability by investing in human capital specific to their chosen occupation."

With this framework, Klenow can then estimate how much of U.S. growth over the last 50 years or so can be traced to greater equality of opportunity, which encouraged many women and members of minority groups who had the underlying ability to view it as worthwhile to make a greater investment in human capital.

"How much of overall growth in income per worker between 1960 and 2008 in the U.S. can be explained by women and African Americans investing more in human capital and working more in high-skill occupations? Our answer is 15% to 20% ... White men arguably lost around 5% of their earnings, as a result, because they moved into lower skilled occupations than they otherwise would have. But their losses were swamped by the income gains reaped by women and blacks."

At least to me, it is remarkable to consider that 1/6 or 1/5 of total U.S. growth in income per worker may be due to greater economic opportunity. In short, reducing discriminatory barriers isn't just about justice and fairness to individuals; it's also about a stronger U.S. economy that makes better use of the underlying talents of all its members.

2) The black-white wage gap--and the share of the gap that is "unexplained"--is rising, not falling. Here's part of what I wrote in "Breaking Down the Black-White Wage Gap" (September 6, 2017):

Mary C. Daly, Bart Hobijn, and Joseph H. Pedtke set the stage for a more insightful discussion in their short essay, "Disappointing Facts about the Black-White Wage Gap," written as an "Economic Letter" for the Federal Reserve Bank of San Francisco (September 5, 2017, 2017-26). Here are a couple of figures showing the black-white wage gap, and then seeking to explain what share of that gap is associated with differences in state of residence, education, part-time work, industry/occupation, and age. The first figure shows the wage gap for black and white men; the second for black and white women.

Here are some thoughts on these patterns:

1) The black-white wage gap is considerably larger for men (about 25%) than for women (about 15%). Also, the wage gaps seem to have risen since the 1980s.

2) The three biggest factors associated with the wage gap seem to be education level, industry/occupation, and "unexplained."

3) The "unexplained" share is rising over time time. As the authors explain: "Perhaps more troubling is the fact that the growth in this unexplained portion accounts for almost all of the growth in the gaps over time. For example, in 1979 about 8 percentage points of the earnings gap for men was unexplained by readily measurable factors, accounting for over a third of the gap. By 2016, this portion had risen to almost 13 percentage points, just under half of the total earnings gap. A similar pattern holds for black women, who saw the gaps between their wages and those of their white counterparts more than triple over this time to 18 percentage points in 2016, largely due to factors outside of our model. This implies that factors that are harder to measure—such as discrimination, differences in school quality, or differences in career opportunities—are likely to be playing a role in the persistence and widening of these gaps over time." The authors also cite this more detailed research paper with similar findings.

4) In looking at the black-white wage gap for women, it's quite striking that this gap was relatively small back in the 1980s, at only about 5%, and that observable factors like education and industry/occupation explained more than 100% of the wage gap at the time. But as the black-white wage gap for women increased starting in the 1990s, an "unexplained" gap opened up.

5) It is tempting to treat the "unexplained" category as an imperfect but meaningful measure of racial discrimination, but it's wise to be quite cautious about such an interpretation. On one side, the "unexplained" category may overstate discrimination, because the model doesn't include other possible variables that affect wages (for example, one could include previous years of lifetime work experience, length of tenure at a current job, scores on standardized tests, or many other variables). In addition, the variables that are included, like level of education, are measured in broad terms, so it is possible that, say, blacks and whites with a college education are not the same in their skills and background. On the other side, the "unexplained" category could easily understate the level of discrimination. After all, education levels and industry/occupation outcomes don't happen in a vacuum, but are a result of the income, education, and jobs of family members. For this reason, noting that a wage gap is associated with some difference in education or industry/occupation may itself reflect aspects of social discrimination. The kinds of calculations presented here are useful, but they don't offer final answers.
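For readers who want the mechanics, the split between "explained" and "unexplained" in this kind of exercise comes from a regression decomposition, in the Oaxaca-Blinder spirit. Here is a minimal one-variable sketch on synthetic data; the numbers, the single education variable, and the group penalty are my own illustrative assumptions, not the model in the San Francisco Fed paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration (NOT the SF Fed data): log wages depend on years
# of education plus a group penalty that plays the role of the
# "unexplained" component.
n = 10_000
group = rng.integers(0, 2, n)                  # 0 = reference, 1 = comparison
educ = 12 + 2 * (group == 0) + rng.normal(0, 2, n)
log_wage = 1.0 + 0.08 * educ - 0.10 * (group == 1) + rng.normal(0, 0.3, n)

raw_gap = log_wage[group == 0].mean() - log_wage[group == 1].mean()

# "Explained" part: price the education difference at the reference group's
# estimated return to education (a one-variable Oaxaca-Blinder decomposition).
beta = np.polyfit(educ[group == 0], log_wage[group == 0], 1)[0]
explained = beta * (educ[group == 0].mean() - educ[group == 1].mean())
unexplained = raw_gap - explained

print(f"raw gap {raw_gap:.3f} = explained {explained:.3f} "
      f"+ unexplained {unexplained:.3f}")
```

The point of the sketch is only that "unexplained" is a residual: it absorbs everything the included variables don't capture, which is exactly why it can either overstate or understate discrimination.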

In short, the black-white wage gap is rising, not falling. The wage gap is also less associated with basic measures like level of education or industry/occupation than it was before. I can hypothesize a number of explanations for this pattern, but none of my hypotheses are cheerful ones.


3) The patterns in which speeding tickets are given to those just a little over the speed limit can reveal discrimination. I discuss some evidence on this point in "Leniency in Speeding Tickets: Bunching Evidence of Police Bias" (April 5, 2017):

Imagine for a moment the distribution of speed for drivers who are breaking the speed limit. One would expect that a fairly large number of drivers break the speed limit by a small amount, and then a decreasing number of drivers break the speed limit by larger amounts.

But here's the actual distribution of the amount over the speed limit on the roughly 1 million tickets given by about 1,300 officers of the Florida Highway Patrol between 2005 and 2015. The graph is from Felipe Goncalves and Steven Mello, "A Few Bad Apples? Racial Bias in Policing," Princeton University Industrial Relations Section Working Paper #608, March 6, 2017. The left-hand picture shows the distribution of the amount over the speed limit on the speeding tickets given to whites; the right-hand picture shows the distribution of the amount over the speed limit on the speeding tickets given to blacks and Hispanics.

Some observations:

1) Very few tickets are given to those driving only a few miles per hour over the speed limit. Then there is an enormous spike in those given tickets for being about 9 mph over the speed limit. There are also smaller spikes at some higher levels. In Florida, the fine for being 10 mph over the limit is substantially higher (at least $50, depending on the county) compared to the fine for being 9 mph over the limit.

2) The jump at 9 mph is sometimes called a "bunching indicator" and it can be a revealing approach in a number of contexts. For example, if being above or below a certain test score makes you eligible for a certain program or job, and one observes bunching at the relevant test score, it's evidence that the test scores are being manipulated. If being above or below a certain income level affects your eligibility for a certain program, or whether you owe a certain tax, and there is bunching at that income level, it's a sign that income is being manipulated. Real-world data is never completely smooth, and always has some bumps. But the spikes in the figure above are telling you something.

3) Goncalves and Mello note that the spike at 9 mph is higher for whites than for blacks and Hispanics. This suggests that whites are more likely to catch a break from an officer and get the 9 mph ticket. The research in the paper investigates this hypothesis in some detail ...
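The "bunching indicator" described in observation 2 can be mimicked with simulated data: generate a smoothly declining distribution of speeds over the limit, pile extra mass at 9 mph, and compare the count at 9 to a counterfactual built from neighboring speeds. The distribution and the size of the spike below are my own assumptions, not the Florida data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration (NOT the Florida data): a smoothly declining
# distribution of mph over the limit, plus excess mass at 9 mph, just
# below the 10-mph fine threshold.
smooth = rng.geometric(p=0.15, size=50_000)            # declining counts
bunched = np.concatenate([smooth, np.full(8_000, 9)])  # pile on the spike

counts = np.bincount(bunched, minlength=30)

def bunching_excess(counts, point, window=3):
    """Count at `point` minus a counterfactual built from the average
    count at `window` neighboring speeds on each side."""
    neighbors = [counts[point - w] for w in range(1, window + 1)]
    neighbors += [counts[point + w] for w in range(1, window + 1)]
    return counts[point] - np.mean(neighbors)

print(f"excess tickets at 9 mph over: {bunching_excess(counts, 9):.0f}")
```

Comparing this excess-mass statistic across groups of drivers is the same basic move Goncalves and Mello make when they compare the height of the 9 mph spike for whites versus blacks and Hispanics.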

In the big picture, one of the reminders from this research is that bias and discrimination don't always involve doing something overtly negative. In the modern United States, my suspicion is that some of the most prevalent and hardest-to-spot biases just involve not cutting someone an equal break, or not being quite as willing to offer an opportunity that would otherwise have been offered.


4) Many of the communities that suffer the most from crime are also the communities where the law-abiding and the law-breakers both experience a heavy law enforcement presence, and where large numbers of young men end up being incarcerated. Here are some slices of my discussion from "Inequalities of Crime Victimization and Criminal Justice" (May 20, 2016):

And law-abiding people in some communities, many of them predominantly low-income and African-American, can end up facing an emotionally crucifying choice. On one side, crime rates in their community are high, which is a terrible and sometimes tragic and fatal burden on everyday life. On the other side, they are watching a large share of their community, mainly men, becoming involved with the criminal justice system through fines, probation, or incarceration. Although those who are convicted of crimes are the ones who officially bear the costs, in fact the costs when someone needs to pay fines, or can't earn much or any income, or can only be visited by making a trip to a correctional facility are also shared with families, mothers, and children. Magnus Lofstrom and Steven Raphael explore these questions in "Crime, the Criminal Justice System, and Socioeconomic Inequality" in the Spring 2016 issue of the Journal of Economic Perspectives. ...

It's well-known that rates of violent and property crime have fallen substantially in the US over the last 25 years or so. What is less well-recognized is that the biggest reductions in crime have happened in the often predominantly low-income and African-American communities that were most plagued by crime. Lofstrom and Raphael look at crime rates across cities with lower and higher rates of poverty in 1990 and 2008:
"However, the inequality between cities with the highest and lower poverty rates narrows considerably over this 18-year period. Here we observe a narrowing of both the ratio of crime rates as well as the absolute difference. Expressed as a ratio, the 1990 violent crime rate among the cities in the top poverty decile was 15.8 times the rate for the cities in the lowest poverty decile. By 2008, the ratio falls to 11.9. When expressed in levels, in 1990 the violent crime rate in the cities in the upper decile for poverty rates exceeds the violent crime rate in cities in the lowest decile for poverty rates by 1,860 incidents per 100,000. By 2008, the absolute difference in violent crime rates shrinks to 941 per 100,000. We see comparable narrowing in the differences between poorer and less-poor cities in property crime rates. ... "
It remains true that one of the common penalties for being poor in the United States is that you are more likely to live in a neighborhood with a much higher crime rate. But as overall rates of crime have fallen, the inequality of greater vulnerability to crime has diminished.

On the other side of the crime-and-punishment ledger, low-income and African-American men are more likely to end up in the criminal justice system. Lofstrom and Raphael give sources and studies for the statistics: "[N]early one-third of black males born in 2001 will serve prison time at some point in their lives. The comparable figure for Hispanic men is 17 percent ... [F]or African-American men born between 1965 and 1969, 20.5 percent had been to prison by 1999. The comparable figures were 30.2 percent for black men without a college degree and approximately 59 percent for black men without a high school degree."

I'm not someone who sympathizes with or romanticizes those who commit crimes. But economics is about tradeoffs, and imposing costs on those who commit crimes has tradeoffs for the rest of society, too. For example, the cost to taxpayers is on the order of $350 billion per year, which in 2010 broke down as "$113 billion on police, $81 billion on corrections, $76 billion in expenditure by various federal agencies, and $84 billion devoted to combating drug trafficking." The question of whether those costs should be higher or lower, or reallocated between these categories, is a worthy one for economists. ... Lofstrom and Raphael conclude:
"Many of the same low-income predominantly African American communities have disproportionately experienced both the welcome reduction in inequality for crime victims and the less-welcome rise in inequality due to changes in criminal justice sanctioning. While it is tempting to consider whether these two changes in inequality can be weighed and balanced against each other, it seems to us that this temptation should be resisted on both theoretical and practical grounds. On theoretical grounds, the case for reducing inequality of any type is always rooted in claims about fairness and justice. In some situations, several different claims about inequality can be combined into a single scale—for example, when such claims can be monetized or measured in terms of income. But the inequality of the suffering of crime victims is fundamentally different from the inequality of disproportionate criminal justice sanctioning, and cannot be compared on the same scale. In practical terms, while higher rates of incarceration and other criminal justice sanctions may have had some effect in reducing crime back in the 1970s and through the 1980s, there is little evidence to believe that the higher rates have caused the reduction in crime in the last two decades. Thus, it is reasonable to pursue multiple policy goals, both seeking additional reductions in crime and in the continuing inequality of crime victimization and simultaneously seeking to reduce inequality of criminal justice sanctioning. If such policies are carried out sensibly, both kinds of inequality can be reduced without a meaningful tradeoff arising between them."

5) An "audit study" of housing discrimination involves finding pairs of people, giving them similar characteristics (job history, income, married/unmarried, parents/not parents) and sending them off to buy or rent a place to live. In "Audit Studies and Housing Discrimination" (September 21, 2016), I wrote in part:

Cityscape magazine, published by the US Department of Housing and Urban Development three times per year, has a nine-paper symposium on "Housing Discrimination Today" in the third issue of 2015. The lead article by Sun Jung Oh and John Yinger asks: "What Have We Learned From Paired Testing in Housing Markets?" (17: 3, pp. 15-59). ...

There have been four large national-level paired-testing studies of housing discrimination in the US in the last 40 years. "The largest paired-testing studies in the United States are the Housing Market Practices Survey (HMPS) in 1977 and the three Housing Discrimination Studies (HDS1989, HDS2000, and HDS2012) sponsored by the U.S. Department of Housing and Urban Development (HUD)." Each of the studies was spread over several dozen cities. The first three involved about 3,000-4,000 tests; the 2012 study involved more than 8,000 tests. The appendix also lists another 21 studies done in recent decades.

Overall, the findings from the 2012 study find ongoing discrimination against blacks in rental and sales markets for housing. For Hispanics, there appears to be discrimination in rental markets, but not in sales markets. Here's a chart summarizing a number of findings, which also gives a sense of the kind of information collected in these studies.

However, the extent of housing discrimination in 2012 has diminished from previous national-level studies. Oh and Yinger write (citations omitted): "In 1977, Black homeseekers were frequently denied access to advertised units that were available to equally qualified White homeseekers. For instance, one in three Black renters and one in every five Black homebuyers were told that there were no homes available in 1977. In 2012, however, minority renters or homebuyers who called to inquire about advertised homes or apartments were rarely denied appointments that their White counterparts were able to make."

Friday, January 12, 2018

The Problem of Questionable Patents

The theoretical case for patents is clear enough: if you want people and companies to have an incentive for investing money and time in seeking innovations, you need to offer them some assurance that others won't immediately copy any successful discoveries. But with the power of patents comes the risk of gaming the patent system and of patents being granted when the proffered invention is either not new, or obvious, or both. Michael D. Frakes and Melissa F. Wasserman tackle these issues in "Decreasing the Patent Office’s Incentives to Grant Invalid Patents" (Hamilton Project Policy Proposal 2017-17, December 2017). Also, Jay Shambaugh, Ryan Nunn, and Becca Portman offer some useful background information in "Eleven Facts about Innovation and Patents" (Hamilton Project, December 2017).

The Shambaugh, Nunn, and Portman paper offers a few background figures on patents that, as you look at them, can raise your eyebrows a bit. The background here is that the three main patent-granting agencies in the world--the US Patent and Trademark Office, the Japanese Patent Office, and the European Patent Office--are sometimes referred to as the Trilateral Patent Offices. The usual belief is that "compared to the USPTO, the JPO and EPO are believed to apply stricter scrutiny to applications." Getting a patent from all three of these offices is called a "triadic" patent, and the number of triadic patents is sometimes used as a measure of quality. Now consider a couple of comparisons.

The number of patent applications in the US has more-or-less doubled since 2000. Over that time, the number of patent applications in Japan has dropped by one-quarter, while the number in Europe has risen by about 50%. One possible interpretation of this pattern is that the US economy is in the grip of a massive wave of innovation far outstripping Japan and Europe, which may foretell a productivity boom for the US economy. An alternative interpretation is that it's so much easier to apply for a patent in the US, and to have a patent granted, that the US Patent Office is attracting lots of low-quality and invalid patent applications, and some of those are sneaking through the system to receive actual patents.

Here's a figure that poses a similar question. This graph shows the share of GDP spent on research and development on the horizontal axis. The vertical axis is a measure of the number of "high-quality" patents, which in this figure refers to an innovation that is patented in at least two of the three Trilateral Patent Offices. The US level of R&D spending is a bit below that of Germany and Japan, but similar. However, when measured in terms of high-quality patents filed, the US lags well behind. Again, this could mean that US firms aren't bothering to apply for European and Japanese protection for all their great patents. Or it could be a signal that the rise in US patents includes a greater share of low-quality or even invalid patents than in Japan and Europe.
Frakes and Wasserman lay out how the US Patent Office works in greater detail, in a way that for me sharpens these concerns. For example, they write (citations omitted):
"There is an abundance of anecdotal evidence that patent examiners are given insufficient time to adequately review patent applications. On average, a U.S. patent examiner spends only 19 hours reviewing an application, including reading the application, searching for prior art, comparing the prior art with the application, and (in the case of a rejection) writing a rejection, responding to the patent applicant’s arguments, and often conducting an interview with the applicant’s attorney. Because patent applications are legally presumed to comply with the statutory patentability requirements when filed, the burden of proving unpatentability rests with the Agency. That is, a patent examiner who does not explicitly set forth reasons why the application fails to meet the patentability standards must then grant the patent." 
The US Patent Office is funded by the fees it collects, which fall into several categories, as Frakes and Wasserman explain:
"The overwhelming majority of Patent Office costs are attributed to reviewing and examining applications. To help cover these expenses, the Agency charges examination fees to applicants. These fees fail to cover even half of the Agency’s examination costs, however. To make up for this deficiency, the Agency relies heavily on two additional fees that are collected only in the event that a patent is granted: (1) issuance fees, paid at the time a patent is granted; and (2) renewal fees, paid periodically over the lifetime of an issued patent as a condition of the patent remaining enforceable. Combined with examination fees, these fees account for nearly all of the Patent Office’s revenue. ... In fiscal year 2016 the Patent Office estimated that the average cost of examining a patent application was about $4,200 . The examination fee that year was set at only $1,600 for large for-profit corporations; at $800 for individuals, small firms, nonprofit corporations, or other enterprises that qualify for small-entity status; and at $400 for individuals, small firms, nonprofit corporations, or other enterprises that qualify for micro-entity status."
An obvious concern is that because the US Patent Office relies heavily on fees collected only after a patent is granted, it has a financial incentive to grant more patents. Indeed, Frakes and Wasserman cite studies showing that when the Patent Office faces financial troubles, it tends to grant more patents. 

An additional concern is that the US Patent Office doesn't really reject patents, at least not permanently, because applicants can apply repeatedly. "Considering that about 40 percent of the applications filed in fiscal year 2016 are repeat applications (up from 11 percent in 1980), a substantial percentage of the Patent Office’s backlog can be attributed to its inability to definitively reject applications." To put it another way, a patent applicant can just keep applying until the application is assigned to a less-experienced examiner during a budget crunch, improving the odds that the patent will eventually be granted.

With these thoughts in mind, Frakes and Wasserman offer some practical solutions, which include: 1) increase patent examination fees and abolish "issuance" fees, to reduce the financial incentive to grant patents; 2) limit repeat applications, perhaps by charging higher fees; 3) give patent examiners more time (and charge higher fees to support that additional time as needed).

But the key economic insight behind these proposals and others is that in an economy whose future is based on innovation and technology, granting a substantial number of patents that should not have been allowed carries important costs. As Frakes and Wasserman write: 
"Although patents encourage innovation by helping inventors to recoup their research and development expenses, this comes at a cost—consumers pay higher prices and have less access to the patented invention. Although society can accept such consequences for a properly issued patent, an invalid patent imposes these costs on society without providing the commensurate benefits from additional innovation because, by definition, an invalid patent is one issued for an existing technology or an obvious technological advancement. Invalid patents provide no innovative benefit to society because the public already possessed the patented inventions.
"In addition to this harm, erroneously issued patents can stunt innovation and competition. Competitors might forgo research and development in areas covered by improperly issued patents to minimize the risk of expensive and time-consuming  litigation. There is growing empirical evidence that invalid patents can increase so-called patent thickets—dense webs of overlapping patent rights—that in turn raise the cost of licensing and complicate business planning. Because a firm needs a license to all of the patents that cover its products, other firms can use questionable patents to opportunistically extract licensing fees. There is mounting evidence that nonpracticing entities—commonly known as patent trolls—use patents of questionable validity to assert frivolous lawsuits and extract licensing revenue from innovative firms. Invalid patents can also undermine the business relations of market entrants because customers might be deterred from transacting with a company out of fear of a contributory patent infringement suit. Finally, erroneously issued patents can inhibit the ability of start-ups to obtain venture capital, especially if a dominant player in the market holds the patent in question."
For some other thoughts on the economics of patents, the interested reader might check: 

Thursday, January 11, 2018

Does Retirement Raise the Risk of Death?

"Social Security eligibility begins at age 62, and approximately one third of Americans immediately claim at that age. We examine whether age 62 is associated with a discontinuous change in aggregate mortality, a key measure of population health. Using mortality data that covers the entire U.S. population and includes exact dates of birth and death, we document a robust two percent increase in male mortality immediately after age 62. The change in female mortality is smaller and imprecisely estimated. Additional analysis suggests that the increase in male mortality is connected to retirement from the labor force and associated lifestyle changes."
This is a technical research paper that will only be fully accessible to the initiated, but you can get a good flavor of the results from a couple of figures. A first figure shows the patterns of claiming Social Security. There are various rules about the age at which different kinds of benefits can be claimed. For example, Social Security disability benefits can be claimed earlier, but access to what most people think of as the usual Social Security benefits starts at 62. The figure shows the step-change or discontinuity in the number of people retiring at 62, followed by a slower rise in claiming benefits and a smaller jump at age 65.
There's also a step-change discontinuity in the death rate at age 62--but only for men, not for women. Mortality rates increase with age. As this figure shows, the mortality rate for women before and after age 62 is close to a smooth line. But for men, there's a jump. 
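The kind of discontinuity the authors document can be illustrated with a minimal regression-style sketch on synthetic data (the numbers below are invented for illustration and are not the paper's data or method):

```python
import numpy as np

# Synthetic monthly male mortality rates around age 62, built with a smooth
# upward trend plus a 2% discontinuous jump at age 62 (illustrative only).
ages = np.arange(58.0, 66.0, 1 / 12)
trend = 0.010 * np.exp(0.07 * (ages - 58))   # mortality rising smoothly with age
jump = np.where(ages >= 62, 1.02, 1.0)       # the 2% step at age 62
rng = np.random.default_rng(0)
mortality = trend * jump * (1 + rng.normal(0, 0.001, ages.size))

# Fit log mortality on age and a post-62 indicator; the indicator's
# coefficient recovers the size of the jump at the threshold.
post = (ages >= 62).astype(float)
X = np.column_stack([np.ones_like(ages), ages - 62, post])
beta, *_ = np.linalg.lstsq(X, np.log(mortality), rcond=None)
print(f"estimated jump at 62: {100 * (np.exp(beta[2]) - 1):.1f}%")  # roughly 2%
```

Detecting a jump of this size against a smoothly rising trend is exactly what the exact dates of birth and death in the authors' data make possible.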
The authors dig into data on behavioral and economic patterns to see if they can find an underlying reason for this difference. Proving cause-and-effect here is very difficult, as the authors admit, but some patterns do emerge in the data. For example, there doesn't seem to be a gender gap in how income or health insurance coverage shifts at age 62. However, one difference is that men are more likely than women to stop working for pay when they start claiming Social Security. Men are more likely to start or increase smoking (even if they have never smoked before) and to become more sedentary. Men at age 62 also see a rise in deaths due to chronic obstructive pulmonary disease, lung cancer, and traffic accidents.

In short, if you are retiring, take up a habit other than smoking. And if you know someone who is retiring, invite them for a walk and give them a hug. 

Wednesday, January 10, 2018

Measuring the "Free" Digital Economy

The digital economy provides a number of services for which the marginal price (given an internet connection) is zero: games like Candy Crush, email, web searches, access to information and entertainment, and many more. Because users are not paying an additional price for using these services, this form of economic output doesn't seem to be captured by conventional economic statistics. Leonard Nakamura, Jon Samuels, and Rachel Soloveichik offer some ways of thinking about the question in "Measuring the `Free' Digital Economy within the GDP and Productivity Accounts," written for the Economic Statistics Centre of Excellence, an independent UK research center funded by Britain's Office of National Statistics (December 2017, ESCoE Discussion Paper 2017-3).

Essentially, they propose that the economic value of "free" content can be measured by the marketing and advertising revenue that it generates. In other words, you "pay" for "free" content not with money, but by selling a slice of your attention to advertising. Thus, their approach is a practical application of the saying: "If you're not paying for it, you're the product." They write:
”Free” digital content is pervasive. Yet, unlike the majority of output produced by the private business sector, many facets of the digital economy (e.g., Google, Facebook, Candy Crush) are provided without a market transaction between the final user of the content and the producer of the content. ... Furthermore, because these technologies are so pervasive and have induced large changes in consumer behavior and business practice, these open questions have evolved into arguments that the exclusion of these technologies from the national accounts leads to a significant downward bias in official estimates of growth and productivity.
The first contribution of this paper is to provide an argument that, yes, it is possible to measure many aspects of the ”free” digital economy via the lens of a production account. ... To be clear at the outset, this approach does not provide a willingness to pay or welfare valuation of the “free” content. But this approach does provide an estimate of the value of the content that is consistent with national accounting estimates of production.
We model the provision of “free” content as a barter transaction. Consumers and businesses receive content in exchange for exposure to advertising or marketing. Our approach reduces to treating the provision of the ”free” digital content as payment in kind for viewership services produced by households and businesses. Put differently, the national accounts currently ignore the role of households in the production of advertising and marketing. In our methodology, households are active producers of viewership services that they barter for consumer entertainment. ...
We focus on two types of ”free” content: advertising‐supported media and marketing‐supported information. Advertising‐supported media includes digital content like Google search, but also more traditional content like print media and broadcast television. Marketing‐supported information includes digital content like so‐called freemium games for smartphones or recipes from, but also more traditional content like print newsletters and audiovisual marketing. Conceptually, the barter transaction between the producer and user of “free” information is nearly identical to that with advertising‐supported media. The main difference is that advertising viewership is almost exclusively ”purchased” by media companies from the general public and then resold to outside companies. In  contrast, the marketing viewership that is exchanged for “free” information is generally ”purchased” by nonmedia companies from potential customers and used in‐house.
A number of interesting insights emerge from this approach. Here's a figure showing total US advertising spending over time as a share of GDP. Total advertising revenue has been fairly stable over time, with the abrupt fall in print advertising being mostly offset by a rise in digital advertising. 

This figure shows total expenditures on marketing over time as a share of GDP. In this case, spending on print marketing has declined, but because of rising expenditures on digital marketing, total spending on marketing has risen by more than 1% of GDP in the last 20 years. 

Overall, measuring the value of the "free" digital economy has relatively little effect on output or on trends in total factor productivity (TFP). They write (citations and footnotes omitted):
"We are particularly interested in the analysis of “free” digital content beginning in 1995 because that year has been previously identified as an inflection point in the production of information technology (IT) equipment. Moreover, that is when the Internet emerged as a significant source of ”free” content. We calculate that, from 1995 to 2014, our experimental methodology applied to digital content annually raises nominal GDP growth by 0.036 percentage point, real GDP growth by 0.089 percentage point, and TFP growth by 0.048 percentage point. The growth of digital content is partially offset by a decrease in ”free” print content like newspapers. From 1995 to 2014, all “free” content categories together annually raise nominal GDP growth by 0.033 percentage point, raise real GDP growth by 0.080 percentage point, and raise TFP growth by 0.073 percentage point.  ...  These revised numbers slightly ameliorate the recent slowdown in economic growth—but not nearly enough to reverse the slowdown."
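To get a sense of scale, the annual growth adjustments quoted above can be compounded over the 1995-2014 window (a back-of-envelope sketch, not a calculation from the paper):

```python
# Compound the paper's annual adjustments over 1995-2014 (20 years) to see
# the cumulative effect on the level of output. Figures are from the quote
# above; the compounding itself is my own back-of-envelope arithmetic.
years = 20
real_gdp_adj = 0.00080   # +0.080 percentage point to annual real GDP growth
tfp_adj = 0.00073        # +0.073 percentage point to annual TFP growth

cum_gdp = (1 + real_gdp_adj) ** years - 1
cum_tfp = (1 + tfp_adj) ** years - 1
print(f"cumulative real GDP level effect: {cum_gdp:.1%}")  # about 1.6%
print(f"cumulative TFP level effect: {cum_tfp:.1%}")       # about 1.5%
```

A cumulative level effect on real GDP of roughly 1.6 percent over two decades is real but modest, which is consistent with the authors' conclusion that the adjustment cannot reverse the growth slowdown.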
This analysis seems broadly sensible and correct to me: for a previous argument along similar lines, see  "How Well Does GDP Measure the Digital Economy?" (July 19, 2016). But it comes with a warning that applies to all discussions of economic output, and is recognized repeatedly by the authors here.

GDP is measured by the monetary value of what is bought and sold, but it doesn't measure consumer welfare (or "happiness" or "utility") in a direct way. Thus, it's possible that even if the gains to GDP from including "free" digital services are relatively small, perhaps those small gains are increasing consumer welfare and happiness by a much larger amount. Of course, one can make a similar argument that the monetary value of certain other outputs, from broadcast television back in the 1960s and 1970s, or the availability of aspirin, is a lot less than the consumer welfare generated by these products. Measuring "the economy" is an exercise of adding up sales receipts, while thinking about benefits and costs of economic patterns (as has long been recognized) is a much broader exercise. 

Tuesday, January 9, 2018

The State of Play with Carbon Capture and Storage

Carbon capture and storage technology isn't likely to be the silver bullet that slays climate change by itself. But it may well be a necessary and meaningful part of the package of policy responses. Akshat Rathi has written a series of readable articles for Quartz magazine (listed here) that give a useful sense of the state of the technology in this area, and its real-but-limited potential.

For example, in an overview article on December 4, 2017, "Humanity’s fight against climate change is failing. One technology can change that," Rathi notes that before starting a year of research and writing on the topic, he was skeptical that carbon capture and storage could be cost-effective. However, he also notes that this technology may be both necessary and possible.

On the issue of necessity, Rathi writes: "The foremost authority on the matter, the Intergovernmental Panel on Climate Change, has modeled hundreds of possible futures to find economically optimal paths to achieving these goals, which require the world to bring emissions down to zero by around 2060. In virtually every IPCC model, carbon capture is absolutely essential—no matter what else we do to mitigate climate change." (Rathi and David Yanovsky offer an interactive game to drive home this point here.)

On the issue of possibility, the evidence is scattered, and still more at the proof-of-concept stage than at a full-fledged and ongoing industry. But some of these fledgling projects are intriguing. For example, Rathi discusses an operation in Iceland (discussed in more detail in an earlier article) which uses geothermal heat to capture carbon dioxide and inject it underground in a location where it combines with minerals to form solid rock--an operation which is an overall net subtraction of carbon from the atmosphere. Rathi notes:
"Since 2014, the plant has been extracting heat from underground, capturing the carbon dioxide released in the process, mixing it with water, and injecting it back down beneath the earth, about 700 meters (2,300 ft) deep. The carbon dioxide in the water reacts with the minerals at that depth to form rock, where it stays trapped. ... In other words, Hellisheidi is now a zero-emissions plant that turns a greenhouse gas to stone. ... Critics laughed at those pursuing a moonshot in “direct-air capture” only a decade ago. Now Climeworks is one of three startups—along with Carbon Engineering in Canada and Global Thermostat in the US—to have shown the technology is feasible. The Hellisheidi carbon-sucking machine is the second Climeworks has installed in 2017. If it continues to find the money, the startup hopes its installations will capture as much as 1% of annual global emissions by 2025, sequestering about 400 million metric tons of carbon dioxide per year."
In another article, Rathi discusses a plant near Houston which generates electricity from natural gas, in "A radical startup has invented the world’s first zero-emissions fossil-fuel power plant" (December 5, 2017). The process involves using "supercritical" carbon dioxide, under high temperatures and pressures. He writes: 
"In the end, the Allam cycle is only slightly more efficient than typical combined-cycle systems. But it has the major added benefit of capturing all potential carbon dioxide emissions essentially for free. ... Beyond the greenhouse-gas effect, carbon dioxide has some fascinating properties. At high pressure and temperature, for instance, it enters a state of matter where it’s neither a gas nor a liquid but has properties of both. It’s called a “supercritical fluid.” If you’ve ever had decaf coffee, you’ve likely been an unwitting customer of supercritical carbon dioxide, which is often used to extract caffeine from coffee beans with minimal changes to the taste."
China, which leads the countries of the world in carbon emissions, has been experimenting with carbon capture and storage, without yet making a strong commitment to the technology, as Rathi explains here (and a 2015 report from the Asia Development Bank discusses here).

There are a variety of new projects and possible innovations either to capture carbon from emissions at lower cost, or to turn carbon dioxide into solids like soda ash, and other approaches. Carbon capture and storage isn't yet a proven large-scale technology, but it's a promising one.

For some previous posts on this topic, with links to various reports and articles, see: