Monday, August 21, 2017

The Next Inflection Point for Aging and Social Security

It was apparent back in the 1970s that something needed to be done about Social Security. A presidential commission under Alan Greenspan proposed a combination of phased-in tax increases and benefit cuts that were enacted into law in 1983. The changes were sufficient so that the date at which the Social Security trust fund would run out of money was moved back from 1983 (!) into the 2030s. My purpose here is neither to praise nor to critique that report and legislation from more than three decades ago. It is to point out that the 2030s are a lot more visible in the headlights now than they were back in 1983, and we need one more round of legislation to ensure that Social Security will remain solvent.

A few figures from the 2017 Annual Report of the Board of Trustees of the Federal Old-Age and Survivors Insurance and Federal Disability Insurance Trust Funds (released July 13, 2017) tell much of the underlying story. The figure shows the system's income and costs. One result of the legislation following the Greenspan commission report was that for several decades Social Security collected more in taxes than it paid out in benefits, with the surpluses invested in US Treasury debt and deposited in a trust fund. But back in 2009, the trust fund started to shrink rather than grow. Up through 2034, on the current projections, annual benefits paid out can be larger than the annual income of the system, because the trust fund covers the difference. But in 2034, the trust fund runs out, and the annual income of the system would cover only 77% of the benefits that have been promised.

Underlying this fiscal shift are the demographic patterns that affect the number of working-age people and retirees in the US economy. This figure shows the ratio of the number of workers paying into Social Security to the number of beneficiaries. Back in the 1980s, 1990s, and early 2000s, the ratio was more than 3:1. But one of the most predictable (and commonly predicted!) demographic shifts in US history started happening around 2010. The front end of the baby boomer generation, those born from 1946 through the early 1960s, turned 65 years of age in 2010. The ratio of workers paying in to beneficiaries falls from well above 3 as late as 2007 to nearly 2 by 2030. In the timelines that commonly apply to demographic shifts, this is a very large shift in a relatively short time.
But along with the obvious bad news for the finances of the Social Security system, there's an aspect of good news here as well. The big demographic shift of the retiring boomers will be pretty much over by the mid-2030s. As the first figure shows, income is 77% of projected costs in 2034, and 73% of projected costs in 2091--much the same. As the second figure shows, the ratio of contributing workers to beneficiaries falls to roughly 2:1, but then sags only a bit more by the end of the 21st century. Given that the overall fiscal and demographic balance of the system doesn't change much after about 2035, it is technically possible to put into place one big package of changes that would bring the system into fiscal balance (minor future tweaks excepted) for the rest of the 21st century.

What would it take to close the gap? Conveniently, the Social Security Administration publishes a "Summary of Provisions that Would Change the Social Security System" (most recent version, February 28, 2017). It lists dozens of proposals, many of them just variations on the timing and size of a change, and for each proposal, it lists what percentage of the long-range shortfall (that is, over the next 75 years) would be eliminated by that specific change. It's not really kosher just to add together the estimates for individual proposals to get an estimate of the effect of combining several of them, because the changes would overlap or interact in ways that could diminish or increase their effects. But for back-of-the-envelope estimates--and just getting a sense of which changes would matter a lot and which wouldn't matter so much--no great harm is done by listing some specific estimates. Without pre-judging the merits, here's a selection of some possible changes.

Under current law, Social Security benefits are adjusted upward each year according to the rise in the Consumer Price Index. For example, if instead the cost-of-living adjustment were 0.5 percentage points lower each year (starting in December 2017), this step would eliminate 34% of the long-run actuarial imbalance.
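To see why such a small-sounding change matters so much, consider how a 0.5 percentage point difference compounds over a long retirement. Here's a back-of-the-envelope sketch; the 2% baseline COLA and the $1,000 starting benefit are illustrative assumptions, not official projections:

```python
# Cumulative effect of a COLA that is 0.5 percentage points lower.
# The 2% baseline COLA and $1,000 benefit are illustrative assumptions.
baseline_cola = 0.02
reduced_cola = baseline_cola - 0.005

benefit_baseline = 1000.0  # hypothetical monthly benefit at retirement
benefit_reduced = 1000.0

for year in range(20):
    benefit_baseline *= 1 + baseline_cola
    benefit_reduced *= 1 + reduced_cola

gap = 1 - benefit_reduced / benefit_baseline
print(f"After 20 years, the monthly benefit is {gap:.1%} lower")
# Prints roughly 9% -- small annual differences compound substantially.
```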

Under current law, Social Security benefits are calculated by looking at average earnings over the highest 35 years of earnings, which are expressed as a statistic called AIME, or "average indexed monthly earnings." Then a formula is applied to AIME to calculate the PIA, or "primary insurance amount," which is the basis for benefits paid. A higher AIME generally leads to a higher PIA, but the relationship isn't a simple proportion; instead, there's an element of redistribution in which those with lower AIME get a higher share of their income replaced by Social Security benefits. One could adjust how AIME is calculated, or adjust the PIA formula in many ways.
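For concreteness, here is a sketch of the current-law PIA formula with its "bend points." The 90/32/15 percent factors are the ones referred to in the proposals below; the dollar bend points are approximately the 2017 values and should be treated as illustrative:

```python
def pia_from_aime(aime: float) -> float:
    # Sketch of the PIA formula: 90% of AIME up to the first bend point,
    # 32% between the bend points, 15% above the second. The bend points
    # used here are approximate 2017 values; they are re-indexed to wage
    # growth each year.
    bend1, bend2 = 885.0, 5336.0
    pia = 0.90 * min(aime, bend1)
    if aime > bend1:
        pia += 0.32 * (min(aime, bend2) - bend1)
    if aime > bend2:
        pia += 0.15 * (aime - bend2)
    return pia

# The redistribution shows up directly in replacement rates:
print(f"{pia_from_aime(1000) / 1000:.2f}")   # ~0.83 for a lower earner
print(f"{pia_from_aime(8000) / 8000:.2f}")   # ~0.33 for a higher earner
```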

For example, one proposal is to "increase the number of years used to calculate benefits for retirees and survivors (but not for disabled workers) from 35 to 40, phased in over the years 2017-2025," which would address 17% of the long-range actuarial imbalance. There are also lots of ways of tweaking the PIA formula: for instance, so that there is a benefit cut only for those with higher AIME, and so that the change doesn't start right away. One such proposal would "create a new bend point at the 30th percentile of the AIME distribution of newly retired workers. Maintain current-law benefits for earners at the 30th percentile and below. Reduce the 32 and 15 percent factors above the 30th percentile such that the initial benefit for a worker with AIME equal to the taxable maximum is reduced by 1.21 percent per year as compared to current law (for the years that progressive indexing applies)." This step would eliminate 53% of the long-run actuarial imbalance.

Under current law, there is a "normal retirement age" for Social Security, although retirees have an option to retire earlier and receive lower benefits, or to retire later and receive higher benefits. The Greenspan commission recommendations have already been phasing in a later "normal retirement age," and as life expectancies continue to rise, this change could be extended. For example, one proposal would be that "after the normal retirement age (NRA) reaches 67 for those age 62 in 2022, increase the NRA 2 months per year until it reaches 69 for individuals attaining age 62 in 2034. Thereafter, increase the NRA 1 month every 2 years." This change would eliminate 40% of the actuarial imbalance.

Social Security is funded by a payroll tax of 12.4% on earnings up to $127,200. Thus, it would be possible either to raise the tax rate, or to raise the income ceiling, or both. If one were to increase the payroll tax "by 0.1 percentage point each year from 2022-2041, until the rate reaches 14.4 percent in 2041 and later," that step eliminates 55% of the long-run actuarial imbalance. Alternatively, say that we "eliminate the taxable maximum in years 2027 and later. Phase in elimination by taxing all earnings above the current-law taxable maximum at: 1.24 percent in 2018, 2.48 percent in 2019, and so on, up to 12.40 percent in 2027. Provide benefit credit for earnings above the current-law taxable maximum, adding a bend point at the current-law taxable maximum and applying a formula factor of 5 percent for AIME [average indexed monthly earnings] above this new bend point." This step alone would address 72% of the long-run actuarial imbalance.

An alternative proposal in this category would be to apply Social Security taxes to the value of employer-provided health insurance, which is currently an untaxed fringe benefit for some workers--but not others. A sample proposal sounds like this: "Expand covered earnings to include employer and employee premiums for employer-sponsored group health insurance (ESI). Starting in 2020, phase out the OASDI payroll tax exclusion for ESI premiums. Set an exclusion level at the 75th percentile of premium distribution in 2020, with amounts above that subject to the payroll tax. Reduce the exclusion level each year by 10 percent of the 2020 exclusion level until fully eliminated in 2030. Eliminate the excise tax on ESI premiums scheduled to begin in 2020." This step would eliminate 33% of the long-run actuarial imbalance.

As long as we are tinkering with tax and benefit rates, some steps might be taken to help those with the lowest incomes who depend most heavily on Social Security. For example, there are a variety of proposals for raising the minimum benefit, which would tend to worsen the long-run actuarial imbalance by about 5%. A related idea is to offer an increase in minimum benefits for the very old, who have been receiving Social Security for at least 20 years, on the ground that this group is especially likely to be in need. Another option would "increase the earliest eligibility age (EEA) by two months per year for those age 62 starting in 2018 and ending in 2035 (EEA reaches 65 for those age 62 in 2035)"; people who have economic or health-related reasons to claim benefits early could still do so within the new limits, although the level of benefits would be reduced for those who took this option. One could also tinker with the formulas to provide a boost for elderly workers who remain active in the workforce. For example, one proposal would eliminate Social Security taxes for those who have worked more than 180 quarters (that is, 45 years).

Thus, in thinking about an overall big deal on Social Security, it makes sense to take steps that would address, say, 120% of the long-run actuarial imbalance, and then "spend" 20% of that amount on targeted increases in Social Security benefits for the most needy.
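As a rough sketch of that back-of-the-envelope arithmetic, one could tally a menu of the estimates quoted above. The particular selection here is illustrative, and, as noted earlier, simple addition overstates precision because the provisions interact:

```python
# Illustrative tally of the proposal estimates quoted above, each as a
# percentage of the 75-year actuarial shortfall. Simple addition is only
# a rough guide, since the provisions overlap and interact.
proposals = {
    "lower COLA by 0.5 percentage points per year": 34,
    "raise payroll tax to 14.4% by 2041":           55,
    "later normal retirement age":                  40,
    "raise the minimum benefit":                    -5,  # benefit increase
}
total = sum(proposals.values())
print(f"Share of the long-run shortfall addressed: {total}%")  # 124%
```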

At least for geeks like me, innocent as a rose of real-world political machinations, it appears that a political opportunity lurks here! What if the remaining practical politicians in one party or the other could hammer out a plausible bill to fix Social Security for the 21st century, offering some mixture of all these policies?

Of course,  whatever party wants to take this on would need to keep its own extremists in check. In my experience, Republican extremists on this issue sometimes insist on trying to address the entire problem with benefit cuts, or on turning this into a fight over whether to privatize Social Security. Meanwhile, Democratic extremists on this issue sometimes want to disregard the projections of the Social Security actuaries, argue that the problem isn't real, and then offer large and broad increases in benefits. I'm not saying it's an easy problem--although compared to North Korea or rising health care costs, I don't think Social Security is an especially hard problem. And think of the potential political gains from being the party that got its act together and saved Social Security for the 21st century!


Friday, August 18, 2017

Blockchain: New Frontiers

Blockchain is a technology that offers reliable but decentralized record-keeping. The best-known applications of "blockchain" technology are still the alternative currencies, of which Bitcoin remains the most prominent. But it looks more and more as if the main near-term expansions of blockchain technology are not going to be about currencies, but will instead relate to other kinds of ownership, transactions, and record-keeping. A couple of recent studies emphasizing this theme are "How blockchain technology could change our lives," written by Philip Boucher, Susana Nascimento, and Mihalis Kritikos for the European Parliamentary Research Service (February 2017), and "Blockchain and Economic Development: Hype vs. Reality," written by Michael Pisa and Matt Juden for the Center for Global Development (CGD Policy Paper #107, July 2017).

Both papers offer a verbal and intuitive sketch of how the blockchain technology works. Here's a taste of the explanation from Boucher, Nascimento and Kritikos:
"Blockchain offers the same record-keeping functionality but without a centralised architecture. The question is how it can be certain that a transaction is legitimate when there is no central authority to check it. Blockchains solve this problem by decentralising the ledger, so that each user holds a copy of it. Anyone can request that any transaction be added to the blockchain, but transactions are only accepted if all the users agree that it is legitimate, e.g. that the request comes from the authorised person, that the house seller has not already sold the house, and the buyer has not already spent the money. This checking is done reliably and automatically on behalf of each user, creating a very fast and secure ledger system that is remarkably tamper-proof. Each new transaction to be recorded is bundled together with other new transactions into a 'block', which is added as the latest link on a long 'chain' of historic transactions. This chain forms the blockchain ledger that is held by all users. ..."
Thus, anyone can download the blockchain of all transactions. But who has an incentive to update and check the blockchain? Blockchain technology relies on "miners" to do this job. Miners must spend computing resources to solve a complicated mathematical puzzle before they can add a block of transactions to the blockchain, and they are paid either by users of blockchain services or by the system itself. Again, Boucher, Nascimento and Kritikos explain:
"This work is called 'mining'. Anybody can become a miner and compete to be the first to solve the complex mathematical problem of creating a valid encrypted block of transactions to add to the blockchain. There are various means of incentivising people to do this work. Most often, the first miner to create a valid block and add it to the chain is rewarded with the sum of fees for its transactions. Fees are currently around €0.10 per transaction, but blocks are added regularly and contain thousands of transactions. Miners may also receive new currency that is created and put into circulation as an inflation mechanism.
"Adding a new block to the chain means updating the ledger that is held by all users. Users only accept a new block when it has been verified that all of its transactions are valid. If a discrepancy is found, the block is rejected. Otherwise, the block is added and will remain there as a permanent public record. No user can remove it. While destroying or corrupting a traditional ledger requires an attack on the middleman, doing so with a blockchain requires an attack on every copy of the ledger simultaneously. There can be no 'fake ledger' because all users have their own genuine version to check against. Trust and control in blockchain-based transactions is not centralised and black-boxed, but decentralised and transparent. These blockchains are described as 'permissionless', because there is no special authority that can deny permission to participate in the checking and adding of transactions." 
When blockchain is used for Bitcoin, the blockchain records the ownership of each bitcoin, and when each bitcoin is transferred to another user. But the users themselves remain anonymous (although sufficiently motivated law enforcement can sometimes find a way in). Bitcoin has been in the news lately because it has been experiencing a price spike. Here's the price of bitcoin in US dollars from the Google Finance website. As you can see, the price of a bitcoin first soared above $1,000 back in 2013, sagged, slowly moved back to about $1,000, but then in the last few months has soared above $4,000.



This recent spike, while it certainly gladdens the hearts of those who already hold bitcoins, is actually part of the reason why bitcoin is not an especially good currency. Useful currencies are relatively stable in value! In most modern economies, traditional currencies already allow transactions that are relatively fast, secure, and cheap. For most people, it's not clear how they would benefit from using bitcoin for transaction purposes. Pisa and Juden explain (footnotes and citations omitted):
"To usurp the role of national currencies, bitcoin would first need to fulfill some (though perhaps not all) of the core functions that money provides, including serving as a medium of exchange, a unit of account, and a store of value. Currently, bitcoin does none of these things very well: its extreme volatility prevents it from being a good store of value and unit of account, and retailers and consumers—who appear satisfied with the cost/benefit tradeoffs associated with using credit cards—have not accepted the currency widely enough to consider it a reliable medium of exchange. National governments also present an obstacle: currently, no government allows taxes to be paid with bitcoin, which reduces the incentives for individuals and companies to use it.
"Even if national governments choose not to resist broader usage of bitcoin, there are questions about the technology’s ability to scale due to the speed of the network. Currently, the Bitcoin blockchain can process a maximum of seven transactions per second. To put this in context, Visa processes an average of 2,000 transactions per second and has a peak capacity of 56,000 transactions per second. Increasing the speed of the Bitcoin network could be accomplished through increasing block size. This is technically feasible, but some network participants have resisted it, since it would increase the cost of mining bitcoin and give more control to larger entities, leading to greater centralization of the network. Finally, there are concerns about the energy intensity of mining. Although estimates vary widely, some indicate that bitcoin mining could consume 14,000 megawatts of electricity by 2020, which is comparable to Denmark’s total energy consumption."
But although bitcoin and virtual currencies may not be likely to take over the money supply anytime soon, the blockchain technology can be adapted for a considerable array of other purposes. Here are some suggestions about these other purposes.

Ownership of Digital Media (as explained by Boucher, Nascimento, and Kritikos)
"When consumers purchase books and discs, they come to own physical artefacts that they can later sell, give away or leave as part of their inheritance. There are limitations to their rights, for example they should not distribute copies, and should pay royalties if they broadcast the content. In buying the digital equivalent of this same media, consumers know they will not gain ownership of a physical artefact, but many do not realise that they do not gain ownership of any content either. Rather, they enter into a licensing agreement which is valid for either a period of time or a fixed number of plays. These licences cannot be sold, given away or even left as part of an inheritance. Building a collection of legitimately-owned digital music, literature, games and films often comes at a cost similar to that of a collection of various discs and books with the same content. It is a substantial lifelong investment but one that cannot be transferred and that expires on death. While older generations might take pleasure in reliving the tastes and experiences of loved ones via the boxes of vinyl, books and games they left behind, today's children may not enjoy the same access to their parent's digital content. Could blockchain technology help resolve these and other problems with digital media? ... 
"The blockchain could be used to register all sales, loans, donations and other such transfers of individual digital artefacts. All transactions are witnessed and agreed by all users. Just like transactions in a bank account or land registry, artefacts cannot be transferred unless they are legitimately owned. Buyers can verify that they are purchasing legitimate copies of MP3s and video files. Indeed, the transaction history allows anyone to verify that the various transfers of ownership lead all the way back to the original owner, that is, the creator of the work. The concept could be combined with smart contracts so that access to content can be lent to others for fixed periods before being automatically returned, or so that inheritance wishes could be implemented automatically upon registration of a death certificate. ... Using blockchain technology in this way could for the first time enable consumers to buy and sell digital copies second hand, give them away or donate them to charity shops, lend them to friends temporarily or leave them as part of an inheritance – just as they used to with vinyl and books – while ensuring that they are not propagating multiple unlicensed copies."
Management of Global Supply Chains (as explained by Boucher, Nascimento, and Kritikos)
"Blockchain-based applications have the potential to improve supply chains by providing infrastructure for registering, certifying and tracking at a low cost goods being transferred between often distant parties, who are connected via a supply chain but do not necessarily trust each other. All goods are uniquely identified via 'tokens' and can then be transferred via the blockchain, with each transaction verified and time-stamped in an encrypted but transparent process. This gives the relevant parties access whether they are suppliers, vendors, transporters or buyers. The terms of every transaction remain irrevocable and immutable, open to inspection to everyone or to authorised auditors. Smart contracts could also be deployed to automatically execute payments and other procedures.

"Several companies, innovators and incumbents are already testing blockchain for record-keeping in their supply chains. Everledger enables companies and buyers to track the provenance of diamonds from mines to jewellery stores and to combat insurance or documentation fraud. For each diamond, Everledger measures 40 attributes such as cut and clarity, the number of degrees in pavilion angles and place of origin. They generate a serial number for each diamond, inscribed microscopically, and then they add this digital ID to Everledger's blockchain (currently numbering 280 000 diamonds). This makes it possible to establish and maintain complete ownership histories, which can help counteract fraud and support police and insurance investigators tracking stolen gems. It also allows consumers to make more informed purchasing decisions, e.g. to limit their search to diamonds with a 'clean' history that is free from fraud, theft, forced labour and the intervention of dubious vendors who are linked to violence, drugs or arms trafficking. ...

"Wal-Mart, the world's largest retailer, is trialling Blockchain for food safety. It is expected that a Blockchain-based accurate and updated record can help to identify the product, shipment and vendor, for instance when an outbreak happens, and in this way get the details on how and where food was grown and who inspected it. An accurate record could also make their supply chain more efficient when it comes to delivering food to stores faster and reducing spoilage and waste."
International Financial Transactions (as explained by Pisa and Juden)
"The cost and inefficiency associated with making international payments across certain corridors present a barrier to economic development. Whether it is a business making an investment in a developing country, an emigrant sending money back home, or an aid organization funding a project abroad, moving resources from rich to poorer countries ultimately requires money to be sent across borders. ... [C]onducting  these transactions through the formal financial system can involve considerable cost and delay. Cross-border payments are inefficient because there is no single global payment infrastructure through which they can travel. Instead, international payments must pass
through a series of bilateral correspondent bank relationships, in which banks hold accounts at other banks in other countries. The number of such relationships that a bank is willing to maintain is limited by the cost of funding these accounts as well as the risk of conducting financial transactions with banks who lack strong controls to prevent illicit transactions ... 
"One consequence of the fragmented global payments system is the high cost of remittances, which are an enormously important source of development financing. Roughly $430 billion of remittances were sent to developing countries in 2016, nearly three times as much as  official aid. The global average cost of sending remittances worth $200 is 7.4 percent but varies greatly  across corridors: for example, the average cost of sending $200 from a developed country to South Asia is 5.4 percent, while the cost of sending the same value to sub-Saharan Africa is 9.8 percent (World Bank 2017).  ...
"Small and medium-sized businesses face similar costs when conducting cross-border payments. Industry surveys suggest that approximately two-thirds of cross-border businesses are unhappy with the delays and fees associated with using traditional bank transfers for sending international payments ...
"Using a bitcoin-based company to send remittances to countries that have deep bitcoin exchange markets can be cheaper than using traditional MTOs. For example, sending a $200 remittance from the United States to the Philippines with Rebit.ph currently costs 3 percent, while World Remit, an established MTO that relies on the traditional system of bank wires, charges 3.5 percent. However, in most corridors, bitcoin-based remittance companies have  not been able to offer fees that are substantially lower than traditional players. As a result, many have closed, while others have shifted to emphasizing business-to-business payments ..."  
Public Record-Keeping and Land Registries (from both sets of authors)

Boucher, Nascimento, and Kritikos write:
"The most immediate applications of blockchain technology in public administrations are in record keeping. The combination of time-stamping with digital signatures on an accessible ledger is expected to deliver benefits for all users, enabling them to conduct transactions and create records (e.g. for land registries, birth certificates and business licences) with less dependence upon lawyers, notaries, government officials and other third parties. ...
"The Estonian government has experimented with blockchain implementations enabling citizens to use their ID cards to order medical prescriptions, vote, bank, apply for benefits, register their businesses, pay taxes and access approximately 3 000 other digital services. The approach also enables civil servants to encrypt documents, review and approve permits, contracts and applications and submit information requests to other services. This is an example of a permissioned blockchain, where some access is restricted in order to secure data and protect users' privacy. ... 
"Several countries including Ghana, Kenya and Nigeria have begun to use blockchains to manage land registries. Their aim is to create a clear and trustworthy record of ownership, in response to problems with registration, corruption and poor levels of public access to records. Sweden is also conducting tests to put real estate transactions on blockchain, in this case to allow all parties (banks, government, brokers, buyers and sellers) to track the progress of the transaction deal in all its stages and to guarantee the authenticity and transparency of the process while making considerable time and cost savings.
"The Department for Work and Pensions in the UK have also trialled the use of blockchain technology for welfare payments. Here, citizens use their phones to receive and spend their benefit payments and, with their consent, their transactions are recorded on a distributed ledger. The aim of the initiative is to help people manage their finances and create a more secure and efficient welfare system, preventing fraud and enhancing trust between claimants and the government. The UK government is also considering how blockchain technology could enable citizens to track the allocation and spending of funds from the government, donors or aid organisations to the actual recipients, in the form of grants, loans and scholarships."
Pisa and Juden write:
"The idea of storing land titles on a blockchain has obvious appeal. Most importantly, sharing a land registry across a distributed network greatly enhances its security by eliminating “single point of failure” risk and making it more difficult to tamper with records. It could also increase transparency by allowing certified actors (including, potentially, auditors or mon-profit organizations) to monitor changes made to the registry on a near real-time basis, and enhance efficiency by reducing the time and money associated with registering property. ...
"A blockchain cannot, however, address problems related to the reliability of records. This is an obvious point but one that is often overlooked. As noted earlier, the blockchain is a “garbage in, garbage out” system: if a government uploads a false deed to a blockchain (either out of carelessness or deceit), it will remain false. This suggests that using the technology to store land records works best in places where the existing system for recording land titles is already strong. This was certainly the case in Georgia, which initiated a project with The Bitfury Group and the Blockchain Trust Accelerator in 2016 to register land titles on a blockchain. ... Bitfury’s pilot project in Georgia has reportedly been a success. By February 2017, NAPR had registered more than 100,000 documents and the Georgian government announced a new agreement with Bitfury to expand the use of blockchain technology to other government departments. The question now is whether this success can be replicated in less favorable environments. Bitfury will face this challenge in Ukraine where it recently reached agreement with the Ukrainian government to put all its electronic records (not just land titles) onto a blockchain."
Private and Validated Proof of Identity (as explained by Pisa and Juden, citations and footnotes omitted)

A number of countries have recently enacted digital identification systems for their citizens, including most notably India, but also Estonia, Pakistan, Peru, and Thailand. However, these are not blockchain systems, but rather a combination of ID numbers, biometric markers (like fingerprints or iris scans), and cryptography (where a person needs to know a private code). Governments are not likely to outsource the identification of their citizens to blockchain technology. The question is whether blockchain might usefully provide a private proof of identification that people could use for other purposes, alongside their government ID, while having greater control over their private information. The authors explain:
"Because of the weaknesses of centralized and federated ID solutions, and the belief that people should have greater control over their own personal data and the value derived from it, some ID experts have turned their focus to developing “user-centric” or “self-sovereign” systems. These systems aim to shift control to individuals by allowing them to “store their own identity data on their own devices, and provide it efficiently to those who need to validate it, without relying on a central repository of identity data.” Until recently such a solution seemed technically infeasible, but blockchain technology appears to make it possible.
"Several benefits arise from storing certified attributes on a blockchain. The first is privacy: Alice can control both who she shares her personal information with and how much information she shares. The second is security, as the absence of a centralized database eliminates single point of failure risk. The system is also more convenient, since it allows users to provide verified information with the touch of a button rather than having to access and submit a wide variety of documents. Finally, a blockchain provides an easy and accurate way to trace the evolution of ID attributes since each change is time-stamped and appended to the record preceding it.
"The idea of a self-sovereign ID system based on blockchain is close to becoming a reality. For example, SecureKey and IBM are now piloting a digital ID system in Canada using the Linux Foundation’s open-source Hyperledger Fabric blockchain. The project connects the Canadian government (including national and provincial government agencies) with the country’s largest banks and telecoms on a permissioned blockchain network. These participating companies and agencies play a dual role of certifying users’ attributes and providing digital services. The project is expected to go live in late 2017, at  which time Canadian consumers will be able to opt into the network to access a variety of egovernment and financial services by sharing verified attributes stored on a mobile phone."
Transparency and Coordination of Financial Aid (as described by Pisa and Juden)
"An example of the first model is an application called Stoneblock developed by the company Neocapita. Still in an early stage of development, the platform will allow actors along the development supply chain (including donors, recipients, implementing partners, and auditors) to simultaneously track information about how a project is progressing and the flow of funding. The company is also exploring the use of smart contracts that would trigger disbursement of funds tied to performance metrics. In most cases, human observers would report metrics onto a blockchain (e.g., reporting the number of children attending a school) but in others, electronic meters could play the same role (e.g., measuring the amount of water produced by a well). By allowing all participants on the network to view the same information at the same time, using a blockchain to share project data could dramatically reduce administrative overhead. Storing records on a blockchain would also make them essentially tamper-proof, thereby reducing the potential for misappropriation."
These papers include other possible applications: blockchain-enabled records of when a patent application occurred; blockchain-enabled voting; "smart contracts," which might involve provisions for payments related to loans, insurance, or wills that can be carried out automatically when prespecified dates or conditions occur; and even talk of setting up "decentralized autonomous organizations" on blockchain that would own assets and could carry out a set of contractual commitments with humans, firms, and other autonomous organizations. The alternative currencies like bitcoin get the headlines, but my guess is that these alternative frontiers for the application of blockchain technology are going to be considerably more important very soon--if they aren't more important already.



Wednesday, August 16, 2017

Autonomous Cars: Altering One in Nine Jobs

It seems clear that driverless vehicles are coming, although the timeline for their arrival remains unclear. David Beede, Regina Powers and Cassandra Ingram of the Economics and Statistics Administration at the US Department of Commerce look at one aspect, "The Employment Impact of Autonomous Vehicles," in ESA Issue Brief #05-17 (August 11, 2017). They set the stage this way: 

"In September 2016, the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) published policy guidelines for AVs [autonomous vehicles], recognizing their potential as “the greatest personal transportation revolution since the popularization of the personal automobile nearly a century ago” (NHTSA 2016). ... The worldwide number of advanced driver-assistance systems (ADAS), such as backup cameras and adaptive cruise control, increased from 90 million to 140 million units between 2014 and 2016. Consumers have indicated a willingness to pay $500-$2,500 per vehicle for ADAS. Sensor technologies are rapidly advancing to provide sophisticated information to vehicle operating systems about the surrounding environment, such as road conditions and the location of other nearby vehicles. However, slower progress has been made in developing software that can mimic human driver decision-making, so that fully autonomous vehicles may not be introduced for another ten or more years ..."  
Autonomous vehicles could lead to sweeping changes in personal mobility, car ownership, parking arrangements, traffic congestion, road safety, and more. I ran through some of the main effects in an earlier post on "Driverless Cars" (October 31, 2012).

The focus of Beede, Powers, and Ingram is on jobs that involve a substantial amount of driving. They write:
"In 2015, 15.5 million U.S. workers were employed in occupations that could be affected (to varying degrees) by the introduction of automated vehicles. This represents about one in nine workers. We divide these occupations into “motor vehicle operators” and “other on-the-job drivers.” Motor vehicle operators are occupations for which driving vehicles to transport persons and goods is a primary activity, are more likely to be displaced by AVs [autonomous vehicles] than other driving-related occupations. In 2015, there were 3.8 million workers in these occupations. These workers were predominately male, older, less educated, and compensated less than the typical worker. Motor vehicle operator jobs are most concentrated in the transportation and warehousing sector. Other on-the-job drivers use roadway motor vehicles to deliver services or to travel to work sites, such as first responders, construction trades, repair and installation, and personal home care aides. In 2015, there were 11.7 million workers in these occupations and they are mostly concentrated in construction, administrative and waste management, health care, and government. Other-on-the-job drivers may be more likely to benefit from greater productivity and better working conditions offered by AVs than motor vehicle operator occupations." 
When they break down these jobs by industry, I was interested to note that "government" is the area where the greatest number of jobs will potentially be affected by driverless cars. This suggests that certain governments might play a leading role in offering examples of how driverless vehicles could work. Or not!
Many of those whose jobs would be affected by autonomous vehicles are likely to push back. When tallying up the costs and benefits, it's worth noting that those who spend a lot of time driving are actually in relatively hazardous jobs, because of the risk of motor vehicle accidents. "[T]he fatality rate (per 100,000 full-time equivalent workers) for motor vehicle operators from on-the-job roadway incidents involving motor vehicles is ten times the rate for all workers, and the numbers of roadway motor vehicle occupational injuries resulting in lost work time per 100,000 full-time equivalent workers is 8.7 times as large as that of all workers."

Any innovation which directly affects the jobs of about one-ninth of all US workers has the potential to be a dislocating shock of some force. Some  types of workers who spend a good portion of every day in a vehicle will have a harder time adjusting to the change; for others, autonomous vehicles may come as a relief, by freeing them up to focus on other parts of their job. The authors note: 
"Workers in motor vehicle operator jobs are older, less educated, and for the most part have fewer transferable skills than other workers, especially the kinds of skills required for non-routine cognitive tasks. ... [I]n contrast to the workers in the occupations we classify as motor vehicle operators, other on-the-job drivers, of which there are about triple the number of motor vehicle operators, have a more diversified set of work activities, knowledge, and skills. For this group, although driving is an important work activity, it is only one of many important work activities, many of which already require the kinds of non-routine cognitive skills that are becoming increasingly in demand in our economy. Such workers are likely to be able to adapt to the widespread adoption of AVs."

Tuesday, August 15, 2017

Adam Smith: The Plight of the Impartial Spectator in Times of Faction

Adam Smith's first great book, the Theory of Moral Sentiments, relies heavily in places on the idea of an "impartial spectator." Smith's notion is that our beliefs about morality are closely related to our notion of how a hypothetical "impartial spectator" would react to a given situation. (Here's a quick overview of Smith's argument.) But what happens to a person trying to think like an impartial spectator--that is, a person trying to preserve the integrity of their own personal judgment--in a time of faction?  Smith argues that anyone trying to act in this way is likely to be marginalized by all competing factions.

Here is Smith's comment from the 1759  Theory of Moral Sentiments (Book III, Ch. 1, paragraph 85). As is my wont, I quote here from the online version of the book at the Library of Economics and Liberty website. Smith wrote: 
"The animosity of hostile factions, whether civil or ecclesiastical, is often still more furious than that of hostile nations; and their conduct towards one another is often still more atrocious. ...  In a nation distracted by faction, there are, no doubt, always a few, though commonly but a very few, who preserve their judgment untainted by the general contagion. They seldom amount to more than, here and there, a solitary individual, without any influence, excluded, by his own candour, from the confidence of either party, and who, though he may be one of the wisest, is necessarily, upon that very account, one of the most insignificant men in the society. All such people are held in contempt and derision, frequently in detestation, by the furious zealots of both parties. A true party-man hates and despises candour; and, in reality, there is no vice which could so effectually disqualify him for the trade of a party-man as that single virtue. The real, revered, and impartial spectator, therefore, is, upon no occasion, at a greater distance than amidst the violence and rage of contending parties. To them, it may be said, that such a spectator scarce exists any where in the universe. Even to the great Judge of the universe, they impute all their own prejudices, and often view that Divine Being as animated by all their own vindictive and implacable passions. Of all the corrupters of moral sentiments, therefore, faction and fanaticism have always been by far the greatest."

Monday, August 14, 2017

Misallocation and Productivity: International Perspective

In pretty much every industry in pretty much every country, the firms exhibit a range of productivity: that is, some well-run and efficient firms produce more output given their levels of labor and capital, while others produce less. What ought to happen in a well-functioning economy is that the lagging-productivity firms should either be catching up with the leading-productivity firms over time, or the laggard firms should shrink in size while the leading firms grow in size. This process has been demonstrably important to economic growth in the past.

However, a wide range of taxes, rules, and institutions may act to inhibit reallocation of resources, and thus to slow down productivity growth. For example, if smaller farms are less efficient than larger farms, but the land use rules in a developing economy keep farm sizes small, then agricultural resources will not be reallocated. Economists refer to this as an issue of "misallocation."

Diego Restuccia and Richard Rogerson discuss "The Causes and Costs of Misallocation" in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 151-174). The IMF discusses the role of tax policy in creating and sustaining misallocation in the April 2017 IMF Fiscal Monitor, with an overall theme of "Achieving More with Less." The discussion of reallocation is in "Chapter 2: Upgrading the Tax System to Boost Productivity." The IMF researchers write:
"Resource misallocation manifests itself in a wide dispersion in productivity levels across firms, even within narrowly defined industries. High dispersion in firm productivities reveals that some businesses in each country have managed to achieve high levels of efficiency, possibly close to those of the world frontier in that industry. This implies that existing conditions within a country are compatible with higher levels of productivity. Therefore, countries can reap substantial TFP [total factor productivity] gains from reducing resource misallocation, allowing firms to catch up with the high-productivity firms in their own economies. In some cases, however, the least productive businesses will need to exit the market, releasing resources for the more productive ones. For example, Baily, Hulten, and Campbell (1992) find that 50 percent of manufacturing productivity growth in the United States during the 1980s can be attributed to the reallocation of factors across plants and to firm entry and exit. Similarly, Barnett and others (2014) find that labor reallocation across firms explained 48 percent of labor productivity growth for most sectors in the U.K. economy in the five years prior to 2007.
"Resource misallocation is often the result of a large number of poorly designed economic policies and market failures that prevent the expansion of efficient firms and promote the survival of inefficient ones. Reducing misallocation is therefore a complex and multidimensional task that requires the use of all policy levers. Structural reforms play a crucial role, in particular because the opportunity cost of poorly designed economic policies is much greater now in the context of anemic productivity growth. Financial, labor, and product market reforms have been identified as important contributors (see Banerjee and Duflo 2005; Andrews and Cingano 2014; Gamberoni, Giordano, and Lopez-Garcia 2016; and Lashitew 2016). This chapter makes the case that upgrading the tax system is also key to boosting productivity by reducing distortions that prevent resources from going to where they are most productive. ... 
"Potential TFP gains from reducing resource misallocation are substantial and could lift the annual real GDP growth rate by roughly 1 percentage point. Payoffs are higher for emerging market and low-income developing countries than for advanced economies, with considerable variation across countries. ...
Many emerging market economies have a relatively small number of leading firms and a large number of laggards. If the distribution of leaders and laggards in these markets became more equal, similar to the distribution between leaders and laggard firms in US industries, the productivity gains could be large. By the IMF's calculations, total factor productivity "would increase by 30 to 50 percent in China and by 40 to 60 percent in India."

In their JEP essay, Restuccia and Rogerson  provide a useful overview of what can cause misallocation, and how economists have sought to measure the potential gains from reducing misallocation. For a flavor of the issues and analysis, here are a few of the studies mentioned in the paper:
"Government regulation can also hinder the reallocation of individuals across space. Hsieh and Moretti (2015) study misallocation of individuals across 220 US metropolitan areas from 1964 to 2009. They document a doubling in the dispersion of wages across US cities during the sample period. Using a model of spatial reallocation, they show that the increase in wage dispersion across US cities represents a misallocation that contributed to a loss in aggregate GDP per capita of 13.5 percent. They argue that across-city labor misallocation is directly related to housing regulations and the associated constraints on housing supply. ..."
"Tombe and Zhu (2015) provide direct evidence on the frictions of labor (and goods) mobility across space and sectors in China and quantify the role of these internal frictions and their changes over time on aggregate productivity. The reduction of internal migration frictions is key and together with internal trade restrictions account for about half of the growth in China between 2000 and 2005. ..."
"Restuccia and Santaeulalia-Llopis (2017) study misallocation across household farms in Malawi. They have data on the physical quantity of outputs and inputs as well as measures of transitory shocks and so are able to measure farm-level total factor productivity. They find that the allocation of inputs is relatively constant across farms despite large differences in measured total factor productivity, suggesting a large amount of misallocation. In fact, they found that aggregate agricultural output would increase by a remarkable factor of 3.6 if inputs were allocated efficiently. Their analysis also suggests that institutional factors that affect land allocation are likely playing a key role. Specifically, they compare misallocation within groups of farmers that are differentially influenced by restrictive land markets. Whereas most farmers in Malawi operate a given allocation of land, other farmers have access to marketed land (in most cases through informal rentals). Using this source of variation, Restuccia and Santaeulalia-Llopis find that misallocation is much larger for the group of farmers without access to marketed land: specifically, the potential output gains from removing misallocation are 2.6 times larger in this group relative to the gains for the group of farms with marketed land." 
There will always be leading and lagging firms, and various hindrances to reallocation of resources across places and firms. In that sense, misallocation is never going away. But studying misallocation offers a useful reminder that productivity growth and economic growth are driven (or not!) by the dynamic forces of competition in a reasonably flexible economic setting.

Moreover, a better understanding of the gaps between leading and lagging companies--why such gaps persist, and what might help to close them--may even help to explain one of the really big questions in the global economy, which is the overall slowdown in productivity growth. A 2015 study by the OECD found that productivity growth among leading companies in various industries has not been slowing down; instead, the gap between leading and lagging companies has expanded, as if lagging companies are having a harder time keeping pace.

Friday, August 11, 2017

NAFTA in a Multipolar World Economy

Discussions of globalization often seem rooted in an assumption that the main choices, either for the US or for other countries, are either national or global. But there is another possibility, which is that the world economy evolves to a "multipolar" setting, based primarily on regional agglomerations of cross-national trade. In this situation, the issue for the US economy is whether it will be part of its geographically natural multipolar group, here in the Americas, or whether it will try to view itself as a group of one, competing in the global economy with multipolar groups in Europe and in Asia. Your attitude toward the North American Free Trade Agreement, for example, may vary according to whether you see it as one of many trade deals in a globalizing economy, or whether you see it as the specific trade deal for building a US-centered trading bloc in a multipolar economy.

Michael O’Sullivan and Krithika Subramanian lay out the case for the multipolarity hypothesis in "Getting Over Globalization," written as a report for Credit Suisse Research (January 2017). They write:
"Globalization is running out of steam. We can see this in various ways. Our measure for tracking globalization – made up of flows of trade, finance, services and people – has ebbed in the past year, and has slipped backwards over the course of the past three years so that it has dropped below the levels reached in 2012–2013 to about the same level as crisis-ridden 2009–2010 ... . Perhaps the most basic representation of globalization is trade, and this is sluggish or according to many measures it is plateauing. ... Other indicators of globalization paint a more negative picture – cross-border flows of financial assets (relative to GDP) have continued downward from their pre-financial-crisis peak, most likely because of the effects of regulation and the general shrinking of the banking sector. Trade liberalization, as measured by the Fraser Institute’s economic freedom of the world indices, has been slowly declining since its peak in 2000, although it is still at a relatively healthy level. ... It should be said that the extent of globalization/multipolarity is still at a historically high level, although it is hard not to have the impression that it is on the verge of a downward correction, especially once we consider some of the underlying dynamics. ...

"One of the notable sub-trends of globalization has been a much better distribution of the world’s economic output, led by what were once regarded as overly populous, third world countries such as India and China. This has fueled multipolarity – the rise of regions that are now distinct in terms of their economic size, political power, approaches to democracy and liberty, and their cultural norms. ...
"We believe that the world is now leaving globalization behind it and moving to a more distinct multipolar setting. ...

"The ... scenario is based on the rise of Asia and a stabilization of the Eurozone so that the world economy rests, broadly speaking, on three pillars – the Americas, Europe and Asia (led by China). In detail, we would expect to see the development of new world or regional institutions that surpass the likes of the World Bank, the rise of “managed democracy” and more regionalized versions of the rule of law – migration becomes more regional and more urban rather than cross-border, regional financial centers develop and banking and finance develop in new ways. At the corporate level, the significant change would be the rise of regional champions, which in many cases would supplant multinationals. We would also expect to see uneven improvements in human development leading to more stable, wealthier local economies on the back of a continuation of the emerging market consumer trend. ...

"An interesting and intuitive way of seeing how the world has evolved from a unipolar one (i.e. USA) to a more multipolar one is to look at the location of the world’s 100 tallest buildings. The construction of skyscrapers (200 meters plus in height) is a nice way of measuring hubris and economic machismo, in our opinion. Between 1930 and 1970, at least 90% of the world’s tallest buildings could be found in the USA, with a few exceptions in South America and Europe. In the 1980s and 1990s, the USA continued to dominate the tallest tower league tables, but by the 2000s there was a radical change, with Middle Eastern and Asian skyscrapers rising up. Today about 50% of the world’s tallest buildings are in Asia, with another 30% in the Middle East, and a meager 16% in the USA, together with a handful in Europe. In more detail, three-quarters of all skyscraper completions in 2015 were located in Asia (China and Indonesia principally), followed by the UAE and Russia. Panama had more skyscraper completions than the USA."
If one believes that the US should view its economy as part of an emerging American bloc in a multipolar world economy, the North American Free Trade Agreement between the US, Canada, and Mexico is the foundation for that bloc. C. Fred Bergsten and Monica de Bolle have edited an e-book titled A Path Forward for NAFTA, a collection of 11 short essays discussing NAFTA "modernization," "renegotiation," and "updating" from various national, industry, and foreign policy perspectives (Peterson Institute for International Economics Briefing 17-2, July 2017). They give some sense of the possibilities for cooperation and agreement, and the unlikeliness that such an agreement will address bilateral trade deficits, in the "Overview" essay:
"The overarching goal of negotiators from the three participating countries must be to boost the competitiveness of North America as a whole, liberalizing and reforming commercial relations between the three partner countries and responding to the many changes in the world economy since NAFTA went into effect in 1994. These changes include the digital transformation of commerce, which has enabled sophisticated new production methods employing elaborate supply chains, transforming North America into a trinational manufacturing and services hub. But concerns about labor, the environment, climate change and energy resources, and currency issues have become more acute than they were at the time NAFTA started. Commerce Secretary Wilbur Ross was thus correct when he said that NAFTA “didn’t really address our economy or theirs [Mexico and Canada] in the way they are today.” ...

"The broadest consistent goal shared by the NAFTA countries should be to strengthen the international position of North America as a whole in a world of tough competition from China and others. Beyond that objective, the negotiators can take steps toward achieving regional energy independence, since all three countries are large consumers and producers of different kinds of energy, from those based on fossil fuels to those derived from new technologies and renewable sources. There is also plenty of room for additional or indeed full liberalization of key sectors, such as financial services and telecommunications, to the benefit of all three economies.

"The new NAFTA could borrow some of the TPP’s innovative approaches and embrace cutting-edge standards for issues such as e-commerce, state-owned enterprises (SOEs), and other sectors that have become central to international trade and investment. The North American partners might be able to help resolve a politically inflammatory issue plaguing trade agreements worldwide: incorporating dispute settlement mechanisms that will make their provisions enforceable and thus credible without being perceived as undermining national sovereignty and widely shared concepts of fairness. Another step in this direction would be to work out a North American competition policy that would enable the three countries to disavow the use of antidumping and countervailing duties against each other, as Australia and New Zealand have done. The NAFTA partners might also strive to achieve a degree of regulatory coherence that has so far eluded the United States and the European Union in their efforts to forge a transatlantic agreement. NAFTA negotiators could permit like-minded countries, notably the members of the Pacific Alliance (Chile, Colombia, and Peru, as well as Mexico), all of which are already free trade agreement partners of the United States, to join NAFTA. ...

"[T]rade agreements are inappropriate and ineffective vehicles for attempting to reduce trade imbalances. The reason is that external imbalances are created by internal macroeconomic imbalances and can be remedied only by changes in the latter. Hence continued US insistence on cutting its trade deficit, especially via bilateral efforts with Mexico, would almost surely lead to dissatisfaction with the outcome and a potential blowup of the entire agreement. Taking the concern about trade deficits at face value, moreover, is a prescription for deadlock with Canada and Mexico, both of which run global trade and current account deficits on the same order of magnitude as the United States. Hence they properly view themselves as deficit countries that need to strengthen, not weaken, their external economic positions. They are most unlikely to accede to US demands to strengthen its external position at their expense, even if the economics were to make that possible, and can in fact be expected to argue (correctly) that the three North American deficit countries should work together to improve their joint and several external positions with the rest of the world.
Those interested in NAFTA and the possibility of an emerging multipolar world economy might wish to check some earlier posts on this blog.

Thursday, August 10, 2017

The US Fiscal Outlook

Pretty much everyone agrees that the US fiscal outlook for the long run--a few decades into the future--looks grim unless changes are made. Here are estimates of the ratio of accumulated federal debt/GDP throughout US history, and projected up through 2050, from a Congressional Budget Office report in March 2017. The spikes of government debt during wartime, the Great Depression, and the Reagan and Obama administrations are clear. The forecast trajectory would take US government debt outside past experience.


Alan J. Auerbach and William G. Gale set the stage for the discussion that needs to happen in "The Fiscal Outlook In a Period of Policy Uncertainty," written for the Tax Policy Center (August 7, 2017). Douglas W. Elmendorf and Louise M. Sheiner also tackle these issues in "Federal Budget Policy with an Aging Population and Persistently Low Interest Rates," in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 175-194).

Auerbach and Gale summarize their theme straightforwardly:

"Budget deficits appear manageable in the short run, but the nation’s debt-GDP ratio is already high relative to historical norms, and even under optimistic assumptions,  both measures will rise in the future. Sustained deficits and rising federal debt will crowd out  future investment, reduce prospects for economic growth, and impose burdens on future generations. ...
"For example, we find that just to ensure that the debt-GDP ratio in 2047 does not exceed the current level would require a combination of immediate and permanent spending cuts and/or tax increases totaling 3.2 percent of GDP. This represents about a 16 percent cut in non-interest spending or a 19 percent increase in tax revenues relative to current levels. To return the debt-GDP ratio in 2047 to 36 percent, its average in the 50 years preceding the Great Recession in 2007-9, would require immediate and permanent spending cuts or tax increases of 4.6 percent of GDP. The longer policy makers wait to institute fiscal adjustments, the larger those adjustments would have to be to reach a given debt-GDP ratio target in a given year. While the numbers above are projections, not predictions, they nonetheless constitute the fiscal backdrop against which potentially ambitious new tax and spending proposals should be considered." 
Auerbach and Gale go through a variety of ways of projecting future deficits, but the overall message that there is a long-run reason for concern keeps coming through. They also offer a useful reminder that even if the proximate cause of the higher federal debt burden is projections for higher spending on entitlement and especially health programs, there are a number of cases in which addressing a problem by reversing its cause doesn't make sense. As I sometimes say, "When someone is hit by a car, you don't fix their injury by reversing the cause--that is, backing up the car over their body." As Auerbach and Gale write:
"Looking toward policy solutions, it is useful to emphasize that even if the main driver of long-term fiscal imbalances is the growth of entitlement benefits, this does not mean that the only solutions are some combination of benefit cuts now and benefit cuts in the future. For example, when budget surpluses began to emerge in the late 1990s, President Clinton devised a plan to use the funds to “Save Social Security First.” Without judging the merits of that particular plan, our point is that Clinton recognized that Social Security faced long-term shortfalls and, rather than ignoring those shortfalls, aimed to address the problem in a way that went beyond simply cutting benefits. A more general point is that addressing entitlement funding imbalances can be justified precisely because one wants to preserve and enhance the programs, not just because one might want to reduce the size of the programs. Likewise, addressing these imbalances may involve reforming the structure of other spending, raising or restructuring revenues, or creating new programs, as well as simply cutting existing benefits. Nor do spending cuts or tax changes need to be across the board. Policy makers should make choices among programs. For example, more investment in infrastructure or children’s programs could be provided, even in the context of overall spending reductions." 
Elmendorf and Sheiner tackle a different aspect of the same question. They agree that federal deficits are on "an unsustainable path." However, they also note that interest rates are very low, which offers an opportunity for federal borrowing aimed at infrastructure and long-run investments. They write:
"Both market readings and detailed analyses by a number of researchers suggest that Treasury interest rates are likely to remain well below their historical norms for years to come, which represents a sea change for budget policy. We argue that many—though not all—of the factors that may be contributing to the historically low level of interest rates imply that both federal debt and federal investment should be substantially larger than they would be otherwise. We conclude that, although significant policy changes to reduce federal budget deficits ultimately will be needed, they do not have to be implemented right away. Instead, the focus of federal budget policy over the coming decade should be to increase federal investment while enacting changes in federal spending and taxes that will reduce deficits gradually over time."
As a focused argument in economic reasoning, Elmendorf and Sheiner make a strong case. As a matter of political economy, it's trickier, because one can raise at least three questions.

1) If the US political system decides not to focus on deficit-reduction now, is it capable of focusing the additional spending on investment that will raise long-run growth, or will the additional budget flexibility just lead to more transfer payments?

2) If the US political system doesn't focus on deficit reduction in the near term, then in the medium term, roughly a decade from now, it will need to preside over even greater budget changes (as Auerbach and Gale explain) to avoid the outcome that everyone agrees is unsustainable. It would be a hard political U-turn to shift from running larger deficits for investment in the present to taking budgetary steps in the future that will offset that additional borrowing, and more besides.

3) The theoretical case for enacting changes now that will have the effect of holding down the increase in deficits in the long run is strong. But in practical terms, just what these changes should be is less clear. For example, Congress could pass a law that places limits on, say, the level of government health care spending from 2030 to 2040, but there's no reason to believe that those limits will have any actual force when those years arrive. There are a few changes, like phasing in an older retirement age or a change in benefit formulas for Social Security, that might have a better chance of persisting. It seems useful to think more about budgetary policies that could be enacted in the present, but would have most of their effect after a long-term phase-in, and would be relatively resistant to future political tinkering.

It's not exactly news that democratic political systems are continually enticed to focus on the present and push costs into the future in a wide range of contexts: public borrowing, pensions, environment, and others.

Wednesday, August 9, 2017

William Playfair: Inventor of the Bar Graph, Line Graph, and Pie Chart

William Playfair (1759-1823) wasn't sure himself whether he had actually invented the bar graph and the line graph. So after he had published The Commercial and Political Atlas in 1786, he kept an eye out for other examples. After a dozen years of looking, but not finding any predecessors, he declared himself to be the inventor in his 1798 book Lineal Arithmetic, where he wrote (pp. 6-7):
"I confess I was very anxious to find out if I was actually the first who applied the principles of geometry to matters of finance, as it had long before been applied to chronology with great success. I am now satisfied, upon due enquiry, that I was the first; for during 11 years I have never been able to learn that anything of a similar nature had ever before been produced.

"To those who have studied geography, or any branch of mathematics, these charts will be perfectly intelligible. To such, however, as have not, a short explanation may be necessary.

"The advantage proposed by those charts, is not that of giving a more accurate statement than by figures, but it is to give a more simple and permanent idea of the gradual progress and comparative amounts, at different periods, by presenting to the eye a figure, the proportions of which correspond with the amount of the sums intended to be expressed.

"As the eye is the best judge of proportion, being able to estimate it with more quickness and accuracy than any of our other organs, it follows that wherever relative quantities are in question, a gradual increase or decrease of any revenue, receipt or expenditure of money, or other value, is to be stated, this mode of representing it is peculiarly applicable; it gives a simple, accurate, and permanent idea, by giving form and shape to a number of separate ideas, which are otherwise abstract and unconnected. In a numerical table there are as many distinct ideas given, and to be remembered, as there are sums, the order and progression of those sums, therefore, are also to be recollected by another effort of memory, while this mode unites proportion, progression, and quantity all under one simple impression of vision, and consequently one act of memory. "
Cara Giaimo provides an overview of Playfair's story in "The Scottish Scoundrel Who Changed How We See Data: When he wasn’t blackmailing lords and being sued for libel, William Playfair invented the pie chart, the bar graph, and the line graph," appearing in Atlas Obscura (June 28, 2016). Giaimo describes Playfair as a "near-criminal rascal." He apprenticed with James Watt, of steam engine fame, failed at silversmithing, falsely claimed to have invented semaphore telegraphy, tried blackmailing a Scottish lord, sold tracts of American land he didn't actually own to French nobility, and died in poverty and obscurity. For some additional detail on Playfair's colorful life, Giaimo links to a 1997 article, "Who Was Playfair?" by Ian Spence and Howard Wainer.

But for social scientists, what's interesting is that Playfair pushed back against the style of argument of his time--mainly verbal persuasion and perhaps a few tables--and invented these graphs. For example, here's the first bar chart, showing Scotland's trading partners.



Here's an early line graph from Playfair's 1786 atlas, showing England's imports and exports to Denmark & Norway in the 18th century.

And Playfair wasn't done. In his 1801 book The Statistical Breviary, he invented the pie chart. It appears in the middle of a group of other circular charts, and shows Turkish land holdings. Moreover, Playfair hand-colored the "slices" of the pie, thus initiating the idea of color-coding. Here's the overall page, followed by a close-up of the pie chart itself.

The first pie chart, drawn among other circular charts by Playfair in 1801, illustrating the Turkish Empire's land holdings.


I suspect that the line graph, bar graph, and pie chart were--like so many inventions--things that would have been invented during this time frame by someone, and sooner rather than later. But Playfair was first, and deserves the credit.

Homage: I ran across the Giaimo article thanks to Tyler Cowen and the always-intriguing Marginal Revolution blog.

Tuesday, August 8, 2017

Negative Interest Rates: Evidence and Practicalities

Seven central banks around the world have lowered the interest rate that they use to implement monetary policy to a negative rate: along with the very prominent European Central Bank and Bank of Japan, the others include the central banks of Bulgaria, Denmark, Hungary, Sweden, and Switzerland. How is this working out? When (not if) the next recession hits, are negative interest rates a tool that might be used by the US Federal Reserve? The IMF has issued a staff report on "Negative Interest Rate Policies--Initial Experiences and Assessments" (August 2017). In the Summer 2017 issue of the Journal of Economic Perspectives, Kenneth Rogoff explores the arguments for negative interest rates (as opposed to other policy options) and practical methods of moving toward such a policy in "Dealing with Monetary Paralysis at the Zero Bound" (31:3, pp. 46-77).

When (and not if) the next recession comes, monetary policy is likely to face a hard problem. For most of the last few decades, the standard response of central banks during a recession has been to reduce the policy interest rate under their control by 4-5 percentage points. For example, this is how the US Federal Reserve cut its interest rates in response to the recessions that started in 1990, 2001, and 2007.

The problem is that when (not if) the next recession hits, reducing interest rates in this traditional way will not be practical. As you can see, the policy interest rate has crept up to about 1%, but that's not high enough to allow for an interest rate cut of 4-5 percentage points without running into the "zero lower bound."

The problem of the zero lower bound seems unlikely to go away. A nominal interest rate can be divided into the amount that reflects inflation and the remaining "real" interest rate--and both are low. Inflation has been rock-bottom for about 20 years, even as the economy has moved up and down, leading even Fed chair Janet Yellen to propose that economists need to study "What determines inflation?" Real interest rates have been falling, and seem likely to remain low. The Fed is slowly raising its federal funds interest rate, but there is no current prospect that it will move back to the range of, say, 4-5% or more. Thus, when (not if) the next recession hits, it will be impossible to use standard monetary tools to cut that interest rate by the usual 4-5 percentage points.

What macroeconomic policy tools will the government have when (not if) the next recession hits? Fiscal policy tools like cutting taxes or raising spending remain possible, although with the Congressional Budget Office forecasting a future of government debt rising to unsustainable levels during the next few decades, this tool may need to be used with care. Hitting the zero bound is why the Fed and other central banks turned to "quantitative easing," where the central bank buys assets like government or private debt, although this raises obvious problems of what assets to buy, how much of these assets to buy--and the likelihood of political intervention in these decisions.

Thus, some central banks have taken their policy interest rates into negative territory. As the figure shows, Danmarks Nationalbank went negative in 2012, while a number of others did so in 2014 and 2015.

There are a number of concerns with negative interest rates. Will negative interest rates be transmitted through the economy in a similar way to traditional reductions in interest rates? Will negative interest rates weaken the banking sector? What sort of financial innovations might happen as investors seek to avoid being affected by negative rates? The IMF staff report argues that so far, the evidence is reasonably positive:
"There is some evidence of a decline in loan and bond rates following the implementation of  NIRPs [negative interest rate policies]. Banks’ profit margins have remained mostly unchanged. And there have not been significant shifts to physical cash. That said, deeper cuts are likely to entail diminishing returns, as interest rates reach their “true” lower bound (at which point agents shift into cash holdings). And pressure on banks may prove greater; especially in systems with larger shares of indexed loans and where banks compete more directly with bond markets and non-bank credit providers. ... On balance, the limits to NIRPs point to the need to rely more on fiscal policy, structural reforms, and financial sector policies to stimulate aggregate demand, safeguard financial stability, and strengthen monetary policy transmission."
For those who instinctively recoil from the notion of a negative interest rate, it's perhaps useful to remember that negative real rates have occurred quite often in recent decades. Any time someone is locked into paying or receiving a fixed rate of interest, and then sees inflation move up, a negative real interest rate results. Thus, back in the 1970s and early 1980s, lots of Americans were receiving negative real interest rates if they had money in bank accounts or Treasury bonds, and were paying negative real interest rates if they already had a fixed-rate mortgage. In short, the innovation here isn't that real inflation-adjusted interest rates can be negative, but rather that a nominal interest rate is negative.
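A minimal numerical sketch may help keep the real/nominal distinction straight; the rates below are illustrative numbers of my own, not drawn from the sources above.

    # Approximate Fisher relation: real rate = nominal rate - inflation.
    # Illustrative numbers only.
    def real_rate(nominal, inflation):
        return nominal - inflation

    # 1970s-style case: a positive nominal rate, but higher inflation,
    # so savers earned a negative real return.
    print(real_rate(nominal=5.5, inflation=9.0))   # -3.5

    # Recent negative-rate policies: the nominal rate itself is below zero,
    # which is the real novelty.
    print(real_rate(nominal=-0.5, inflation=0.5))  # -1.0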

It's also worth remembering that this policy interest rate is related to the everyday interest rates that people and firms pay and receive, but it's not the same. The interest rates for borrowers, for example, are also affected by underlying factors like risk and collateral. In short, negative policy interest rates do mean downward pressure on interest rates, but they don't mean that the credit card company is going to pay you if you charge more on your credit card, or that negative interest will start eating away your home mortgage.

Thus, the existing evidence on negative interest rates to this point shows that holding the policy interest rate a few tenths of a percentage point below zero is possible, and can be sustained for several years. It doesn't show in a direct way how banks, households, and the economy would react if negative nominal interest rates became larger and more widespread through the economy.

An obvious issue with negative interest rates, and a focus of the IMF report, is what happens if people and firms decide to hold massive amounts of cash, which pays a zero interest rate, to avoid the negative rate. In his paper in the Summer 2017 issue of JEP, Kenneth Rogoff makes the case for the practicality of moving gradually to a dual-currency system, where electronic money is the "real" currency and paper money trades with electronic money at a certain "exchange rate." Rogoff writes:
"The idea of one country having two different currencies with an exchange rate between them may seem implausible, but the basics are not difficult to explain. The first step in setting up a dual currency system would be for the government to declare that the “real” currency is electronic bank reserves and that all government contracts, taxes, and payments are to be denominated in electronic dollars. As we have already noted, paying negative interest on electronic money or bank reserves is a nonissue. Say then that the government wants to set a policy interest rate of negative 3 percent to combat a financial crisis. To stop a run into paper currency, it would simultaneously announce that the exchange rate on paper currency in terms of electronic bank reserves would depreciate at 3 percent per year. For example, after a year, the central bank would give only .97 electronic dollars for one paper dollar; after two years, it would give back only .94. ...

"In most advanced countries, private agents are free to contract on whatever indexation scheme they prefer; this is not a condition that can be imposed by fiat. If the private sector does not convert to electronic currency, the zero bound would re-emerge since it still exists for paper currency. Finally, one must consider that after a period of negative interest rates, paper and electronic currency would no longer trade at par, which would be an inconvenience in normal times. Restoring par would require a period of paying positive interest rates on electronic reserves, which might potentially interfere with other monetary goals."
Rogoff recognizes that negative interest rates raise a number of practical and economic problems, including issues of regulatory, accounting, and tax policy. But from his perspective, negative interest rates are the best of the alternatives when a central bank faces the problem of a zero lower bound on interest rates. For example, quantitative easing only seems to have mild effects, while exposing the central bank to political pressures about who gets the loans from the central bank. Resetting the central bank inflation target from 2% to 4% might help push up nominal interest rates, and thus allow those rates to be cut in a future recession while remaining above zero, but given that central banks have spent decades establishing their goal of 2% inflation in the minds and expectations of financial markets, such a shift isn't to be contemplated lightly. Looking at these and other policy options--like all countries simultaneously trying to weaken their currencies in order to boost exports--Rogoff argues that negative interest rates are the simplest and cleanest option, with the best chance of working well.

From my own point of view, negative policy interest rates are one of those subjects that literally never crossed my mind up until about 2009. When the central banks of smaller economies like Denmark and Switzerland first used negative policy interest rates, the main goal seemed to be to assure that the exchange rates of their currencies didn't soar. I wasn't quite ready to draw lessons for the US Federal Reserve from the Swiss National Bank or Danmarks Nationalbank. But when the Bank of Japan and the European Central Bank started employing mildly negative interest rates, and it seemed to be working without major glitches, it became clear that more serious attention needed to be paid. I remain dubious about interest rates in the range of negative 3-5%, but my reasons are less about technical economics and more about potential counterreactions.

Back in the 1970s, people put up with the idea that the inflation rate was higher than the interest on their bank account or on Treasury bonds, but the nominal interest rates they received were still positive. Maybe the public in other countries would accept a situation in which their bank accounts were eroded by 3-5% per year by negative interest rates, but I have a hard time imagining that this would fly in a US political context. In an economy where negative interest rates are common, I would also expect large financial institutions like pension funds, insurance companies, and banks to make strenuous efforts to sidestep their effects. I've reached the point where I'm willing to consider negative interest rates as a serious possibility, but I suspect that the practical problems and issues of substantially negative interest rates are at this point underestimated.

Saturday, August 5, 2017

Fighting Colony Collapse Disorder: How Beekeepers Make More Bees

Bees and pollination play an important supporting role in economic discussions of how and when markets will work well.

In a 1952 article in the Economic Journal ("External Economies and Diseconomies in a Competitive Situation"), James Meade suggested some problems that could arise between an apple farmer and a beekeeper. In Meade's example, if an apple farmer thought about expanding the orchard, part of the economic benefit would be that local bees could make more honey. However, the apple farmer would not benefit from the gains in honey-making, and thus would have a reduced incentive to expand the orchard. Conversely, if a beekeeper and honey-producer considered expanding the number of bees, the apple farmer would also benefit. However, because the beekeeper would not capture the gains from increased apple production, the beekeeper would have a reduced incentive to increase the number of bees.
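A stylized numerical version of Meade's logic may make the incentive problem concrete; all the numbers here are hypothetical, chosen only to illustrate why the orchard expansion doesn't happen:

    # Meade's orchard-and-bees externality, with made-up numbers.
    # The farmer compares private benefit to cost; society also counts the honey.
    cost_of_expansion = 100  # cost to the apple farmer of planting more trees
    apple_revenue_gain = 90  # extra apple revenue the farmer captures
    honey_spillover = 25     # extra honey value captured by the beekeeper

    private_net = apple_revenue_gain - cost_of_expansion
    social_net = apple_revenue_gain + honey_spillover - cost_of_expansion

    print(f"Farmer's private net benefit: {private_net}")  # -10: farmer declines
    print(f"Social net benefit: {social_net}")              # +15: society would gain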

But Meade's example was hypothetical. In "The Fable of the Bees: An Economic Investigation" (Journal of Law and Economics, April 1973), Steven Cheung considered actual contracts and pricing between beekeepers and apple producers in Washington state, and reported that in the real world, they were coordinating their efforts just fine.

I spelled out these arguments three years ago in "Do Markets Work for Bees?" (July 10, 2014). Bees and markets were in the news then, because of a fear of Colony Collapse Disorder. Here's the cover of TIME magazine from August 19, 2013. By 2014, President Obama had appointed a Pollinator Health Task Force to create a National Pollinator Health Strategy, with representation from 17 different government agencies.


So here we are, three years later. How have markets adapted to the danger of "a world without bees," as the TIME magazine cover put it? Shawn Regan tells the story in "How Capitalism Saved the Bees: A decade after colony collapse disorder began, pollination entrepreneurs have staved off the beepocalypse," in the August/September 2017 issue of Reason magazine.

The short take is that Colony Collapse Disorder is real, although its causes remain a source of some dispute. The Environmental Protection Agency lists the possible causes like this:
  • Increased losses due to the invasive varroa mite (a pest of honey bees).
  • New or emerging diseases such as Israeli Acute Paralysis virus and the gut parasite Nosema.
  • Pesticide poisoning through exposure to pesticides applied to crops or for in-hive insect or mite control.
  • Stress bees experience due to management practices such as transportation to multiple locations across the country for providing pollination services. 
  • Changes to the habitat where bees forage.
  • Inadequate forage/poor nutrition.
  • Potential immune-suppressing stress on bees caused by one or a combination of factors identified above.

As Regan reports: 
"And beekeepers are still reporting above-average bee deaths. In 2016, U.S. beekeepers lost 44 percent of their colonies over the previous year, the second-highest annual loss reported in the past decade. But here's what you might not have heard. Despite the increased mortality rates, there has been no downward trend in the total number of honeybee colonies in the United States over the past 10 years. Indeed, there are more honeybee colonies in the country today than when colony collapse disorder began."
The reason is straightforward. Beekeepers have had to deal with episodes of major colony losses on average every decade or so. They fight back against the bee diseases as best they can. And they create new hives. Here's Regan:
"There have been 23 episodes of major colony losses since the late 1860s. Two of the most recent bee killers are Varroa mites and tracheal mites, two parasites that first appeared in North America in the 1980s. ... Beekeepers have developed a variety of strategies to combat these afflictions, including the use of miticides, fungicides, and other treatments. While colony collapse disorder presents new challenges and higher mortality rates, the industry has found ways to adapt.

"Rebuilding lost colonies is a routine part of modern beekeeping. The most common method involves splitting a healthy colony into multiple hives—a process that beekeepers call “making increase.” The new hives, known as “nucs” or “splits,” require a new fertilized queen bee, which can be purchased from a com-mercial queen breeder. These breeders produce hundreds of thousands of queen bees each year. A new fertilized queen typically costs about $19 and can be shipped to beekeepers overnight. (One breeder's online ad touts its queens as “very prolific, known for their rapid spring buildup, and…extremely gentle.”) As an alternative to purchasing queens, beekeepers can produce their own queens by feeding royal jelly to larvae.

"Beekeepers regularly split their hives prior to the start of pollination season or later in the summer in anticipation of winter losses. The new hives quickly produce a new brood, which in about six weeks can be strong enough to pollinate crops. Often, beekeepers can replace more bees by splitting hives than they lose over the winter, resulting in no net loss to their colonies.

"Another way to rebuild a colony is to purchase “packaged bees” to replace an empty hive. (A 3-pound package typically costs about $90 and includes roughly 12,000 worker bees and a fertilized queen.) A third method is to replace an older queen with a new one. A queen bee is a productive egg-layer for one or two seasons; after that, replacing her will reinvigorate the health of the hive. If the new queen is accepted—as she often is when an experienced beekeeper installs her—the hive can be productive right away.

"Replacing lost colonies by splitting hives is surprisingly straightforward and can be accomplished in about 20 minutes. New queens and packaged bees are also inexpensive. If a commercial beekeeper loses 100 of his hives, replacing them would come at a cost—the price of each new queen, plus the time required to split the existing hives—but it is unlikely to spell disaster. And because new hives can be up and running in short order, there is little or no lost time for pollination or honey production. As long as some healthy hives remain that can be used for splitting, beekeepers can quickly and easily rebuild lost colonies." 
Of course, there are still legitimate concerns about the health of wild bees, and their role in natural ecosystems.  But it seems fairly clear that the buzz over how colony collapse disorder threatened an imminent bee extinction--"a world without bees" and "beemaggedon" and all the rest--was grossly exaggerated. As the EPA reports:
"Once thought to pose a major long term threat to bees, reported cases of CCD have declined substantially over the last five years. The number of hives that do not survive over the winter months – the overall indicator for bee health – has maintained an average of about 28.7 percent since 2006-2007 but dropped to 23.1 percent for the 2014-2015 winter. While winter losses remain somewhat high, the number of those losses attributed to CCD has dropped from roughly 60 percent of total hives lost in 2008 to 31.1 percent in 2013; in initial reports for 2014-2015 losses, CCD is not mentioned."

For more detail on economic adaptations to colony collapse disorder, and how actions by beekeepers have kept any economic losses very small, a useful starting point is the January 2016 working paper, "Colony Collapse and the Economic Consequences of Bee Disease: Adaptation to Environmental Change," by Randal R. Rucker, Walter N. Thurman, and Michael Burgett.