Do customs duties compound non-tariff trade costs? Not in the US

For mathematical convenience, economists doing quantitative work often assume iceberg trade costs. When tackling questions of trade policy, analysts must also distinguish non-policy trade costs from import taxes. For the same reason that multiplicative iceberg costs are tractable, it is easiest in these exercises to model total trade costs as the product of non-policy trade costs and ad valorem tariffs. For example, when studying NAFTA, Caliendo and Parro (2015) use the following formulation:

Caliendo and Parro (REStud, 2015), equation (3)
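Schematically, and paraphrasing their notation rather than reproducing equation (3) verbatim, sector-$j$ trade costs between importer $n$ and exporter $i$ are the product of an iceberg component and one plus the ad valorem tariff rate:

\[
\kappa_{ni}^{j} = \tilde{\tau}_{ni}^{j}\, d_{ni}^{j}, \qquad \tilde{\tau}_{ni}^{j} = 1 + \tau_{ni}^{j},
\]

where $d_{ni}^{j} \geq 1$ is the non-policy (iceberg) trade cost and $\tau_{ni}^{j}$ is the tariff rate, so the tariff is levied on the full freight-inclusive cost.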

This assumption’s modeling convenience is obvious, but do tariff duties actually compound other trade costs? The answer depends on the importing country. Here’s Amy Porges, a trade attorney, answering the query on Quora:

Tariff rates in most countries are levied on the basis of CIF value (and then the CIF value + duties is used as the basis for border adjustments for VAT or indirect taxes). CIF value, as Mik Neville explains, includes freight cost. As a result, a 5% tariff rate results in a higher total amount of tariffs on goods that have higher freight costs (e.g. are shipped from more distant countries).

The US is one of the few countries where tariffs are applied on the basis of FOB value. Why? Article I, section 9 of the US Constitution provides that “No Preference shall be given by any Regulation of Commerce or Revenue to the Ports of one State over those of another”, and this has been interpreted as requiring that the net tariff must be the same at every port. If a widget is loaded in Hamburg and shipped to NY, its CIF price will be different than if it were shipped to New Orleans or San Francisco. However the FOB price of the widget shipped from Hamburg will be the same regardless of destination.
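To make the difference concrete, here is a toy example with made-up numbers: suppose a widget's FOB value is \$100 and freight plus insurance to the US port costs \$10, so its CIF value is \$110. Under a 5% tariff,

\[
\text{duty}_{\text{CIF}} = 0.05 \times \$110 = \$5.50, \qquad \text{duty}_{\text{FOB}} = 0.05 \times \$100 = \$5.00 .
\]

Under CIF valuation the duty rises with shipping costs; under FOB valuation it does not.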

Here’s a similar explanation from Neville Peterson LLP.

On page 460 of The Law and Policy of the World Trade Organization, we learn that Canada and Japan also take this approach.

Pursuant to Article 8.2, each Member is free either to include or to exclude from the customs value of imported goods: (1) the cost of transport to the port or place of importation; (2) loading, unloading, and handling charges associated with the transport to the port or place of importation; and (3) the cost of insurance. Note in this respect that most Members take the CIF price as the basis for determining the customs value, while Members such as the United States, Japan and Canada take the (lower) FOB price.

While multiplicative separability is a convenient modeling technique, in practice ad valorem tariff rates don’t multiply other trade costs for two of the three NAFTA members.

How Many Jobs Can be Done at Home?

Brent Neiman and I wrote a paper that tackles a simple question: “How Many Jobs Can be Done at Home?” The latest draft (April 16) is here. The full replication package is available on GitHub.

We estimate that 37% of US jobs, accounting for 46% of wages, can be performed entirely at home. Applying our occupational classifications to 85 other countries reveals that lower-income economies have a lower share of jobs that can be done at home.


This simple question is suddenly very important during this pandemic. See the Economist and Wall Street Journal for their reactions. I did a video interview with CEPR about our paper, which includes some thoughts about offshoring and the future of telecommuting. My comments to Vice appeared in a story titled “You’re Not Going Back to Normal Office Life for a Long, Long Time”.

Shift-share designs before Bartik (1991)

The phrase “Bartik (1991)” has become synonymous with the shift-share research designs employed by many economists to investigate a wide range of economic outcomes. As Baum-Snow and Ferreira (2015) describe, “one of the commonest uses of IV estimation in the urban and regional economics literature is to isolate sources of exogenous variation in local labor demand. The commonest instruments for doing so are attributed to Bartik (1991) and Blanchard and Katz (1992).”

The recent literature on the shift-share research design usually starts with Tim Bartik’s 1991 book, Who Benefits from State and Local Economic Development Policies?. Excluding citations of Roy (1951) and Jones (1971), Bartik (1991) is the oldest work cited in Adao, Kolesar, Morales (QJE 2019). The first sentence of Borusyak, Hull, and Jaravel’s abstract says “Many studies use shift-share (or “Bartik”) instruments, which average a set of shocks with exposure share weights.”

But shift-share analysis is much older. A quick search on Google Books turns up a bunch of titles from the 1970s and 1980s like “The Shift-share Technique of Economic Analysis: An Annotated Bibliography” and “Dynamic Shift-Share Analysis”.

Why the focus on Bartik (1991)? Goldsmith-Pinkham, Sorkin, and Swift, whose paper’s title is “Bartik Instruments: What, When, Why and How”, provide some explanation:

The intellectual history of the Bartik instrument is complicated. The earliest use of a shift-share type decomposition we have found is Perloff (1957, Table 6), which shows that industrial structure predicts the level of income. Freeman (1980) is one of the earliest uses of a shift-share decomposition interpreted as an instrument: it uses the change in industry composition (rather than differential growth rates of industries) as an instrument for labor demand. What is distinctive about Bartik (1991) is that the book not only treats it as an instrument, but also, in the appendix, explicitly discusses the logic in terms of the national component of the growth rates.

I wonder what Tim Bartik would make of that last sentence. His 1991 book is freely available as a PDF from the Upjohn Institute. Here is his description of the instrumental variable in Appendix 4.2:

In this book, only one type of labor demand shifter is used to form instrumental variables [2]: the share effect from a shift-share analysis of each metropolitan area’s year-to-year employment change. [3] A shift-share analysis decomposes MSA growth into three components: a national growth component, which calculates what growth would have occurred if all industries in the MSA had grown at the all-industry national average; a share component, which calculates what extra growth would have occurred if each industry in the MSA had grown at that industry’s national average; and a shift component, which calculates the extra growth that occurs because industries grow at different rates locally than they do nationally…

The instrumental variables defined by equations (17) and (18) will differ across MSAs and time due to differences in the national economic performance during the time period of the export industries in which that MSA specializes. The national growth of an industry is a rough proxy for the change in national demand for its products. Thus, these instruments measure changes in national demand for the MSA’s export industries…
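In notation (a schematic paraphrase, not Bartik's equations (17) and (18)): let $z_{lk}$ be industry $k$'s share of employment in MSA $l$, $g_{lk}$ the local growth rate of industry $k$, $g_{k}$ its national growth rate, and $g$ the all-industry national growth rate. The decomposition of MSA employment growth is

\[
\frac{\Delta E_{l}}{E_{l}}
= \underbrace{g}_{\text{national}}
+ \underbrace{\sum_{k} z_{lk}\,(g_{k}-g)}_{\text{share}}
+ \underbrace{\sum_{k} z_{lk}\,(g_{lk}-g_{k})}_{\text{shift}} ,
\]

and it is the share component (equivalently, up to a constant, the predicted growth $\sum_{k} z_{lk}\, g_{k}$) that serves as the instrument: it varies across MSAs only through local industry composition interacted with national industry performance.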

Back in Chapter 7, Bartik writes:

The Bradbury, Downs, and Small approach to measuring demand-induced growth is similar to the approach used in this book. Specifically, they used the growth in demand for each metropolitan area’s export industries to predict overall growth for the metropolitan area. That is, they used the share component of a shift-share analysis to predict overall growth.

Hence, endnote 3 of Appendix 4.2 on page 282:

This type of demand shock instrument was previously used in the Bradbury, Downs and Small (1982) book; I discovered their use of this instrument after I had already come up with my approach. Thus, I can only claim the originality of ignorance for my use of this type of instrument.

Tim once tweeted:

Researchers interested in “Bartik instrument” (which is not a name I coined!) might want to look at appendix 4.2, which explains WHY this is a good instrument for local labor demand. I sometimes sense that people cite my book’s instrument without having read this appendix.

Update (10am CT): In response to my query, Tim has posted a tweetstorm describing Bradbury, Downs, and Small (1982).

The rapid rise of spatial economics among JMCs

Two years ago, my list of trade candidates also included a dozen candidates in spatial economics. Last year I listed 20 candidates. There are 45 spatial-economics JMCs in this year’s list. That looks like a rapid rise.

Of course, measurement problems abound. My view of “spatial economics” may have broadened during the last two years, in which case the listings would tell you more about me than about the candidates. That would be hard to quantify. But, to focus on one label within the broader spatial economics nexus, I’m pretty sure that I’m seeing more candidates explicitly list “urban economics” as one of their fields than in years prior.

If I’m right that the supply of spatial economists is rising, then one immediately wonders if the demand side will keep pace. I haven’t looked at JOE postings, but I doubt that ads mentioning “urban economics” are growing at the same rate as candidates listing it as a field.

Last month, in response to a Beatrice Cherrier query about why urban economics’ “boundaries & identity are so difficult to pin down,” Jed Kolko noted that “urban economists typically align strongly to another field — trade, labor, PF, finance (esp the real estate types), macro.” That fluidity has advantages and disadvantages. It certainly makes it challenging to compile a list of relevant job-market candidates. But my very short time series of arbitrarily collated candidates suggests growth in the supply of young spatial economists.

Spatial economics JMPs (2019-2020)

Here’s a list of job-market candidates whose job-market papers fall within spatial economics, as defined by me when glancing at a webpage for a few seconds. Illinois has six candidates! I’m sure I missed folks, so please add them in the comments.

The annual list of trade candidates is a distinct post.

Of the 45 candidates I’ve initially listed, 18 used Google Sites, 12 registered a custom domain, 3 used GitHub, and 12 used school-provided webspace.

Here’s a cloud of the words that appear in these papers’ titles:

Trade JMPs (2019-2020)

It’s November again. Time flies, and there’s a new cohort of job-market candidates. Time really flies: I started this series a decade ago! Many members of that November 2010 cohort now have tenure or will soon.

As usual, I’ve gathered a list of trade-related job-market papers. There is no clear market leader: the most candidates from one school by my count is three (Berkeley, Maryland, UCLA). If I’ve missed someone, please contribute to the list in the comments.

A separate post lists candidates in spatial economics, broadly defined.

Of the 31 candidates I’ve initially listed, 14 registered a custom domain, 9 used Google Sites, 2 used GitHub, and only 6 used school-provided webspace.

Here’s a cloud of the words that appear in these papers’ titles:

Why your research project needs build automation

Software build tools automate compiling source code into executable binaries. (For example, if you’ve ever compiled and installed a Linux package from source, you’ve almost certainly used Make.)

Like software packages, research projects are large collections of code that are executed in sequence to produce output. Your research code has a first step (download raw data) and a last step (generate paper PDF). Its input-output structure is a directed graph (dependency graph).

The simplest build approach for a Stata user is a “master” do file. If a project involves A through Z, this master file executes A, B, …, Y, and Z in order. But the “run everything” approach is inefficient: if you edit Y, you only need to run Y and Z; you don’t need to run A through X again. Software build tools automate these processes for you. They can be applied to all of your research code.

Build tools use a dependency graph and information about file changes (e.g., timestamps) to produce output using (all and only) necessary steps. Build automation is valuable for any non-trivial research project. Build automation can be particularly valuable for big data. If you need to process data for 100 cities, you shouldn’t manually track which cities are up-to-date and which need to run the latest code. Define the dependencies and let the build tool track everything.
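For concreteness, here is a hypothetical sketch in Make (whose syntax is described below); the city names, file layout, and clean_city.do script are invented for illustration. A single pattern rule covers every city, and Make reruns only the cities whose inputs have changed:

```make
# Hypothetical list of cities; in practice this could be dozens or hundreds.
CITIES := chicago boston denver
CLEANED := $(patsubst %,output/%_clean.dta,$(CITIES))

all: $(CLEANED)

# One pattern rule covers every city: output/<city>_clean.dta depends on
# raw/<city>.csv and on the cleaning script. Editing clean_city.do or a raw
# file triggers a rebuild of only the affected cities.
output/%_clean.dta: raw/%.csv code/clean_city.do
	stata-mp -b do code/clean_city.do $*
```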

Make is an old, widely used build tool. It should be available on every Linux box by default (e.g., it’s available inside the Census RDCs). For Mac users, Make is included in OS X’s developer tools. I use Make. There are other build tools. Gentzkow and Shapiro use SCons (a Python-based tool). If all of your code is Stata, you could try the project package written by Robert Picard, though I haven’t tried it myself.

A Makefile consists of a dependency graph and a recipe for each graph node. Define dependencies by writing a target before the colon and that target’s prerequisites after the colon. The next line gives the recipe that translates those inputs into output. Make can execute any recipe you can write on the command line.
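As a minimal sketch (the file names and the batch-mode Stata commands are hypothetical), a two-step pipeline from raw data to a figure might look like this:

```make
# Target: prerequisites. The indented line is the recipe; note that Make
# requires recipe lines to begin with a tab character.
figures/results.pdf: code/make_figure.do data/cleaned.dta
	stata-mp -b do code/make_figure.do

# The cleaned data depend on the cleaning script and the raw download, so
# editing clean.do causes Make to rebuild cleaned.dta and then results.pdf.
data/cleaned.dta: code/clean.do data/raw.csv
	stata-mp -b do code/clean.do
```

Running make (or make figures/results.pdf) then executes only the out-of-date steps, in dependency order.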

I have written much more about Make and Makefiles in Section A.3 of my project template. Here are four introductions to Make, listed in the order that I suggest reading them: