Why I encourage econ PhD students to learn Julia

17 September 2018

Julia is a scientific computing language that an increasing number of economists are adopting (e.g., Tom Sargent, the NY FRB). It is a close substitute for Matlab, and the cost of switching is modest: once you change array references from parentheses to square brackets (e.g., “A(2, 2)” in Matlab is “A[2, 2]” in Julia and most other languages), Julia syntax is quite similar to Matlab syntax, though there are important differences. Julia also competes with Python, R, and C++, among other languages, as a computational tool.

I am now encouraging students to try Julia, which recently released version 1.0. I first installed Julia in the spring of 2016, when it was version 0.4. Julia’s advantages are that it is modern, elegant, open source, and often faster than Matlab. Its downside is that it is a young language, so its syntax is evolving, its user community is smaller, and some features are still in development.

A proper computer scientist would discuss Julia’s computational advantages in terms of concepts like multiple dispatch and variable typing. For an unsophisticated economist like me, the proof of the pudding is in the eating. My story is quite similar to that of Bradley Setzler, whose structural model took more than 24 hours to solve in Python but only 15 minutes in Julia. After hearing two of my computationally savvy Booth colleagues praise Julia, I tried it out when doing the numerical simulations in our “A Spatial Knowledge Economy” paper. I took my Matlab code, made a few modest syntax changes, and found that my Julia code solved for equilibrium in one-sixth of the time that my Matlab code took. My code was likely inefficient in both cases, but that speed improvement persuaded me to use Julia for that project.

For a proper comparison of computational performance, you should look at papers by S. Boragan Aruoba and Jesus Fernandez-Villaverde and by Jon Danielsson and Jia Rong Fan. Aruoba and Fernandez-Villaverde have solved the stochastic neoclassical growth model in a dozen languages. Their 2018 update says “C++ is the fastest alternative, Julia offers a great balance of speed and ease of use, and Python is too slow.” Danielsson and Fan compared Matlab, R, Julia, and Python when implementing financial risk forecasting methods. While you should read their rich comparison, a brief summary of their assessment is that Julia excels in language features and speed but has considerable room for improvement in terms of data handling and libraries.

While I like Julia a lot, it is a young language, which comes at a cost. In March, I had to painfully convert a couple of research projects written in Julia 0.5 to version 0.6 after an upgrade of GitHub’s security standards meant that Julia 0.5 users could no longer easily install packages. My computations were fine, of course, but a replication package that required artisanally installed packages in a no-longer-supported environment wouldn’t have been very helpful to everyone else. I hope that Julia’s 1.0 release means that those who adopt the language now are less likely to face such growing pains, though it may be a couple of months before most packages support 1.0.

At this point, you probably should not use Julia for data cleaning. To be brief, Danielsson and Fan say that Julia is the worst of the four languages they considered for data handling. In our “How Segregated is Urban Consumption?” code, we did our data cleaning in Stata and our computation in Julia. Similarly, Michael Stepner’s health inequality code relies on Julia rather than Stata for a computation-intensive step, and Tom Wollmann split his JMP code between Stata and Julia. For now, I think most users would tell you to use Julia for computation, not data prep. (Caveat: I haven’t tried the JuliaDB package yet.)

If you want to get started in Julia, I found the “Lectures in Quantitative Economics” introduction to Julia by Tom Sargent and John Stachurski very helpful. Also look at Bradley Setzler’s Julia economics tutorials.

Trade economists might be interested in the Julia package FixedEffectModels.jl. It claims to be an order of magnitude faster than Stata when estimating two-way high-dimensional fixed-effects models, the bread and butter of gravity regressions. I plan to ask PhD students to explore these issues this fall and will report back after learning more.
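I haven’t benchmarked FixedEffectModels.jl myself, but the core trick behind fast high-dimensional fixed-effects estimation is absorbing the fixed effects rather than estimating thousands of dummies. Here is a minimal sketch of that idea in Python on simulated data (all numbers and names below are mine for illustration, not anything from the package):

```python
import numpy as np

# Sketch: estimate a "gravity" slope with exporter and importer fixed effects
# absorbed by alternating demeaning. All data are simulated for illustration.
rng = np.random.default_rng(42)
n = 30                                          # number of countries
exp_id, imp_id = [a.ravel() for a in
                  np.meshgrid(np.arange(n), np.arange(n), indexing="ij")]

fe_exp = rng.normal(size=n)                     # exporter fixed effects
fe_imp = rng.normal(size=n)                     # importer fixed effects
log_dist = rng.uniform(0.0, 3.0, size=n * n)    # stand-in for log bilateral distance
beta_true = -1.0
y = fe_exp[exp_id] + fe_imp[imp_id] + beta_true * log_dist \
    + 0.1 * rng.normal(size=n * n)

def absorb(v, ids1, ids2, iters=100):
    """Alternately subtract group means until both fixed effects are swept out."""
    v = v.copy()
    for _ in range(iters):
        v -= (np.bincount(ids1, v) / np.bincount(ids1))[ids1]
        v -= (np.bincount(ids2, v) / np.bincount(ids2))[ids2]
    return v

y_t = absorb(y, exp_id, imp_id)
x_t = absorb(log_dist, exp_id, imp_id)
beta_hat = (x_t @ y_t) / (x_t @ x_t)            # Frisch-Waugh-Lovell slope
```

On a balanced country-pair grid the demeaning converges almost immediately; real implementations use cleverer iteration schemes and also handle unbalanced panels, clustering, and instruments.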


The top five journals in economics are accessible, if authors share

16 August 2018

I tweeted this, but a blog post seems more appropriate (screenshots of URLs are unhelpful, tweets aren’t indexed by Google, etc).

The top five journals in economics permit authors to either post the published PDF on their personal website or provide a free-access link to the published article.

  • American Economic Review: “Authors are permitted to post published versions of their articles on their personal websites.”
  • Econometrica: “Authors receive a pdf copy of the published article which they can make available for non-commercial use.”
  • Journal of Political Economy: “Authors may also post their article in its published form on their personal or departmental web.”
  • Quarterly Journal of Economics and Review of Economic Studies: “Upon publication, the corresponding author is sent a free-access link to the online version of their paper. This link may be shared with co-authors and interested colleagues, and posted on the author’s personal or institutional webpage.”

Thus, articles in the top five economics journals are accessible to the general public at no fee, provided that the authors of those articles make the effort to share them. Other journals may not be so accessible. A lot of field journals are published by Elsevier, which has less generous sharing policies.

Is it easier to liberalize agriculture via bilateral or multilateral deals?

23 June 2018

Tyler Cowen’s latest Bloomberg column is about bilateral trade deals. He’s more optimistic than most:

The smartest case for trade bilateralism is that trade in many goods is already fairly free, but some egregious examples of tariffs and trade barriers remain. Look at agriculture, European restrictions on hormones in beef, and the Chinese unwillingness to allow in foreign companies. Targeted strategic bargaining, backed by concrete threats emanating from a relatively powerful nation — in this case the U.S. — could demand removal of those restrictions. Furthermore, the negotiating process would be more directly transactional and less cartelized and bureaucratic.

With regard to liberalizing agriculture, I think the conventional wisdom is that multilateral negotiations are superior. Here’s Jagdish Bhagwati talking to the NY Times back in 2004:

The only way concessions can be made on agricultural subsidies is if you go multilateral. Think of production subsidies, which the United States has: they can’t be cut for just one trading partner. When it comes to export subsidies–which are the big issue for the Europeans and a little bit for us too–we will cut export subsidies say, for Brazil, in a bilateral negotiation, but the Europeans won’t. Then the Europeans will have an advantage. My point is that if subsidies are the name of the game in agriculture, if the foreign countries that export want to remove subsidies, they have to go multilateral.

 

On “hat algebra”

7 May 2018

This post is about “hat algebra” in international trade theory. Non-economists won’t find it interesting.

What is “hat algebra”?

Alan Deardorff’s Glossary of International Economics defines “hat algebra” as

The Jones (1965) technique for comparative static analysis in trade models. Totally differentiating a model in logarithms of variables yields a linear system relating small proportional changes (denoted by carats (^), or “hats”) via elasticities and shares. (As published it used *, not ^, due to typographical constraints.)

The Jones and Neary (1980) handbook chapter calls it a circumflex, not a hat, when explaining its use in proving the Stolper-Samuelson theorem:

a given proportional change in commodity prices gives rise to a greater proportional change in factor prices, such that one factor price unambiguously rises and the other falls relative to both commodity prices… the changes in the unit cost and hence in the price of each commodity must be a weighted average of the changes in the two factor prices (where the weights are the distributive shares of the two factors in the sector concerned and a circumflex denotes a proportional change)… Since each commodity price change is bounded by the changes in both factor prices, the Stolper-Samuelson theorem follows immediately.

I’m not sure when “hat algebra” entered the lexicon, but by 1983 Brecher and Feenstra were writing “Eq. (20) may be obtained directly from the familiar ‘hat’ algebra of Jones (1965)”.

What is “exact hat algebra”?

Nowadays, trade economists utter the phrase “exact hat algebra” a lot. What do they mean? Dekle, Eaton, and Kortum (2008) describe a procedure:

Rather than estimating such a model in terms of levels, we specify the model in terms of changes from the current equilibrium. This approach allows us to calibrate the model from existing data on production and trade shares. We thereby finesse having to assemble proxies for bilateral resistance (for example, distance, common language, etc.) or inferring parameters of technology.

Here’s a simple example of the approach. Let’s do a trade-cost counterfactual in an Armington model with labor endowment L, productivity shifter \chi, trade costs \tau, and trade elasticity \epsilon. The endogenous variables are wage w, income Y = w \cdot L, and trade flows X_{ij}. The two relevant equations are the market-clearing condition and the gravity equation.

Suppose trade costs change from \tau_{ij} to \tau'_{ij}, a shock \hat{\tau}_{ij} \equiv \frac{\tau'_{ij}}{\tau_{ij}}. By assumption, \hat{\chi}=\hat{L}=1. We’ll solve for the endogenous variables \hat{\lambda}_{ij}, \hat{X}_{ij}, and \hat{w}_{i}. Define expenditure shares \lambda_{ij}\equiv\frac{X_{ij}}{X_{j}} and “sales shares” \gamma_{ij}\equiv\frac{X_{ij}}{Y_{i}}. Algebraic manipulations deliver a “hat form” of the market-clearing condition.
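Dividing the counterfactual market-clearing condition Y'_i = \sum_j X'_{ij} by its initial counterpart, a sketch of what equation (1.1) must look like, given the definitions above, is:

```latex
\hat{Y}_{i} = \sum_{j} \frac{X_{ij}}{Y_{i}}\,\hat{X}_{ij} = \sum_{j} \gamma_{ij}\,\hat{X}_{ij} \qquad (1.1)
```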

Similarly, let’s obtain a “hat form” of the gravity equation.
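With \hat{\chi}=1, a standard derivation from the gravity equation yields a sketch of equation (1.2):

```latex
\hat{\lambda}_{ij} = \frac{(\hat{\tau}_{ij}\,\hat{w}_{i})^{-\epsilon}}{\sum_{k} \lambda_{kj}\,(\hat{\tau}_{kj}\,\hat{w}_{k})^{-\epsilon}},
\qquad \hat{X}_{ij} = \hat{\lambda}_{ij}\,\hat{X}_{j} \qquad (1.2)
```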

Combining equations (1.1) and (1.2) under the assumptions that \hat{Y}_{i}=\hat{X}_{i} and \hat{\chi}=\hat{L}=1, we obtain a system of equations characterizing an equilibrium \hat{w}_i as a function of trade-cost shocks \hat{\tau}_{ij}, initial equilibrium shares \lambda_{ij}, and \gamma_{ij}, and the trade elasticity \epsilon:
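In the Armington setting sketched above, that system takes the form (my reconstruction, using \hat{Y}_{i}=\hat{X}_{i}=\hat{w}_{i}, which follows from Y_{i}=w_{i}L_{i} and \hat{L}=1):

```latex
\hat{w}_{i} = \sum_{j} \gamma_{ij}\,
\frac{(\hat{\tau}_{ij}\,\hat{w}_{i})^{-\epsilon}}{\sum_{k} \lambda_{kj}\,(\hat{\tau}_{kj}\,\hat{w}_{k})^{-\epsilon}}\,
\hat{w}_{j}
```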

If we use data to pin down \epsilon, \lambda_{ij}, and \gamma_{ij}, then we can feed in trade-cost shocks \hat{\tau} and solve for \hat{w} to compute the predicted responses of \lambda'_{ij}.
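As a concrete illustration (my own sketch, not code from any of the papers cited), here is that procedure for a three-country example; the trade matrix and the elasticity are made-up numbers:

```python
import numpy as np

# Minimal "exact hat algebra" counterfactual in a three-country Armington
# example. The trade matrix X and elasticity eps are invented for illustration.
X = np.array([[5.0, 2.0, 1.0],
              [2.0, 6.0, 2.0],
              [1.0, 2.0, 7.0]])      # X[i, j]: sales from origin i to destination j
Y = X.sum(axis=1)                    # income Y_i (row sums); trade is balanced here
lam = X / X.sum(axis=0)              # expenditure shares lambda_ij
gamma = X / Y[:, None]               # sales shares gamma_ij
eps = 4.0                            # trade elasticity

def solve_hats(lam, gamma, tau_hat, eps, tol=1e-12, max_iter=100_000):
    """Solve w_hat_i = sum_j gamma_ij * lam_hat_ij * w_hat_j by damped iteration."""
    w_hat = np.ones(lam.shape[0])
    for _ in range(max_iter):
        # hat-form gravity: lam_hat_ij = (tau_hat_ij w_hat_i)^(-eps) / sum_k lam_kj (.)^(-eps)
        num = (tau_hat * w_hat[:, None]) ** (-eps)
        lam_hat = num / (lam * num).sum(axis=0, keepdims=True)
        # hat-form market clearing, using X_hat_j = Y_hat_j = w_hat_j
        w_new = (gamma * lam_hat * w_hat[None, :]).sum(axis=1)
        w_new /= w_new.mean()        # normalization: pick a numeraire
        if np.max(np.abs(w_new - w_hat)) < tol:
            return w_new, lam_hat
        w_hat = 0.5 * w_hat + 0.5 * w_new   # damping for stability
    raise RuntimeError("fixed point did not converge")

w0, lam_hat0 = solve_hats(lam, gamma, np.ones((3, 3)), eps)  # no shock: hats are 1

tau_hat = np.ones((3, 3))
tau_hat[0, 1] = tau_hat[1, 0] = 1.1  # 10% higher trade costs between countries 0 and 1
w1, lam_hat1 = solve_hats(lam, gamma, tau_hat, eps)
lam_new = lam * lam_hat1             # counterfactual expenditure shares lambda'_ij
```

Note that only the observed shares, the shock, and the elasticity are needed, which is exactly the point of the Dekle-Eaton-Kortum approach.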

Why is this “exact hat algebra”? When introducing material like that above, Costinot and Rodriguez-Clare (2014) say:

We refer to this approach popularized by Dekle et al. (2008) as “exact hat algebra.”… One can think of this approach as an “exact” version of Jones’s hat algebra for reasons that will be clear in a moment.

What is “calibrated share form”?

Dekle, Eaton, and Kortum (AERPP 2007, p.353-354; IMF Staff Papers 2008, p.522-527) derive the “exact hat algebra” results without reference to any prior work. Presumably, they arrived at their approach independently, without realizing its connection to techniques used previously in the computable general equilibrium (CGE) literature. The CGE folks call it “calibrated share form”, as noted by Ralph Ossa and Dave Donaldson.

A 1995 note by Thomas Rutherford outlines the procedure:

In most large-scale applied general equilibrium models, we have many function parameters to specify with relatively few observations. The conventional approach is to calibrate functional parameters to a single benchmark equilibrium… Calibration formulae for CES functions are messy and difficult to remember. Consequently, the specification of function coefficients is complicated and error-prone. For applied work using calibrated functions, it is much easier to use the “calibrated share form” of the CES function. In the calibrated form, the cost and demand functions explicitly incorporate

  • benchmark factor demands
  • benchmark factor prices
  • the elasticity of substitution
  • benchmark cost
  • benchmark output
  • benchmark value shares

Rutherford shows that the CES production function y(K,L) = \gamma \left(\alpha K^{\rho} + (1-\alpha)L^{\rho}\right)^{1/\rho} can be calibrated relative to a benchmark with output \bar{y}, capital \bar{K}, and labor \bar{L} as y = \bar{y} \left[\theta \left(\frac{K}{\bar{K}}\right)^{\rho} + (1-\theta)\left(\frac{L}{\bar{L}}\right)^{\rho}\right]^{1/\rho}, where \theta is the benchmark capital share of factor income. If we introduce “hat notation” with \hat{y} = y/\bar{y}, we get \hat{y} = \left[\theta \hat{K}^{\rho} + (1-\theta)\hat{L}^{\rho}\right]^{1/\rho}. Similar manipulations of the rest of the equations in the model deliver a means of computing counterfactuals in the CGE setting.
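A quick numerical check of this equivalence (all parameter values below are made up for illustration): a CES function calibrated in share form at a benchmark point reproduces the original parameterization everywhere, not just at the benchmark.

```python
import numpy as np

# Verify that Rutherford's calibrated share form of the CES function
# reproduces the "raw" parameterization. Parameter values are invented.
rho = 0.5                              # CES exponent; sigma = 1/(1 - rho) = 2
gamma_, alpha = 1.7, 0.3               # raw CES parameters

def ces(K, L):
    return gamma_ * (alpha * K**rho + (1 - alpha) * L**rho) ** (1 / rho)

K_bar, L_bar = 4.0, 9.0                # benchmark factor demands
y_bar = ces(K_bar, L_bar)              # benchmark output
# theta: capital's value share at the benchmark,
# alpha*K_bar^rho / (alpha*K_bar^rho + (1-alpha)*L_bar^rho)
theta = alpha * K_bar**rho / (alpha * K_bar**rho + (1 - alpha) * L_bar**rho)

def ces_calibrated(K, L):
    # calibrated share form: only benchmark data and the value share appear
    return y_bar * (theta * (K / K_bar)**rho
                    + (1 - theta) * (L / L_bar)**rho) ** (1 / rho)

# the two forms agree everywhere, not just at the benchmark
rng = np.random.default_rng(0)
K, L = rng.uniform(0.5, 20.0, size=(2, 100))
max_gap = float(np.max(np.abs(ces(K, L) - ces_calibrated(K, L))))
```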

What economic activities are “tradable”?

6 January 2018

I’ve had a couple of conversations with graduate students in recent months about classifying industries or occupations by their tradability, so here’s a blog post reviewing some of the relevant literature.

A number of papers emphasize predictions that differ for tradable and non-tradable activities. Perhaps the most famous is Atif Mian and Amir Sufi’s Econometrica article showing that counties with a larger decline in housing net worth experienced a larger decline in non-tradable employment.

Mian and Sufi define industries’ tradability by two different means, one yielding a discrete measure and the other continuous variation:

The first method defines retail- and restaurant-related industries as non-tradable, and industries that show up in global trade data as tradable. Our second method is based on the idea that industries that rely on national demand will tend to be geographically concentrated, while industries relying on local demand will be more uniformly distributed. An industry’s geographical concentration index across the country therefore serves as an index of “tradability.”

Inferring tradability is hard. Since surveys of domestic transactions like the Commodity Flow Survey don’t gather data on the services sector, measures like “average shipment distance by industry” (Table 5a of the 2012 CFS) are only available for manufacturing, mining, and agricultural industries. Antoine Gervais and Brad Jensen have also pursued the idea of using industries’ geographic concentration to reveal their tradability, allowing them to compare the level of trade costs in manufacturing and services. One shortcoming of this strategy is that the geographic concentration of economic activity likely reflects both sectoral variation in tradability and sectoral variation in the strength of agglomeration forces. That may be one reason that Mian and Sufi discretize the concentration measure, categorizing “the top and bottom quartile of industries by geographical concentration as tradable and non-tradable, respectively.”

We might also want to speak to the tradability of various occupations. Ariel Burstein, Gordon Hanson, Lin Tian, and Jonathan Vogel’s recent paper on the labor-market consequences of immigration varying with occupations’ tradability is a nice example. They use “the Blinder and Krueger (2013) measure of `offshorability’, which is based on professional coders’ assessments of the ease with which each occupation could be offshored” (p.20). When they look at industries (Appendix G), they use an approach similar to that of Mian and Sufi.

Are there other measures of tradability in the literature?

Trade JMPs (2017-2018)

13 November 2017

It’s that time of year again. As I’ve done since 2010, I’ve gathered a list of trade-related job-market papers. New this year is a small collection of spatial economics papers that aren’t about trade per se. If I’ve missed someone, please contribute to the list in the comments.

Spatial Economics

 

I’m hiring research assistants

23 October 2017

If you are a student interested in earning an economics PhD, you should consider working as a research assistant before starting graduate school. Working on someone else’s research projects is an opportunity to learn a lot about the research process that is never taught in PhD courses. Learning by doing is a powerful force.

I’m hiring people to start working with me in summer of 2018.  Apply here: http://www.nber.org/jobs/Dingel_Chicago%20Booth.pdf. More generally, you can find a list of such opportunities on the NBER website.

Co-authoring is not about comparative advantage

16 October 2017

Comparative advantage is one of our field’s defining insights and “an essential part of every economist’s intellectual toolkit”. The principle is both true and non-obvious, so understanding it separates those who have taken an economics class from those who have not. While economists are rightfully proud of comparative advantage, there is at least one circumstance in which I think economists overuse it.

If you chat with economists about their co-authored research, you’ll often hear them casually attribute the division of labor within their research team to comparative advantage. I’m sure I’ve said this a number of times myself. But co-authoring is not about comparative advantage.

Suppose producing a paper involves two tasks: solving a model and estimating it. If you are better at both tasks than your co-author, then you ought to do both yourself and break up with your co-author. My advice seems contrary to David Ricardo’s famous insight that there are still gains from specialization and trade when one party has absolute advantage in both tasks. But the optimal assignment of tasks does not always depend on comparative advantage.

The Ricardian production function

The principle of comparative advantage is tied to a particular production function. In the Ricardian model, production functions are linear. Thus, individuals’ marginal products are constant. This fact allows us to describe individuals’ choices in terms of relative productivities and relative prices.

In a Ricardian world, the ordering of task assignments depends only on relative productivities: at any relative price, an individual has comparative advantage in the task in which her relative productivity is higher. Absolute productivities show up in a market-clearing condition that determines the relative prices necessary for supply of each task to equal its demand.

Does this sound like co-authorship? Some of the institutional details are wrong. Co-authors don’t usually pay each other for their output. Adding more people may pay off because each of n co-authors can receive more than 1/n credit. But beyond the unusual features of “selling” your output to academia, the Ricardian model just doesn’t describe the production process of a research team.

Producing research as a team

As Michael Sattinger (1993) explains, not all assignment models are models of comparative advantage:

Some economists may believe that comparative advantage is the only production principle underlying the assignment of workers to jobs, but this is incorrect. As a counterexample, consider an economy in which a job is associated with the use of a particular machine that can be used by only one person at a time…
The reason comparative advantage does not indicate the optimal assignment in this case is that earnings from a job are no longer proportional to physical output at the job. With cooperating factors of production (either explicit in the form of a machine or implicit via a scarcity in the jobs available), an opportunity cost for the cooperating factor must be subtracted from the value of output to yield the earnings.

In the Ricardian model, absolute disadvantage is not a problem, because quantity can make up for quality. If the laborers assigned to a task have low productivity, more labor can be employed in that task to produce more output. But in many situations, quantity cannot substitute for quality. This is most obvious in sports, where rules constrain team size: a hockey team can only have one goaltender. When jobs are scarce, comparative advantage does not determine the optimal assignment.

In a famous applied theory paper, Michael Kremer explored the consequences of producing in a team in which the number of tasks is fixed, each task may be performed by only one person, and a mistake in any one task diminishes the entire project’s value. The latter feature makes this the “O-Ring Theory of Development”, as the space shuttle Challenger blew up due to the failure of only one of its thousands of components.

This production function sounds more like the economics research process. A paper is a discrete unit of output, and it is likely only as persuasive as its weakest link. Poor writing can totally obfuscate good theory. Rarely can a beautiful theory salvage garbage empirics. And it is hard to believe that input quantity can substitute for input quality: “this paper was written by mediocre theorists, but there were so many of them working on it!”

In Kremer’s O-Ring model, the efficient assignment matches workers of similar skill into the same teams. A great theorist pairs with a great empiricist. As a first pass, this seems a reasonable description of the co-authorships we actually observe.
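A toy calculation (skill numbers invented for illustration) makes the contrast concrete: with multiplicative O-ring output per two-person team, the assortative pairing maximizes total output, whereas with additive Ricardian-style output every assignment yields the same total.

```python
# Toy contrast between O-ring (multiplicative) and linear (additive) team
# production for four workers with made-up skill levels 0.5, 0.6, 0.9, 1.0.
pairings = [
    [(0.5, 0.6), (0.9, 1.0)],   # assortative: similar skills matched together
    [(0.5, 0.9), (0.6, 1.0)],
    [(0.5, 1.0), (0.6, 0.9)],
]

oring = [sum(a * b for a, b in p) for p in pairings]    # team output q_a * q_b
linear = [sum(a + b for a, b in p) for p in pairings]   # team output q_a + q_b

best = pairings[oring.index(max(oring))]                # assortative pairing wins
```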

Co-authoring is not about comparative advantage

Of course, production is more complicated than that. How do we explain the valuable contributions of research assistants to projects when their supervisors (would like to claim that they) have absolute advantage across all tasks? One needs a model of hierarchical or sequential production in which research assistants handle easier problems and then pass on unsolved problems to their supervisors. Luis Garicano, Esteban Rossi-Hansberg, and co-authors have studied these knowledge-based hierarchies in environments ranging from law firms to exporters.

In short, the optimal assignment depends on the nature of the production function. Despite economists’ frequent invocation of our beloved insight, co-authoring is not about comparative advantage.

Linkages between international trade and urban economics

17 September 2017

Keith Head, Thierry Mayer, and Gianmarco Ottaviano have written a review of the latest Handbook of Regional and Urban Economics, published in 2015. The prior edition was published back in 2004. Part of their review looks at the interplay between international and urban economics:

The fourth volume of this series was published at the high point for the strand of research known as the New Economic Geography (NEG). It was a period when, united by interest in research by Paul Krugman, trade economists and spatial economists associated closely with each other. We attended the same conferences and worked on similar topics. We debated what was new and what was valuable about the NEG — and whether the two sets overlapped. The Nobel Prize received by Krugman in 2008 validated this line of research but also coincided with the time when it faded significantly from the priorities of urban economists.
Since then, with some prominent exceptions, trade and spatial economists have gone their separate ways.

This passage surprised me, since I see substantial overlap and collaboration between spatial and trade economists at the moment. Since I am a relatively young economist, I did not witness the previous peak or subsequent decline in collaboration.

Head, Mayer, and Ottaviano provide an explanation for the separation:

Spatial economists appear to us to have moved more in the direction of labor, both in terms of using similar worker-level data sets and in terms of greater focus on identification of treatment effects. Trade economists, on the other hand, have in some respects followed industrial organization, in terms of using firm-level data and in terms of tying in closely to theoretical models. Perhaps increased availability of micro data is a unified explanation for divergence as trade economists embraced firm-level customs data sets at the same time as urban economists embraced labor (and housing) data sets.
Though there are still some points of contact, the fifth volume of the handbook largely testifies to this divergence since 2004. We would argue, however, that the stage is now set for renewed collaboration. Trade economists are increasingly using data on individual workers and urban economists have embraced structural models. Thus, the current separation between trade and spatial economics is probably mainly attributable to focus on different questions.

Another way to think about the linkages would be to look at co-authorships. For example, consider the Handbook chapter just mentioned: Matt Turner is an urban economist who teaches one of the few PhD courses in urban economics, and Steve Redding is the NBER International Trade and Investment program director.

You can also find individuals who span the spatial-trade divide. The Clark Medal committee describes Dave Donaldson as “an empirical trade economist”. The first two papers they mention are about the effects of railroads in India and the United States on intranational trade.

Head, Mayer, and Ottaviano describe the separation in terms of research topics as opposed to toolkits:

Spatial economics has become… essentially intranational with virtually no international trade dimension… the model by Redding and Turner shares many properties with perfectly competitive stochastic trade models of “discrete choice” a la Eaton and Kortum (2002), which are the pillars of the recent wave of new quantitative models that are changing the way trade economists look ex ante at the possible implications of alternative policy scenarios. This shows once more that, whereas the questions of interest may have largely diverged between trade and spatial economics, methods have not.

Head, Mayer, and Ottaviano “are eager to see renewed linkages between international trade and urban economics” and somewhat optimistic about future research at this intersection. I am even more optimistic, since I already see many of the same people at both international economics and urban economics conferences.

Along those lines, Steve Redding and Esteban Rossi-Hansberg have written a survey of “Quantitative Spatial Economics,” which amounts to a new generation of work in spatial economics importing the tools developed in quantitative models of international trade. They’ve also issued a call for papers in Trade and Geography:

The endogenous location of economic agents relative to one another in space influences their consumption, production and investment decisions. It affects their pattern and volume of trade, the markets that they participate in, and the way they organize production processes across locations. As such, geography shapes the impact of local, regional, industry, and aggregate shocks, and the effects of national and local policies.

This Spring 2018 meeting of the NBER International Trade and Investment Program will focus on this set of issues. The meeting will welcome researchers interested in these topics from a variety of perspectives, including, but not limited to, international trade, regional and urban economics, labor, development, and macroeconomics. Both empirical and theoretical papers are welcome.

As someone who works at the intersection of international and urban economics, I may be prone to emphasizing the common features of these fields and the connections between them. But if we’re at the point where trade and urban have suffered a separation, I think the linkages are already renewing. I cannot wait to realize the fruits of greater collaboration.

On the NYT’s “Building Trade Walls”

8 March 2017

The Trump administration has certainly increased public discussion of trade policy. Yesterday, the New York Times ran a series of graphics accompanied by a thousand words on “Building Trade Walls” in its online business section. I found the piece frustrating. At a number of points, the article presents valid information in a way that muddles meanings or implies misleading conclusions.

Are sales taxes akin to import tariffs?

Consider these two consecutive paragraphs:

But many countries have additional taxes. For example, China and other countries, but not the United States, also charge a steep value-added tax, which is a kind of national sales tax on imports and home-produced goods alike. Exports are exempt from value-added taxes.

Once value-added taxes and sales taxes are included in an international comparison, America’s taxes on imports are much lower than those of almost every other country.

Why would you make an international comparison that counts VATs or sales taxes as taxes on imports? Since VATs and sales taxes apply to both “imports and home-produced goods alike”, they aren’t protectionist. An import tariff applies to imports but not to domestically produced goods. The discriminatory nature of the import tariff is why it protects domestic firms from foreign competitors.

Back in September, Paul Krugman lambasted Donald Trump for accusing Mexico’s VAT of being a trade barrier. Now, his NYT colleagues have produced a good-looking map that relies on this premise. A quick mashup of the two makes the contrast clear:

In short, the NYT‘s first paragraph noting the non-discriminatory nature of VATs and sales taxes means that the international comparison offered in the second paragraph is nonsensical.

Is China a developing country?

In another troublesome part of the article, a series of facts about China’s GDP and GDP per capita are somehow combined to say that China might be a developed economy:

Today, China’s designation as a developing country is more debatable. China is the world’s second-largest economy and the biggest producer of steel and cars.

Still, China trails most developed nations by some measures, and Chinese officials argue that it is still developing and does not yet qualify as industrialized.

China’s economy is still roughly two-thirds the size of the American economy, even though China has four times as many people. Average incomes in China are still one-fifth to one-quarter of levels in the United States, and much of China’s interior is still underdeveloped.

Based on the first paragraph, I have to ask: is Iceland a developing economy? It’s quite small, with a GDP of maybe $15 billion. But of course it’s developed, as it has a GDP per capita in the neighborhood of $50,000. Development is about income levels (and accompanying socioeconomic changes), not population size. I have never before seen GDP (as opposed to GDP per capita) used to inform the “developing country” designation.

The NYT article raises the (novel) question of whether China is developed or developing in the context of its trade-policy obligations:

The World Trade Organization, the global trade adjudicator, has allowed developing countries to impose far higher tariffs than industrialized countries, while they build up industries at home. China has been counted as a developing country.

This is unhelpful in a few important respects. First, the WTO does not designate countries as developing or developed. In the narrow areas where developing countries are given special and differential treatment, member nations identify themselves as developing. Second, the “global trade adjudicator” language is potentially confusing. Countries’ MFN tariff schedules are negotiated among member nations at the WTO. There’s no separate WTO entity announcing tariff rates for developing (or developed) countries. Past rounds of negotiations among members have resulted in the tariff schedules of China and other economies. Third, China has lower tariff bindings than a typical developing country, partly because it acceded to the WTO later than most developing countries. Branstetter and Lardy say that “China agreed to a set of conditions that were far more stringent than the terms under which other developing countries had acceded”. Compare China’s 10% average bound tariff to Brazil’s 31% or India’s 49%.  Fourth, there is not much evidence that “far higher tariffs” have allowed countries to “build up industries at home”. China’s export growth, in particular, has involved integration into global value chains and assembly processing, not import-substitution industrialization behind a tariff wall.

A few other concerns

  • It would be more helpful to plot the trade deficit as a percent of GDP than in nominal dollars.
  • A focus on the bilateral deficit with China is unhelpful.
  • A discussion of border adjustments that does not mention exchange rates omits a first-order feature, according to the policy’s proponents.
  • The NYT reporters say: “When China joined the W.T.O. in 2001, the expectation was that its tariffs would later be adjusted lower during global trade talks, known as the Doha Round. But those talks fell apart for a variety of reasons.” China’s bilateral negotiations with the US wrapped up in 1999. The protests in Seattle that year accompanied a failed round of WTO negotiations. I don’t think expectations of the Doha Round – which started two years later in a post-9/11 context – were clear when China’s accession protocol was being determined.