# Market-size effects, across places and over time

The dividing line between neoclassical trade models and the now-quite-dated “new trade theory” is economies of scale. Neoclassical models feature constant (or decreasing) returns. Free trade is efficient in such settings. With the introduction of increasing returns, Brander, Spencer, Krugman, Helpman, and others “open[ed] the possibility that government intervention in trade via import restrictions, export subsidies, and so on may under some circumstances be in the national interest after all” (Krugman 1987).

The fact that “size matters” in new trade theory (size can influence the pattern of specialization because there are economies of scale) while it does not in neoclassical models became the basis for empirical investigations trying to distinguish these theories. Davis and Weinstein (2003) describe the idea behind this research strategy:

> A fundamental divide may be identified between two classes of models. In the first class, unusually strong demand for a good, ceteris paribus, makes a country an importer of a good. An example would be a conventional two-sector neoclassical model with strictly downward sloping import demands. However, there is an alternative tradition within the trade literature which emphasizes an important interaction between demand conditions and production opportunities in which the production response to local demand conditions is so powerful that strong local demand for a product leads a country to export that product. When such conditions exist, the literature terms it a home market effect.

Stepping away from trade, there’s a very different economic context in which the role of market size is also crucial: the literature on innovation. The idea dates at least to Schmookler (1966) who memorably titled two of his chapters “The amount of invention is governed by the extent of the market.” It’s also key to endogenous growth theory. Acemoglu and Linn (2004) provided empirical evidence that market size influenced innovation in a particular sector:

> This paper investigates the effect of (potential) market size on entry of new drugs and pharmaceutical innovation. Focusing on exogenous changes driven by US demographic trends, we find a large effect of potential market size on the entry of nongeneric drugs and new molecular entities… Our results show that there is an economically and statistically significant response of the entry of new drugs to market size. As the baby boom generation aged over the past 30 years… the data show a corresponding decrease in the rate of entry of new drugs in categories mostly demanded by the young and an increase for drugs mostly consumed by the middle-aged.

In “The Determinants of Quality Specialization”, I showed that high-income cities manufacture higher-priced, higher-quality goods in part because they are home to more high-income households who demand such products. Quantitatively, I found that the home-market effect plays at least as large a role as the factor-abundance mechanism in quality specialization across cities of different income levels.

What does this have to do with Acemoglu and Linn (2004)? I didn’t see much of a connection when I was writing my paper. Pharmaceuticals were just one of many industries in my data on US manufacturing plants, and pharmaceutical pills are probably less sensitive to trade costs than most goods. But I now see a closer relationship between looking for home-market effects in the cross section and looking for market-size effects in the time series.

The primary bridge is a recent QJE article by Costinot, Donaldson, Kyle and Williams. They used variation in disease burdens across countries as a source of variation in demand for drugs to look for home-market effects in international pharmaceuticals production. I’ve blogged about that paper before.

The latest connection is a paper by Xavier Jaravel called “The Unequal Gains from Product Innovations: Evidence from the U.S. Retail Sector”. His article investigates the time-series analog of my cross-sectional results on quality specialization. In recent decades, income growth has been concentrated at the top of the income distribution. Did the increase in the relative size of the affluent market benefit the affluent beyond the straightforward income gains? With economies of scale, increases in demand could induce supply-side responses that favor affluent-demanded goods. That’s the home-market-effect story for why high-income cities are net exporters of high-quality products: due to increasing returns, greater demand elicits a more-than-proportionate production response. Jaravel documents the time-series equivalent for national outcomes: “(1) the relative demand for products consumed by high-income households increased because of growth and rising inequality; (2) in response, firms introduced more new products catering to such households; (3) as a result, the prices of continuing products in these market segments fell due to increased competitive pressure.”

A two-by-two matrix neatly summarizes these contributions to the empirical literature on market-size effects:

|  | Pharmaceuticals | Vertically differentiated consumer goods |
| --- | --- | --- |
| Time series | Acemoglu & Linn (2004) | Jaravel (forthcoming) |
| Cross section | Costinot, Donaldson, Kyle, Williams (2019) | Dingel (2017) |

There’s an obvious relationship between the AL and CDKW papers, as explained by CDKW:

> In their original article, Acemoglu and Linn (2004) exploit such demographic variation over time within the United States to estimate the impact of market size on innovation. Here, we employ the spatial analog of this strategy, drawing on cross-sectional variation in the demographic composition of different countries in a given year, to explore how exogenous variation in demand may shape the pattern of trade.

With the benefit of hindsight, some more subtle connections between the four cells of this two-by-two matrix seem pretty clear. For example, Jaravel’s adoption of the Acemoglu (2007) terminology for “weak bias” and “strong bias” in his footnote 3 mirrors the distinction between the weak and strong versions of the home-market effect introduced by Costinot, Donaldson, Kyle, and Williams (2019).

In summary, market-size effects seem to be important for understanding both innovation outcomes and the geographic pattern of specialization. We’ve found market-size effects in the time series and in the cross section, for both the pharmaceutical sector and vertically differentiated manufactured goods.

# Research resources that I recommend

While advising PhD students, I find myself repeatedly suggesting the same tools and tricks. Since these are general-purpose technologies, the following list of resources that I regularly recommend to my students might interest others as well. Going forward, I’ll update this webpage, not this blog post.

## The job market

• One year before you’ll be on the market, read John Cawley’s very comprehensive Guide and Advice For Economists on the US Junior Academic Job Market. The process will be more coherent and less intimidating if you see the big picture from the beginning.
• Give a full draft of your paper to your advisors in June. Sharing something in September is too late.

# Spatial economics JMPs (2018-2019)

Last year, I commingled spatial-economics candidates with the trade JMCs. This year, I’m offering a separate list. Thanks to those who suggested spatial-economics candidates in response to my tweet. Since people working on spatial economics come from a variety of fields, I’ve listed candidates’ self-identified fields in brackets after their JMP titles. I’m sure I missed folks, so please add them in the comments.

• Clare Balboni (LSE) – In Harm’s Way? Infrastructure Investments and the Persistence of Coastal Cities [Environmental, Trade, Development]
• Chiara Fratto (Chicago) – The reallocative effects of mobility restrictions on workers and firms. An application to the West Bank [applied macro, urban, productivity, trade]
• Xuan Fei (UC Davis) – Land Market Misallocation in China [International, Urban, Macro, and Chinese Economy]
• Laurel Wheeler (Duke) – Property Rights, Place-Based Policies, and Economic Development [labor and development]
• Dennis McWeeny (Wisconsin) – Spatial Competition in the Airline Industry [industrial organization, health, and regional]
• Yanjun (Penny) Liao (UCSD) – How Hurricanes Sweep Up Housing Markets: Evidence from Florida [Environmental, Public]
• Xueying Lu (UCSD) – Housing Markets and Automobile Policy [Environmental, Urban, Energy]
• Cailin Slattery (Virginia) – Bidding for Firms: Subsidy Competition in the US [public finance, urban, industrial organization]
• Mesay M. Gebresilasse (Boston U) – Rural Roads, Agricultural Extension, and Productivity [development and applied microeconomics]
• Tatjana Kleineberg (Yale) – Can We Save the American Dream? A Dynamic General Equilibrium Analysis of the Effects of School Financing on Local Opportunities [Macro, Economic Development, and Urban]
• Donghyuk Kim (Yale) – Government Incentives and Firm Location Choices [Industrial Organization, Urban, and Labor]
• Max Perez Leon (Yale) – Inducing Labor Mobility? Evidence from Peruvian Teacher Reallocation [Labor, Urban, and Personnel]
• Nicholas Li (Berkeley) – Housing Market Channels of Segregation [Labor, Urban, Development]
• Qing Zhang (Columbia) – Sunlight, Development Restrictions, and Urban Density [Development, Political Economy, Urban]
• Ben Klopack (Stanford) – One Size Fits All? The Value of Standardized Retail Chains [Industrial Organization, Urban, Public]
• Zhezhi Hou (Binghamton) – Growing from Spillovers: A Semiparametric Varying Coefficient Approach [econometrics and applied microeconomics]
• Soeren Henn (Harvard) – Complements or Substitutes: State Presence and the Power of Traditional Leaders [development and political economy]
• Jessica Brown (Princeton) – Does Public Pre-K have Unintended Consequences on the Child Care Market for Infants and Toddlers? [public and labor]
• Matthew Davis (Wharton) – The Distributional Impact of Mortgage Interest Subsidies: Evidence from Variation in State Tax Policies [Real Estate, Public Finance]
• Simon Franklin (Oxford/LSE) – The demand for government housing: evidence from a lottery for 200,000 homes in Ethiopia [Development, Labour, and Urban]

# What share of US manufacturing firms export?

What share of US manufacturing firms export? That’s a simple question, but my answer recently changed by quite a lot. While updating a class slide titled “very few firms export”, I noticed a pretty stark contrast between the old and new statistics I was displaying. In the table below, the 2002 numbers are from Table 2 of Bernard, Jensen, Redding, and Schott (JEP 2007), which reports that 18% of US manufacturing firms were exporters in 2002. The 2007 numbers are from Table 1 of Bernard, Jensen, Redding, and Schott (JEL 2018), which reports that 35% of US manufacturing firms were exporters in 2007.

| NAICS | Description | Share of firms, 2002 | Share of firms, 2007 | Exporting firm share, 2002 | Exporting firm share, 2007 | Exporters’ export sales share, 2002 | Exporters’ export sales share, 2007 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 311 | Food Manufacturing | 6.8 | 6.8 | 12 | 23 | 15 | 21 |
| 312 | Beverage and Tobacco Product | 0.7 | 0.9 | 23 | 30 | 7 | 30 |
| 313 | Textile Mills | 1.0 | 0.8 | 25 | 57 | 13 | 39 |
| 314 | Textile Product Mills | 1.9 | 2.7 | 12 | 19 | 12 | 12 |
| 315 | Apparel Manufacturing | 3.2 | 3.6 | 8 | 22 | 14 | 16 |
| 316 | Leather and Allied Product | 0.4 | 0.3 | 24 | 56 | 13 | 19 |
| 321 | Wood Product Manufacturing | 5.5 | 4.8 | 8 | 21 | 19 | 9 |
| 322 | Paper Manufacturing | 1.4 | 1.5 | 24 | 48 | 9 | 6 |
| 323 | Printing and Related Support | 11.9 | 11.1 | 5 | 15 | 14 | 10 |
| 324 | Petroleum and Coal Products | 0.4 | 0.5 | 18 | 34 | 12 | 13 |
| 325 | Chemical Manufacturing | 3.1 | 3.3 | 36 | 65 | 14 | 23 |
| 326 | Plastics and Rubber Products | 4.4 | 3.9 | 28 | 59 | 10 | 11 |
| 327 | Nonmetallic Mineral Product | 4.0 | 4.3 | 9 | 19 | 12 | 9 |
| 331 | Primary Metal Manufacturing | 1.5 | 1.5 | 30 | 58 | 10 | 31 |
| 332 | Fabricated Metal Product | 19.9 | 20.6 | 14 | 30 | 12 | 9 |
| 333 | Machinery Manufacturing | 9.0 | 8.7 | 33 | 61 | 16 | 15 |
| 334 | Computer and Electronic Product | 4.5 | 3.9 | 38 | 75 | 21 | 28 |
| 335 | Electrical Equipment, Appliance | 1.7 | 1.7 | 38 | 70 | 13 | 47 |
| 336 | Transportation Equipment | 3.4 | 3.4 | 28 | 57 | 13 | 16 |
| 337 | Furniture and Related Product | 6.4 | 6.5 | 7 | 16 | 10 | 14 |
| 339 | Miscellaneous Manufacturing | 9.1 | 9.3 | 2 | 32 | 15 | 16 |
|  | Aggregate manufacturing | 100 | 100 | 18 | 35 | 14 | 17 |

Did a huge shift occur between 2002 and 2007? No. The difference between these two tables is due to a change in the data source used to identify whether a firm exports. In their 2007 JEP article, BJRS used a question about export sales in the Census of Manufactures (CM). In their 2018 JEL article, BJRS used customs records from the Longitudinal Firm Trade Transactions database (LFTTD) that they built. Footnote 23 of the latter article notes that “the customs records from LFTTD imply that exporting is more prevalent than would be concluded based on the export question in the Census of Manufactures.”

This is a bit of an understatement: only about half of the firms that export according to customs records say that they export when asked about it in the Census of Manufactures! [This comparison is inexact because the share of exporting firms may have really increased from 2002 to 2007, but BJRS (2018) say that they “find a relatively similar pattern of results for 2007 as for 2002” when they use the CM question for both years.] In the typical three-digit NAICS industry, the share of firms identified as exporters roughly doubles when using customs data rather than the Census of Manufactures survey response. Who knows what happened in “Miscellaneous Manufacturing” (NAICS 339), which had 2% in the 2002 CM and 32% in the 2007 LFTTD.

I presume that the customs records are more reliable than the CM question. More firms are exporters than I previously thought!

# Trade JMPs (2018-2019)

It’s already November again. Time flies. As I do annually, I’ve gathered a list of trade-related job-market papers. The market leader in trade this year is Penn State, which offers seven candidates. If I’ve missed someone, please contribute to the list in the comments. A few schools (e.g., UCLA, Yale) have not yet posted candidates.

[Nov 11 update: I’ve added a number of candidates since this was posted Nov 5. Now listing 40 people. I didn’t recompute stats nor word cloud.]

Of the 33 candidates initially listed, 16 use Google Sites, 8 registered their own domains, 5 use school-provided webspace, 3 use Weebly, and 1 uses GitHub.

Here’s a cloud of the words that appear at least twice in these papers’ titles:

# Why I encourage econ PhD students to learn Julia

Julia is a scientific computing language that an increasing number of economists are adopting (e.g., Tom Sargent, the NY FRB). It is a close substitute for Matlab, and the cost of switching from Matlab to Julia is modest: Julia syntax is quite similar to Matlab syntax once you change array references from parentheses to square brackets (e.g., “A(2, 2)” in Matlab is “A[2, 2]” in Julia and most other languages), though there are important differences. Julia also competes with Python, R, and C++, among other languages, as a computational tool.

I am now encouraging students to try Julia, which recently released version 1.0. I first installed Julia in the spring of 2016, when it was version 0.4. Julia’s advantages are that it is modern, elegant, open source, and often faster than Matlab. Its downside is that it is a young language, so its syntax is evolving, its user community is smaller, and some features are still in development.

A proper computer scientist would discuss Julia’s computational advantages in terms of concepts like multiple dispatch and typing of variables. For an unsophisticated economist like me, the proof of the pudding is in the eating. My story is quite similar to that of Bradley Setzler, whose structural model took more than 24 hours to solve in Python but only 15 minutes in Julia. After hearing two of my computationally savvy Booth colleagues praise Julia, I tried it out when doing the numerical simulations in our “A Spatial Knowledge Economy” paper. I took my Matlab code, made a few modest syntax changes, and found that my Julia code solved for equilibrium in only one-sixth of the time that my Matlab code did. My code was likely inefficient in both cases, but that speed improvement persuaded me to use Julia for that project.

For a proper comparison of computational performance, you should look at papers by S. Boragan Aruoba and Jesus Fernandez-Villaverde and by Jon Danielsson and Jia Rong Fan. Aruoba and Fernandez-Villaverde have solved the stochastic neoclassical growth model in a dozen languages. Their 2018 update says “C++ is the fastest alternative, Julia offers a great balance of speed and ease of use, and Python is too slow.” Danielsson and Fan compared Matlab, R, Julia, and Python when implementing financial risk forecasting methods. While you should read their rich comparison, a brief summary of their assessment is that Julia excels in language features and speed but has considerable room for improvement in terms of data handling and libraries.

While I like Julia a lot, it is a young language, which comes at a cost. In March, I had to painfully convert a couple research projects written in Julia 0.5 to version 0.6 after an upgrade of GitHub’s security standards meant that Julia 0.5 users could no longer easily install packages. My computations were fine, of course, but a replication package that required artisanally-installed packages in a no-longer-supported environment wouldn’t have been very helpful to everyone else. I hope that Julia’s 1.0 release means that those who adopt the language now are less likely to face such growing pains, though it might be a couple of months before most packages support 1.0.

At this point, you probably should not use Julia for data cleaning. To be brief, Danielsson and Fan say that Julia is the worst of the four languages they considered for data handling. In our “How Segregated is Urban Consumption?” code, we did our data cleaning in Stata and our computation in Julia. Similarly, Michael Stepner’s health inequality code relies on Julia rather than Stata for a computation-intensive step and Tom Wollmann split his JMP code between Stata and Julia. At this point, I think most users would tell you to use Julia for computation, not data prep. (Caveat: I haven’t tried the JuliaDB package yet.)

If you want to get started in Julia, I found the “Lectures in Quantitative Economics” introduction to Julia by Tom Sargent and John Stachurski very helpful. Also look at Bradley Setzler’s Julia economics tutorials.

Trade economists might be interested in the Julia package FixedEffectModels.jl. It claims to be an order of magnitude faster than Stata when estimating two-way high-dimensional fixed-effects models, the bread-and-butter specification for gravity regressions. I plan to ask PhD students to explore these issues this fall and will report back after learning more.

# The top five journals in economics are accessible, if authors share

I tweeted this, but a blog post seems more appropriate (screenshots of URLs are unhelpful, tweets aren’t indexed by Google, etc).

The top five journals in economics permit authors to either post the published PDF on their personal website or provide a free-access link to the published article.

• American Economic Review: “Authors are permitted to post published versions of their articles on their personal websites.”
• Econometrica: “Authors receive a pdf copy of the published article which they can make available for non-commercial use.”
• Journal of Political Economy: “Authors may also post their article in its published form on their personal or departmental web.”
• Quarterly Journal of Economics and Review of Economic Studies: “Upon publication, the corresponding author is sent a free-access link to the online version of their paper. This link may be shared with co-authors and interested colleagues, and posted on the author’s personal or institutional webpage.”

Thus, articles in the top five economics journals are accessible to the general public at no fee, provided that the authors of those articles make the effort to share them. Other journals may not be so accessible. A lot of field journals are published by Elsevier, which has less generous sharing policies.

# Is it easier to liberalize agriculture via bilateral or multilateral deals?

Tyler Cowen’s latest Bloomberg column is about bilateral trade deals. He’s more optimistic than most:

> The smartest case for trade bilateralism is that trade in many goods is already fairly free, but some egregious examples of tariffs and trade barriers remain. Look at agriculture, European restrictions on hormones in beef, and the Chinese unwillingness to allow in foreign companies. Targeted strategic bargaining, backed by concrete threats emanating from a relatively powerful nation — in this case the U.S. — could demand removal of those restrictions. Furthermore, the negotiating process would be more directly transactional and less cartelized and bureaucratic.

With regard to liberalizing agriculture, I think the conventional wisdom is that multilateral negotiations are superior. Here’s Jagdish Bhagwati talking to the NY Times back in 2004:

> The only way concessions can be made on agricultural subsidies is if you go multilateral. Think of production subsidies, which the United States has: they can’t be cut for just one trading partner. When it comes to export subsidies–which are the big issue for the Europeans and a little bit for us too–we will cut export subsidies say, for Brazil, in a bilateral negotiation, but the Europeans won’t. Then the Europeans will have an advantage. My point is that if subsidies are the name of the game in agriculture, if the foreign countries that export want to remove subsidies, they have to go multilateral.

# On “hat algebra”

This post is about “hat algebra” in international trade theory. Non-economists won’t find it interesting.

### What is “hat algebra”?

Alan Deardorff’s Glossary of International Economics defines “hat algebra” as

> The Jones (1965) technique for comparative static analysis in trade models. Totally differentiating a model in logarithms of variables yields a linear system relating small proportional changes (denoted by carats (^), or “hats”) via elasticities and shares. (As published it used *, not ^, due to typographical constraints.)

The Jones and Neary (1980) handbook chapter calls it a circumflex, not a hat, when explaining its use in proving the Stolper-Samuelson theorem:

> a given proportional change in commodity prices gives rise to a greater proportional change in factor prices, such that one factor price unambiguously rises and the other falls relative to both commodity prices… the changes in the unit cost and hence in the price of each commodity must be a weighted average of the changes in the two factor prices (where the weights are the distributive shares of the two factors in the sector concerned and a circumflex denotes a proportional change)… Since each commodity price change is bounded by the changes in both factor prices, the Stolper-Samuelson theorem follows immediately.
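In symbols, for two goods each produced with labor and capital, the zero-profit conditions in hat form are

```latex
\hat{p}_{1} = \theta_{L1}\,\hat{w} + \theta_{K1}\,\hat{r},
\qquad
\hat{p}_{2} = \theta_{L2}\,\hat{w} + \theta_{K2}\,\hat{r},
```

where the $\theta$’s are the distributive factor shares, with $\theta_{Lj} + \theta_{Kj} = 1$ in each sector. Because each $\hat{p}_j$ is a weighted average of $\hat{w}$ and $\hat{r}$, the factor-price changes must bracket the commodity-price changes, which is the magnification effect behind Stolper-Samuelson.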

I’m not sure when “hat algebra” entered the lexicon, but by 1983 Brecher and Feenstra were writing “Eq. (20) may be obtained directly from the familiar ‘hat’ algebra of Jones (1965)”.

### What is “exact hat algebra”?

Nowadays, trade economists utter the phrase “exact hat algebra” a lot. What do they mean? Dekle, Eaton, and Kortum (2008) describe a procedure:

> Rather than estimating such a model in terms of levels, we specify the model in terms of changes from the current equilibrium. This approach allows us to calibrate the model from existing data on production and trade shares. We thereby finesse having to assemble proxies for bilateral resistance (for example, distance, common language, etc.) or inferring parameters of technology.

Here’s a simple example of the approach. Let’s do a trade-cost counterfactual in an Armington model with labor endowment $L$, productivity shifter $\chi$, trade costs $\tau$, and trade elasticity $\epsilon$. The endogenous variables are wage $w$, income $Y = w \cdot L$, trade flows $X_{ij}$, and expenditure shares $\lambda_{ij} \equiv \frac{X_{ij}}{X_{j}}$, where $X_j$ is total expenditure of country $j$. The two relevant equations are the market-clearing condition and the gravity equation.
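In levels, with $\lambda_{ij}$ denoting destination $j$’s expenditure share on origin $i$’s goods, these two equations take the standard Armington form:

```latex
% Gravity equation: expenditure shares
\lambda_{ij} \equiv \frac{X_{ij}}{X_{j}}
  = \frac{\left(\tau_{ij} w_{i}/\chi_{i}\right)^{-\epsilon}}
         {\sum_{k}\left(\tau_{kj} w_{k}/\chi_{k}\right)^{-\epsilon}},
\qquad
% Market clearing: income equals total sales
Y_{i} = \sum_{j} X_{ij} = \sum_{j} \lambda_{ij} X_{j}.
```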

Suppose trade costs change from $\tau_{ij}$ to $\tau'_{ij}$, a shock $\hat{\tau}_{ij} \equiv \frac{\tau'_{ij}}{\tau_{ij}}$. By assumption, $\hat{\chi}=\hat{L}=1$. We’ll solve for the endogenous variables $\hat{\lambda}_{ij}, \hat{X}_{ij}$ and $\hat{w}_{i}$. Define “sales shares” by $\gamma_{ij}\equiv\frac{X_{ij}}{Y_{i}}$. Algebraic manipulations deliver a “hat form” of the market-clearing condition.
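Dividing the counterfactual market-clearing condition $Y'_{i} = \sum_j \lambda'_{ij} X'_{j}$ by its initial counterpart and using $\gamma_{ij} = \lambda_{ij} X_{j} / Y_{i}$ gives

```latex
\hat{Y}_{i} = \sum_{j} \gamma_{ij}\, \hat{\lambda}_{ij}\, \hat{X}_{j}. \tag{1.1}
```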

Similarly, let’s obtain the “hat form” of the gravity equation.
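Dividing the counterfactual gravity equation by its initial counterpart, the constants drop out and only initial shares and proportional changes remain:

```latex
\hat{\lambda}_{ij}
  = \frac{\left(\hat{w}_{i}\hat{\tau}_{ij}/\hat{\chi}_{i}\right)^{-\epsilon}}
         {\sum_{k} \lambda_{kj} \left(\hat{w}_{k}\hat{\tau}_{kj}/\hat{\chi}_{k}\right)^{-\epsilon}}. \tag{1.2}
```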

Combining equations (1.1) and (1.2) under the assumptions that $\hat{Y}_{i}=\hat{X}_{i}$ and $\hat{\chi}=\hat{L}=1$, we obtain a system of equations characterizing an equilibrium $\hat{w}_i$ as a function of trade-cost shocks $\hat{\tau}_{ij}$, initial equilibrium shares $\lambda_{ij}$, and $\gamma_{ij}$, and the trade elasticity $\epsilon$:
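With $\hat{Y}_{i} = \hat{w}_{i}$ (since $Y = wL$ and $\hat{L} = 1$) and $\hat{X}_{j} = \hat{Y}_{j} = \hat{w}_{j}$, the system is

```latex
\hat{w}_{i}
  = \sum_{j} \gamma_{ij}\,
    \frac{\left(\hat{w}_{i}\hat{\tau}_{ij}\right)^{-\epsilon}}
         {\sum_{k} \lambda_{kj}\left(\hat{w}_{k}\hat{\tau}_{kj}\right)^{-\epsilon}}\,
    \hat{w}_{j},
\qquad i = 1, \ldots, N.
```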

If we use data to pin down $\epsilon$, $\lambda_{ij}$, and $\gamma_{ij}$, then we can feed in trade-cost shocks $\hat{\tau}$ and solve for $\hat{w}$, which delivers the counterfactual trade shares $\lambda'_{ij} = \hat{\lambda}_{ij} \lambda_{ij}$.
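To make this concrete, here is a minimal Python sketch of the fixed-point computation; the function name, the damping scheme, and the illustrative numbers in the usage note are mine, not from Dekle, Eaton, and Kortum:

```python
import numpy as np

def exact_hat_counterfactual(lam, gamma, tau_hat, eps, damp=0.5, tol=1e-12, max_iter=10_000):
    """Solve w_hat_i = sum_j gamma_ij * lam_hat_ij * w_hat_j, where
    lam_hat_ij = (w_hat_i tau_hat_ij)^(-eps) / sum_k lam_kj (w_hat_k tau_hat_kj)^(-eps).

    lam[i, j]   : initial expenditure share of destination j on origin i (columns sum to 1)
    gamma[i, j] : initial share of origin i's sales going to destination j (rows sum to 1)
    tau_hat     : proportional trade-cost shocks tau'_ij / tau_ij
    Returns (w_hat, lam_prime): wage changes and counterfactual expenditure shares.
    """
    n = lam.shape[0]
    w_hat = np.ones(n)
    for _ in range(max_iter):
        phi = (w_hat[:, None] * tau_hat) ** (-eps)              # (w_hat_i tau_hat_ij)^(-eps)
        lam_hat = phi / (lam * phi).sum(axis=0, keepdims=True)  # share changes, destination by destination
        w_new = (gamma * lam_hat * w_hat[None, :]).sum(axis=1)  # market clearing with X_hat_j = w_hat_j
        w_new = w_new / w_new[0]                                # wages are relative: country 1 is the numeraire
        if np.max(np.abs(w_new - w_hat)) < tol:
            return w_new, lam * lam_hat
        w_hat = damp * w_new + (1 - damp) * w_hat               # damped update for stability
    raise RuntimeError("fixed-point iteration did not converge")
```

With no shock ($\hat{\tau}_{ij} = 1$), the function returns $\hat{w} = 1$ and unchanged shares; a symmetric bilateral trade-cost decline raises the corresponding off-diagonal expenditure shares.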

Why is this “exact hat algebra”? When introducing material like that above, Costinot and Rodriguez-Clare (2014) say:

> We refer to this approach popularized by Dekle et al. (2008) as “exact hat algebra.”… One can think of this approach as an “exact” version of Jones’s hat algebra for reasons that will be clear in a moment.

### What is “calibrated share form”?

Dekle, Eaton, and Kortum (AERPP 2007, p.353-354; IMF Staff Papers 2008, p.522-527) derive the “exact hat algebra” results without reference to any prior work; presumably they arrived at the approach independently, without realizing its connection to techniques used previously in the computable general equilibrium (CGE) literature. The CGE folks call it “calibrated share form”, as noted by Ralph Ossa and Dave Donaldson.

A 1995 note by Thomas Rutherford outlines the procedure:

> In most large-scale applied general equilibrium models, we have many function parameters to specify with relatively few observations. The conventional approach is to calibrate functional parameters to a single benchmark equilibrium… Calibration formulae for CES functions are messy and difficult to remember. Consequently, the specification of function coefficients is complicated and error-prone. For applied work using calibrated functions, it is much easier to use the “calibrated share form” of the CES function. In the calibrated form, the cost and demand functions explicitly incorporate
>
> • benchmark factor demands
> • benchmark factor prices
> • the elasticity of substitution
> • benchmark cost
> • benchmark output
> • benchmark value shares

Rutherford shows that the CES production function $y(K,L) = \gamma \left(\alpha K^{\rho} + (1-\alpha)L^{\rho}\right)^{1/\rho}$ can be calibrated relative to a benchmark with output $\bar{y}$, capital $\bar{K}$, and labor $\bar{L}$ as $y = \bar{y} \left[\theta \left(\frac{K}{\bar{K}}\right)^{\rho} + (1-\theta)\left(\frac{L}{\bar{L}}\right)^{\rho}\right]^{1/\rho}$, where $\theta$ is the capital share of factor income. If we introduce “hat notation” with $\hat{y} = y/\bar{y}$, we get $\hat{y} = \left[\theta \hat{K}^{\rho} + (1-\theta)\hat{L}^{\rho}\right]^{1/\rho}$. Similar manipulations of the rest of the equations in the model deliver a means of computing counterfactuals in the CGE setting.
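The two forms are algebraically identical everywhere, not just at the benchmark, once $\theta$ is set to the benchmark capital share. Here is a quick numerical check in Python with made-up parameter values (the function names are mine):

```python
def ces(K, L, gamma, alpha, rho):
    """Original CES form: y = gamma * (alpha*K^rho + (1-alpha)*L^rho)^(1/rho)."""
    return gamma * (alpha * K**rho + (1 - alpha) * L**rho) ** (1 / rho)

def ces_calibrated(K, L, ybar, Kbar, Lbar, theta, rho):
    """Calibrated share form relative to the benchmark (ybar, Kbar, Lbar)."""
    return ybar * (theta * (K / Kbar)**rho + (1 - theta) * (L / Lbar)**rho) ** (1 / rho)

# Hypothetical benchmark data and the implied capital share of factor income:
# theta = alpha*Kbar^rho / (alpha*Kbar^rho + (1-alpha)*Lbar^rho)
gamma, alpha, rho = 2.0, 0.4, 0.5
Kbar, Lbar = 3.0, 5.0
ybar = ces(Kbar, Lbar, gamma, alpha, rho)
theta = alpha * Kbar**rho / (alpha * Kbar**rho + (1 - alpha) * Lbar**rho)
```

The appeal of the calibrated form is that its parameters $(\bar{y}, \bar{K}, \bar{L}, \theta)$ map directly to observable benchmark quantities and value shares, so no messy inversion for $(\gamma, \alpha)$ is needed.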

# What economic activities are “tradable”?

I’ve had a couple conversations with graduate students in recent months about classifying industries or occupations by their tradability, so here’s a blog post reviewing some of the relevant literature.

A number of papers emphasize predictions that differ for tradable and non-tradable activities. Perhaps the most famous is Atif Mian and Amir Sufi’s Econometrica article showing that counties with a larger decline in housing net worth experienced a larger decline in non-tradable employment.

Mian and Sufi define industries’ tradability by two different means, one yielding a discrete measure and the other continuous variation:

> The first method defines retail- and restaurant-related industries as non-tradable, and industries that show up in global trade data as tradable. Our second method is based on the idea that industries that rely on national demand will tend to be geographically concentrated, while industries relying on local demand will be more uniformly distributed. An industry’s geographical concentration index across the country therefore serves as an index of “tradability.”

Inferring tradability is hard. Since surveys of domestic transactions like the Commodity Flow Survey don’t gather data on the services sector, measures like “average shipment distance by industry” (Table 5a of the 2012 CFS) are only available for manufacturing, mining, and agricultural industries. Antoine Gervais and Brad Jensen have also pursued the idea of using industries’ geography concentration to reveal their tradability, allowing them to compare the level of trade costs in manufacturing and services. One shortcoming of this strategy is that the geographic concentration of economic activity likely reflects both sectoral variation in tradability and sectoral variation in the strength of agglomeration forces. That may be one reason that Mian and Sufi discretize the concentration measure, categorizing “the top and bottom quartile of industries by geographical concentration as tradable and non-tradable, respectively.”
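To illustrate the concentration-as-tradability idea, here is a minimal Python sketch with made-up employment counts. This raw concentration index is only one ingredient of more careful measures such as the Ellison-Glaeser index, which also corrects for plant-level lumpiness:

```python
import numpy as np

def spatial_concentration(emp):
    """Raw geographic concentration, G_i = sum_r (s_ir - x_r)^2, where s_ir is
    industry i's employment share in region r and x_r is region r's share of
    aggregate employment. A higher G suggests reliance on national rather than
    local demand, i.e., greater tradability.

    emp: (industries x regions) array of employment counts."""
    s = emp / emp.sum(axis=1, keepdims=True)   # each industry's spatial distribution
    x = emp.sum(axis=0) / emp.sum()            # aggregate spatial distribution
    return ((s - x) ** 2).sum(axis=1)
```

An industry spread exactly in proportion to aggregate activity gets G = 0 (maximally “local” by this metric); an industry concentrated in a few regions gets a high G.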

We might also want to speak to the tradability of various occupations. Ariel Burstein, Gordon Hanson, Lin Tian, and Jonathan Vogel’s recent paper on how the labor-market consequences of immigration vary with occupations’ tradability is a nice example. They use “the Blinder and Krueger (2013) measure of ‘offshorability’, which is based on professional coders’ assessments of the ease with which each occupation could be offshored” (p.20). When they look at industries (Appendix G), they use an approach similar to that of Mian and Sufi.

Are there other measures of tradability in the literature?