Archive for the ‘Measures, Statistics & Technicalities’ Category

On “hat algebra”

7 May 2018

This post is about “hat algebra” in international trade theory. Non-economists won’t find it interesting.

What is “hat algebra”?

Alan Deardorff’s Glossary of International Economics defines “hat algebra” as

The Jones (1965) technique for comparative static analysis in trade models. Totally differentiating a model in logarithms of variables yields a linear system relating small proportional changes (denoted by carats (^), or “hats”) via elasticities and shares. (As published it used *, not ^, due to typographical constraints.)
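In other words, for any variable x, define \hat{x} \equiv dx/x = d\ln x. Total differentiation in logs then turns multiplicative relationships in levels into linear ones in proportional changes: y = xz becomes \hat{y} = \hat{x} + \hat{z}, and a CES-type aggregate becomes a share-weighted average of the hats of its components.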

The Jones and Neary (1980) handbook chapter calls it a circumflex, not a hat, when explaining its use in proving the Stolper-Samuelson theorem:

a given proportional change in commodity prices gives rise to a greater proportional change in factor prices, such that one factor price unambiguously rises and the other falls relative to both commodity prices… the changes in the unit cost and hence in the price of each commodity must be a weighted average of the changes in the two factor prices (where the weights are the distributive shares of the two factors in the sector concerned and a circumflex denotes a proportional change)… Since each commodity price change is bounded by the changes in both factor prices, the Stolper-Samuelson theorem follows immediately.
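Concretely, the hat algebra behind that statement consists of the two zero-profit conditions of the 2×2 model, totally differentiated (a sketch in Jones's notation, where \theta_{Lj} and \theta_{Kj} are the cost shares of labor and capital in sector j, with \theta_{Lj} + \theta_{Kj} = 1):

\hat{p}_{1} = \theta_{L1} \hat{w} + \theta_{K1} \hat{r}

\hat{p}_{2} = \theta_{L2} \hat{w} + \theta_{K2} \hat{r}

Because each \hat{p}_{j} is a weighted average of \hat{w} and \hat{r}, the factor-price changes must straddle the commodity-price changes: if \hat{p}_{1} > \hat{p}_{2} and sector 1 is labor-intensive, then \hat{w} > \hat{p}_{1} > \hat{p}_{2} > \hat{r}, which is the magnification effect behind the theorem.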

I’m not sure when “hat algebra” entered the lexicon, but by 1983 Brecher and Feenstra were writing “Eq. (20) may be obtained directly from the familiar ‘hat’ algebra of Jones (1965)”.

What is “exact hat algebra”?

Nowadays, trade economists utter the phrase “exact hat algebra” a lot. What do they mean? Dekle, Eaton, and Kortum (2008) describe a procedure:

Rather than estimating such a model in terms of levels, we specify the model in terms of changes from the current equilibrium. This approach allows us to calibrate the model from existing data on production and trade shares. We thereby finesse having to assemble proxies for bilateral resistance (for example, distance, common language, etc.) or inferring parameters of technology.

Here’s a simple example of the approach. Let’s do a trade-cost counterfactual in an Armington model with labor endowment L, productivity shifter \chi, trade costs \tau, and trade elasticity \epsilon. The endogenous variables are wage w, income Y = w \cdot L, and trade flows X_{ij}. The two relevant equations are the market-clearing condition and the gravity equation.
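In levels, the two equations can be written as follows (a sketch assuming the standard CES/Armington normalization, in which the productivity shifter enters the trade share multiplicatively; \lambda_{ij} is the share of destination j's expenditure falling on goods from origin i, and balanced trade equates expenditure to income Y_{j}):

Y_{i} = \sum_{j} X_{ij}

X_{ij} = \lambda_{ij} Y_{j}, \qquad \lambda_{ij} = \frac{\chi_{i} (\tau_{ij} w_{i})^{-\epsilon}}{\sum_{k} \chi_{k} (\tau_{kj} w_{k})^{-\epsilon}}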

Suppose trade costs change from \tau_{ij} to \tau'_{ij}, a shock \hat{\tau}_{ij} \equiv \frac{\tau'_{ij}}{\tau_{ij}}. By assumption, \hat{\chi}=\hat{L}=1. We’ll solve for the endogenous variables \hat{\lambda}_{ij}, \hat{X}_{ij}, and \hat{w}_{i}. Define “sales shares” by \gamma_{ij}\equiv\frac{X_{ij}}{Y_{i}}. Algebraic manipulations deliver a “hat form” of the market-clearing condition:

\hat{Y}_{i} = \sum_{j} \gamma_{ij} \hat{X}_{ij} \qquad (1.1)

Similarly, we obtain a “hat form” of the gravity equation; since \hat{\chi}_{i}=1, the trade-share and trade-flow changes are

\hat{\lambda}_{ij} = \frac{(\hat{w}_{i} \hat{\tau}_{ij})^{-\epsilon}}{\sum_{k} \lambda_{kj} (\hat{w}_{k} \hat{\tau}_{kj})^{-\epsilon}}, \qquad \hat{X}_{ij} = \hat{\lambda}_{ij} \hat{X}_{j} \qquad (1.2)

Combining equations (1.1) and (1.2) under the assumptions that \hat{Y}_{i}=\hat{X}_{i} and \hat{\chi}=\hat{L}=1, we obtain a system of equations characterizing an equilibrium \hat{w}_i as a function of trade-cost shocks \hat{\tau}_{ij}, initial equilibrium shares \lambda_{ij} and \gamma_{ij}, and the trade elasticity \epsilon:

\hat{w}_{i} = \sum_{j} \gamma_{ij} \frac{(\hat{w}_{i} \hat{\tau}_{ij})^{-\epsilon}}{\sum_{k} \lambda_{kj} (\hat{w}_{k} \hat{\tau}_{kj})^{-\epsilon}} \hat{w}_{j}

If we use data to pin down \epsilon, \lambda_{ij}, and \gamma_{ij}, then we can feed in trade-cost shocks \hat{\tau}_{ij}, solve the system for \hat{w}_{i}, and recover the predicted counterfactual trade shares \lambda'_{ij} = \hat{\lambda}_{ij} \lambda_{ij}.
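To make the procedure concrete, here is a minimal Mata sketch of that fixed point for a made-up three-country example. The shares, elasticity, and shock are invented for illustration; a real application would read \lambda_{ij} and Y_{i} from trade data and use an estimated \epsilon.

mata:
// Toy exact-hat-algebra counterfactual with three countries.
// lam[i,j] = share of j's expenditure on goods from i (made up).
lam = (0.8, 0.1, 0.1) \ (0.1, 0.8, 0.1) \ (0.1, 0.1, 0.8)
Y = (1 \ 1 \ 1)                  // initial incomes (made up)
gam = lam :* (Y' :/ Y)           // sales shares: gam[i,j] = X_ij/Y_i
eps = 5                          // assumed trade elasticity
tauhat = J(3, 3, 1)
tauhat[1,2] = 0.9                // 10% cut in the cost of shipping from 1 to 2
what = J(3, 1, 1)                // initial guess: no wage changes
do {
    K = (what :* tauhat):^(-eps)                 // (what_i * tauhat_ij)^(-eps)
    lamhat = K :/ colsum(lam :* K)               // hat form of gravity, eq. (1.2)
    whatnew = rowsum(gam :* lamhat :* what')     // market clearing, eq. (1.1)
    whatnew = whatnew / (Y' * whatnew / sum(Y))  // numeraire: world income fixed
    d = mreldif(whatnew, what)
    what = 0.5*what + 0.5*whatnew                // damped update for stability
} while (d > 1e-12)
what                                             // counterfactual wage changes
end

A damped fixed-point iteration like this is one common way to solve such systems; Newton-type solvers work just as well.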

Why is this “exact hat algebra”? When introducing material like that above, Costinot and Rodriguez-Clare (2014) say:

We refer to this approach popularized by Dekle et al. (2008) as “exact hat algebra.”… One can think of this approach as an “exact” version of Jones’s hat algebra for reasons that will be clear in a moment.

What is “calibrated share form”?

Dekle, Eaton, and Kortum (AERPP 2007, pp. 353-354; IMF Staff Papers 2008, pp. 522-527) derive the “exact hat algebra” results without reference to any prior work. Presumably they arrived at the approach independently, without realizing its connection to techniques long used in the computable general equilibrium (CGE) literature, where the device is known as “calibrated share form”, as Ralph Ossa and Dave Donaldson have noted.

A 1995 note by Thomas Rutherford outlines the procedure:

In most large-scale applied general equilibrium models, we have many function parameters to specify with relatively few observations. The conventional approach is to calibrate functional parameters to a single benchmark equilibrium… Calibration formulae for CES functions are messy and difficult to remember. Consequently, the specification of function coefficients is complicated and error-prone. For applied work using calibrated functions, it is much easier to use the “calibrated share form” of the CES function. In the calibrated form, the cost and demand functions explicitly incorporate

  • benchmark factor demands
  • benchmark factor prices
  • the elasticity of substitution
  • benchmark cost
  • benchmark output
  • benchmark value shares

Rutherford shows that the CES production function y(K,L) = \gamma \left(\alpha K^{\rho} + (1-\alpha)L^{\rho}\right)^{1/\rho} can be calibrated relative to a benchmark with output \bar{y}, capital \bar{K}, and labor \bar{L} as y = \bar{y} \left[\theta \left(\frac{K}{\bar{K}}\right)^{\rho} + (1-\theta)\left(\frac{L}{\bar{L}}\right)^{\rho}\right]^{1/\rho}, where \theta is the capital share of factor income. If we introduce “hat notation” with \hat{y} = y/\bar{y}, we get \hat{y} = \left[\theta \hat{K}^{\rho} + (1-\theta)\hat{L}^{\rho}\right]^{1/\rho}. Similar manipulations of the model’s remaining equations deliver a means of computing counterfactuals in the CGE setting.
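To see where \theta comes from, evaluate the CES function at the benchmark, \bar{y} = \gamma \left(\alpha \bar{K}^{\rho} + (1-\alpha)\bar{L}^{\rho}\right)^{1/\rho}, and divide y by \bar{y}; the unknown parameters \gamma and \alpha collapse into

\theta = \frac{\alpha \bar{K}^{\rho}}{\alpha \bar{K}^{\rho} + (1-\alpha)\bar{L}^{\rho}}

Cost minimization implies \frac{\bar{r}\bar{K}}{\bar{w}\bar{L}} = \frac{\alpha \bar{K}^{\rho}}{(1-\alpha)\bar{L}^{\rho}}, so \theta is exactly the benchmark capital share of factor income. That is the point of the calibrated share form: unobservable function parameters are replaced by observable benchmark quantities.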


What economic activities are “tradable”?

6 January 2018

I’ve had a couple conversations with graduate students in recent months about classifying industries or occupations by their tradability, so here’s a blog post reviewing some of the relevant literature.

A number of papers emphasize predictions that differ for tradable and non-tradable activities. Perhaps the most famous is Atif Mian and Amir Sufi’s Econometrica article showing that counties with a larger decline in housing net worth experienced a larger decline in non-tradable employment.

Mian and Sufi define industries’ tradability by two different means, one yielding a discrete measure and the other continuous variation:

The first method defines retail- and restaurant-related industries as non-tradable, and industries that show up in global trade data as tradable. Our second method is based on the idea that industries that rely on national demand will tend to be geographically concentrated, while industries relying on local demand will be more uniformly distributed. An industry’s geographical concentration index across the country therefore serves as an index of “tradability.”

Inferring tradability is hard. Since surveys of domestic transactions like the Commodity Flow Survey don’t cover the services sector, measures like “average shipment distance by industry” (Table 5a of the 2012 CFS) are only available for manufacturing, mining, and agricultural industries. Antoine Gervais and Brad Jensen have also pursued the idea of using industries’ geographic concentration to reveal their tradability, allowing them to compare the level of trade costs in manufacturing and services. One shortcoming of this strategy is that the geographic concentration of economic activity likely reflects both sectoral variation in tradability and sectoral variation in the strength of agglomeration forces. That may be one reason that Mian and Sufi discretize the concentration measure, categorizing “the top and bottom quartile of industries by geographical concentration as tradable and non-tradable, respectively.”

We might also want to speak to the tradability of various occupations. A nice example is the recent paper by Ariel Burstein, Gordon Hanson, Lin Tian, and Jonathan Vogel, in which the labor-market consequences of immigration vary with occupations’ tradability. They use “the Blinder and Krueger (2013) measure of `offshorability’, which is based on professional coders’ assessments of the ease with which each occupation could be offshored” (p.20). When they look at industries (Appendix G), they use an approach similar to that of Mian and Sufi.

Are there other measures of tradability in the literature?

Where are the jobs? Don’t look too closely

31 July 2015

Robert Manduca, a Harvard sociology PhD student, has put together a nice visualization of employment data that he titled “Where Are the Jobs?” It’s a great map, modeled after the very popular dot map of US residents by ethnicity. The underlying data come from the Longitudinal Employer-Household Dynamics (LEHD) program, which is a fantastic resource for economics researchers.

[Map: screenshot of Robert Manduca’s “Where Are the Jobs?”]

Since every job is represented by a distinct dot, it’s very tempting to zoom in and look at the micro detail of the employment geography. Vox’s Matt Yglesias explored the map by highlighting and contrasting places like Chicago and Silicon Valley. Emily Badger similarly marveled at the incredible detail.

Unfortunately, at this super-fine geographical resolution, some of the data-collection details start to matter. The LEHD is based on state unemployment insurance (UI) program records and therefore depends on how state offices reporting the data assign employees to business locations. When an employer operates multiple establishments (an establishment is “a single physical location where business transactions take place or services are performed”), state UI records don’t identify the establishment-level geography:

A primary objective of the QWI is to provide employment, job and worker flows, and wage measures at very detailed levels of geography (place-of-work) and industry. The structure of the administrative data received by LEHD from state partners, however, poses a challenge to achieving this goal. QWI measures are primarily based on the processing of UI wage records which report, with the exception of Minnesota, only the employing employer (SEIN) of workers… However, approximately 30 to 40 percent of state-level employment is concentrated in employers that operate more than one establishment in that state. For these multi-unit employers, the SEIN on workers’ wage records identifies the employing employer in the ES-202 data, but not the employing establishment… In order to impute establishment-level characteristics to job histories of multi-unit employers, a non-ignorable missing data model with multiple imputation was developed.

These are challenging data constraints. I have little idea how to evaluate the imputation procedures. These things are necessarily imperfect. Let me just mention one outlier as a way of illustrating some limitations of the data underlying the dots.

Census block 360470009001004 (that’s a FIPS code: “36” is New York, “36047” is Kings County, and so forth) is in Brooklyn, between Court St and Adams St and between Livingston St and Joralemon St. The Borough Hall metro station is on the northern edge of the block. (Find it on the Census Block maps here.) A glance at Google Maps shows that this block is home to the Brooklyn Municipal Building, Brooklyn Law School, and a couple other buildings.

[Maps: census block 360470009001004 and its surroundings near Borough Hall]

What’s special about census block 360470009001004 is that it supposedly hosted 174,000 jobs in 2010, according to the LEHD Origin-Destination Employment Statistics (ny_wac_S000_JT01_2010.csv). This caught my eye because it’s the highest count in New York and really, really high. The other ten census blocks in the same census tract (36047000900) have fewer than 15,000 jobs collectively. This would be a startling geographic discontinuity in employment density. The census block with the second-highest employment in the entire state of New York has only 48,431 jobs.
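If you want to hunt for such outliers yourself, a minimal Stata sketch along these lines works. I’m assuming the standard LODES WAC layout, with w_geocode as the block identifier, c000 as total jobs, and cns15 as jobs in NAICS sector 61; check the variable names against your vintage of the file.

* Rank New York census blocks by imputed employment, LODES WAC 2010.
import delimited ny_wac_S000_JT01_2010.csv, stringcols(1) clear
rename *, lower
gsort -c000                       // sort blocks by total jobs, descending
list w_geocode c000 cns15 in 1/5  // the Borough Hall block tops the list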

The Brooklyn Municipal Building is big, but it sure doesn’t look like a place with 174,000 employees.

[Photo: the Brooklyn Municipal Building]

And other data sources that do report employment levels by establishment (rather than by state employer identification number) show that there aren’t 174,000 jobs on this block. County Business Patterns, a data set that is gathered at the establishment level, reports that total paid employment in March 2010 in ZIP code 11201, which contains this census block and many others, was only 52,261. Looking at industries, the LODES data report that 171,000 of the block’s 174,000 jobs in 2010 were in NAICS sector 61 (educational services). Meanwhile, County Business Patterns shows only 28,117 paid employees in NAICS 61 for all of Brooklyn (Kings County) in 2010. I don’t know the details of how the state UI records were reported or how the geographic assignments were imputed, but clearly many jobs are being assigned to this census block, far more than could plausibly be located there.

So you need to be careful when you zoom in. Robert Manduca’s map happens to not be too bad in this regard, because he limits the geographic resolution such that you can’t really get down to the block level. If you look carefully at the image at the top of this post and orient yourself using the second image, you can spot the cluster of “healthcare, education, and government” jobs on this block near Borough Hall just below Columbus Park and Cadman Plaza Park, which are jobless areas. But with 171,000 dots on such a tiny area, it’s totally saturated, and its nature as a massive outlier isn’t really visible. In more sparsely populated parts of the country, where census blocks are physically larger areas, these sorts of problems might be visually evident.

“Where Are The Jobs?” is an awesome mapping effort. It reveals lots of interesting information; it is indeed “fascinating” and contains “incredible detail”. We can learn a lot from it. The caveat is that the underlying data, like every other data source on earth, have some assumptions and shortcomings that make them imperfect when you look very, very closely.

P.S. That second-highest-employment block in New York state? It’s 360470011001002, across the street from the block in question. It supposedly hosted 45,199 jobs in NAICS sector 48-49 (transportation and warehousing), yet County Business Patterns reports only 18,228 employees in NAICS 48 for all of Kings County in 2010.

How not to estimate an elasticity

29 June 2014

The Cato Institute’s Randal O’Toole claims to debunk a recent paper documenting a “fundamental law of road congestion”.

In support of the induced-demand claim, Mann cites research by economists Matthew Turner of the University of Toronto and Gilles Duranton of the University of Pennsylvania. “We found that there’s this perfect one-to-one relationship,” Mann quotes Turner as saying. Mann describes this relationship as, “If a city had increased its road capacity by 10 percent between 1980 and 1990, then the amount of driving in that city went up by 10 percent. If the amount of roads in the same city then went up by 11 percent between 1990 and 2000, the total number of miles driven also went up by 11 percent. It’s like the two figures were moving in perfect lockstep, changing at the same exact rate.” If this were true, then building more roads doesn’t make traffic worse, as the Wired headline claims; it just won’t make it any better.

However, this is simply not true. Nor is it what Duranton & Turner’s paper actually said. The paper compared daily kilometers of interstate highway driving with lane kilometers of interstates in the urbanized portions of 228 metropolitan areas. In the average metropolitan area, it found that between 1983 and 1993 lane miles grew by 32 percent while driving grew by 77 percent. Between 1993 and 2003, lane miles grew by 18 percent, and driving grew by 46 percent.

That’s hardly a “perfect one-to-one relationship.”

The paper also calculated the elasticities of driving in relationship to lane kilometers. An elasticity of 2 would mean a 10 percent increase in lane miles would correspond with a 20 percent growth in driving; an elasticity of 1 would mean that lane miles and driving would track closely together. The paper found that elasticities were very close to 1 with standard errors of around 0.05. Even though this is contradicted by the previously cited data showing that driving grew much faster than lane miles, this is the source of Turner’s “perfect one-to-one relationship.”

My prior belief is that results published in the American Economic Review are unlikely to be debunked by a couple of paragraphs in a blog post. In this case, it’s fairly straightforward to explain why the average growth rates of lane kilometers and vehicle-kilometers traveled are not informative about the elasticity.

The lane-kilometer elasticity of VKT describes how changes in VKT relate to changes in lane kilometers. O’Toole tries to say something about this relationship by noting the average value of each. But describing the average growth rates does not say whether cities that experienced faster growth in lane kilometers also experienced faster growth in vehicle-kilometers traveled. It’s entirely possible for both averages to be positive and the elasticity relating them to be negative! Here are a few lines of Stata code to generate an example in which the averages are 32% and 77%, while the elasticity is -1.

clear
set obs 228                                  // one observation per metro area
set seed 1234                                // any seed: makes the example reproducible
gen delta_lane = .32 + rnormal(0,.2)         // lane-km growth, mean 32%
gen delta_VKT = (.77 + .32) - delta_lane + rnormal(0,.2)  // VKT growth, mean 77%, built-in elasticity -1
twoway (scatter delta_VKT delta_lane) (lfit delta_VKT delta_lane), graphregion(color(white))

That yields a figure like this:

[Figure: simulated delta_VKT against delta_lane with fitted line, slope -1]

Having made this econometric point, one can grab the data used in the Duranton and Turner paper, note the average values, and appropriately estimate the elasticity, revealing no contradiction whatsoever between these two moments.

use "Duranton_Turner_AER_2010.dta", clear
gen delta_VKT = log(vmt_IHU_93) - log(vmt_IHU_83)
gen delta_lane = log(ln_km_IHU_93) - log(ln_km_IHU_83)
summ delta*
reg delta_VKT delta_lane
twoway (scatter delta_VKT delta_lane) (lfit delta_VKT delta_lane), graphregion(color(white))

[Figure: delta_VKT against delta_lane in the Duranton-Turner data, with fitted line]

Across MSAs, the average VKT change was a 61 log-point increase, while the average lane-kilometer change was a 25 log-point increase. That’s a ratio greater than two, but the estimated elasticity is 0.955. Hence Matt Turner’s statement that he and Gilles Duranton found a one-to-one relationship. Their paper deals with various types of roads and uses instruments to infer the causal relationship, but I don’t need to describe those issues here. I’ve written enough to demonstrate why O’Toole’s blog post does not debunk the Duranton-Turner findings.

“Ricardian Productivity Differences and the Gains from Trade”

18 November 2013

You’ll recall that Ralph Ossa emphasized sectoral heterogeneity in trade elasticities as one reason the ACR formula might understate the gains from trade. I haven’t read it yet, but this new NBER WP by Andrei Levchenko and Jing Zhang also emphasizes the importance of sectoral heterogeneity in thinking about this topic:

[T]he simpler formulas that do not use information on sectoral trade volumes understate the true gains from trade dramatically, often by more than two-thirds. The error in the formulas across countries is strongly negatively correlated to the strength of Ricardian comparative advantage: the one-sector formula-implied gains understate the true gains from trade by more in countries with greater dispersion in sectoral productivity. The model-based exercise thus reinforces the main result of the paper that accounting for sectoral heterogeneity in productivity is essential for a reliable assessment of the gains from trade.

Melitz and Redding on heterogeneous firms and gains from trade

6 June 2013

In a recent VoxEU column, Marc Melitz and Stephen Redding describe the logic of Melitz (Ecma, 2003) and Arkolakis, Costinot, and Rodriguez-Clare (AER, 2012). Those should be familiar to Trade Diversion readers (e.g. ACR 2010 wp, Ossa 2012 wp). They then explain their new paper:

In Melitz and Redding (2013b), we show that firm-level responses to trade that generate higher productivity do in fact represent a new source of gains from trade.

  • We start with a model with heterogeneous firms, then compare it to a variant where we eliminate firm differences in productivity while keeping overall industry productivity constant.

We also keep all other model parameters (such as those governing trade costs and demand conditions) constant.

  • This ‘straw man’ model has no reallocations across firms as a result of trade and hence features no productivity response to trade.

Yet it is constructed so as to deliver the same welfare prior to trade liberalisation. We then show that, for any given reduction in trade costs, the model with firm heterogeneity generates higher aggregate welfare gains from trade because it features an additional adjustment margin (the productivity response to trade via reallocations). We also show that these differences are quantitatively substantial, representing up to a few percentage points of GDP. We thus conclude that firm-level responses to trade and the associated productivity changes have important consequences for the aggregate welfare gains from trade.

How can these findings be reconciled with the results obtained by Arkolakis, Costinot, and Rodriguez-Clare (2012)? Their approach compares models that are calibrated to deliver the same domestic trade share and trade elasticity (the sensitivity of aggregate trade to changes in trade costs). In so doing, this approach implicitly makes different assumptions about demand and trade costs conditions across the models that are under comparison (Simonovska and Waugh 2012). By assuming different levels of product differentiation across the models, and assuming different levels of trade costs, it is possible to have the different models predict the same gains from trade – even though they feature different firm-level responses. In contrast, our approach keeps all these ‘structural’ demand and cost conditions constant, and changes only the degree of firm heterogeneity (Melitz and Redding 2013b). This leads to different predictions for the welfare gains from trade.

One potential criticism of our approach is that one can estimate the trade elasticity (the sensitivity of aggregate trade to changes in trade costs) using aggregate trade data only – without requiring any specific assumptions about the firm-level responses to trade. Whatever assumptions are made about those firm-level responses (and the demand and trade-cost conditions), they should then be constructed so as to match that estimated aggregate elasticity. However, recent empirical work has shown that those underlying assumptions radically affect the measurement of the aggregate trade elasticity, and that this trade elasticity varies widely across sectors, countries, and the nature of the change in trade costs (see for example Helpman et al. 2008, Ossa 2012, and Simonovska and Waugh 2012). There is thus no single empirical trade-elasticity parameter that can be held constant across those different models.

Given the lack of a touchstone set of elasticities, we favour our approach to measuring the gains from trade arising from different models; one that maintains the same assumptions about demand and trade costs conditions across those models.

“Large cities” in the EU and US, redux

12 October 2012

The Economist is six months late to the party, but the latest print edition has a piece on that McKinsey comparison of American and European cities. I have some quibbles, again.

I don’t understand the piece’s opening, though it has little to do with what follows. It begins:

AMERICA is full of vast, empty spaces. Europe, by contrast, seems chock-a-block with humanity, its history shaped by a lack of continental elbowroom. Ironically, Europe’s congestion partly reflects the fact that its large cities suck up relatively few people.

Moving people across cities wouldn’t change the (unweighted) average population density of the US or EU, so what does this comparison mean? Europe is going to be full of humanity because the land area of the EU is roughly half that of the contiguous US (1.7m vs 3.1m square miles). Since larger cities are generally denser, the population-weighted density of Europe would rise if its large cities had higher population shares.

Never mind the elbowroom. The Economist continues:

Although America and the euro zone have similar total populations, America’s 50 largest metropolitan areas are home to 164m people, compared with just 102m in the euro area. This striking disparity has big consequences.

Differences in metropolitan populations may help explain gaps in productivity and incomes. Western Europe’s per-person GDP is 72% of America’s, on a purchasing-power-parity basis. A recent study by the McKinsey Global Institute, the consultancy’s research arm, reckons that some three-quarters of this gap can be chalked up to Europe’s relatively diminutive cities. More Americans than Europeans live in big cities: there is a particular divergence in the size of each region’s “middleweight” cities, those that teem just a little less than the likes of New York and Paris (see chart). And the premium earned by Americans in large cities relative to those in the countryside is larger than that earned by urban Europeans.

As I explained back in April, the MGI report does not say Europeans would reach American prosperity levels if the population shares of their large cities reached American levels:

The gap in per capita GDP between the US and Europe is about 35%, according to the MGI figures in Exhibit 2. The “large city” premia in the United States and Europe of 34% and 30% are virtually the same. That means that the difference in per capita income attributable to the difference in “large city” population shares is the large city premium (~30pp) times the difference in large city population shares (22pp). The six to seven percentage points explained by this difference in population shares is at best one-fifth of the 35% gap between US and EU incomes. You can confirm this quick calculation by studying the decomposition in MGI’s Exhibit 2. Moving more people into large cities wouldn’t meaningfully reduce the US-EU per capita income gap.

Look at Exhibit 2 for yourself:

[Chart: MGI Exhibit 2, decomposing the US-EU per capita GDP gap]

The Economist mentions the big-city population share and big-city premium components. But it neglects that 53 of the 74 percentage points decomposed in Exhibit 2 are strictly attributable to the difference in average income. Differences in metropolitan populations are not at the heart of the story.

After citing all the advantages of cities, the Economist considers two reasons why European cities aren’t as large as US cities: regulatory barriers and incomplete integration. While the former might matter, I put a lot of stock in the latter. As I explained in my prior post, Zipf’s law holds at the country level. Since no European state has a population close to 300 million, we should not expect any European city to approach the size of NYC or LA. Until intra-European mobility looks anything like intra-US mobility, I think we should expect Zipf’s law to hold at the country level. And since MGI used a common cutoff of population > 150,000 for defining a “large city”, it’s not at all surprising that a larger share of the US population lives in its large cities. I wrote before:

Given the UK population, increasing the fraction of UK residents who live in “large cities” with populations greater than 150,000 would require the emptying out of smaller metropolitan areas. While such migration is entirely possible, it would violate the expected city size distribution… If you know the populations of New York and London and are familiar with Zipf’s law, then it’s not at all surprising that a greater fraction of the US population is found in metropolitan areas above some common population threshold. I don’t think that tells us much about the economic mechanisms determining the role of US cities in the global economy.

Update: Related to my comparison of US and UK city-size distributions, see Henry Overman on the details of Zipf’s law for UK cities.

Atkin & Donaldson – Who’s Getting Globalized? The Size and Nature of Intranational Trade Costs

10 July 2012

David Atkin and Dave Donaldson are presenting this paper tomorrow afternoon at the NBER summer institute:

This paper uses a newly collected dataset on the prices of narrowly defined goods across many dispersed locations within multiple developing countries to address the question, How large are the costs that separate households in developing countries from the global economy? Guided by a flexible model of oligopolistic intermediation with variable mark-ups, our analysis proceeds in four steps. First, we measure total intranational trade costs (i.e., marginal costs of trading plus mark-ups on trading) using price gaps over space within countries—but we do so only among pairs of locations that are actually trading a good, by drawing on unique data on the location of production of each good. Second, we estimate, separately by location and commodity, the pass-through rate between the price at the location of production and the prices paid by inland consumers of the good. Our estimates imply that incomplete pass-through—and therefore, intermediaries’ market power—is commonplace, and that pass-through is especially low in remote locations. Third, we argue that our estimates of total trade costs (Step 1) and pass-through rates (Step 2) are sufficient to infer the primitive effect of distance on the marginal costs of trading; after correcting for the fact that mark-ups vary systematically across space, we find that marginal costs are affected by distance more strongly than typically estimated. Finally, we show that, in our model, the estimated pass-through rate (Step 2) is a sufficient statistic to identify the shares of social surplus (i.e., the gains from trade) accruing to inland consumers, oligopolistic intermediaries, and deadweight loss; applying this result we find that intermediaries in remote locations capture a considerable share of the surplus created by intranational trade.

You can listen to a podcast of Donaldson presenting a much earlier version of this work from the International Growth Centre. He does a really nice job of summarizing the issues involved in inferring trade costs from price data.

How big are the gains from trade?

28 May 2012

One of the most-mentioned trade papers of the last couple of years is “New Trade Models, Same Old Gains?” by Arkolakis, Costinot & Rodriguez-Clare, now published in the AER. They show that, for a broad class of models that includes the Armington, Eaton and Kortum (2002), and Melitz-Chaney frameworks, the gains from trade are characterized by a formula involving only two numbers: the domestic expenditure share and the trade elasticity. The former can be obtained straightforwardly from the data. The latter needs to be estimated, which is more involved but feasible. The ACR formula says that US welfare is about 1% higher than it would be under autarky.

In the words of Ralph Ossa, “either the gains from trade are small for most countries or the workhorse models of trade fail to adequately capture those gains.” Different people come down on different sides of that choice. Ed Prescott, for example, is clearly in the latter camp.

Ossa has a new paper, “Why Trade Matters After All“, aimed at reconciling this divide:

I show that accounting for cross-industry variation in trade elasticities greatly magnifies the estimated gains from trade. The main idea is as simple as it is general: While imports in the average industry do not matter too much, imports in some industries are critical to the functioning of the economy, so that a complete shutdown of international trade is very costly overall…

I develop a multi-industry Armington (1969) model of international trade featuring nontraded goods and intermediate goods and show what it implies for the measurement of the gains from trade…

Loosely speaking, the exponent of the aggregate formula is therefore the inverse of the average of the trade elasticities whereas the exponent of the industry-level formula is the average of the inverse of the trade elasticities which is different as long as the elasticities vary across industries.

allowing for cross-industry heterogeneity in the trade elasticities substantially increases the estimated gains from trade for all countries in the sample. For example, the estimated gains from trade of the US increase from 6.4 percent to 42.0 percent if I do not adjust for nontraded goods and intermediate goods and from 3.8 percent to 23.5 percent if I do…

the 10 percent most important industries account for more than 80 percent of the log gains from trade on average.
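Loosely, in ACR notation (a sketch, with the convention that trade elasticities \epsilon_{s} > 0; \lambda_{jj} is country j’s domestic expenditure share and \beta_{s} is industry s’s expenditure share), the comparison Ossa describes is between the one-sector gains from trade relative to autarky,

W_{j}/W_{j}^{A} = \lambda_{jj}^{-1/\epsilon}

and their multi-industry counterpart,

W_{j}/W_{j}^{A} = \prod_{s} \lambda_{jj,s}^{-\beta_{s}/\epsilon_{s}}

Since 1/\epsilon_{s} is convex in \epsilon_{s}, the weighted average of inverses \sum_{s} \beta_{s}/\epsilon_{s} exceeds the inverse of the weighted average; industries with small trade elasticities (imports that are hard to substitute away from) can therefore generate very large gains even when the average elasticity is high.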

Thinking about the firm-size distribution

27 May 2012

[Note: This post isn’t about international economics. I’ll use an example from trade to comment on a feature of the US real-estate market.]

In a letter to the Economist, the president of the National Association of Realtors writes:

[I]t is not true that large brokers dominate the industry. In fact, the real-estate industry consists mostly of independent contractors and small firms. Eight out of ten realtors work as independent contractors for their firms.

The second sentence appears to be a non sequitur, unless one thinks that existence is informative about dominance. It’s not. As a first pass, antitrust authorities would look at concentration ratios or Herfindahl–Hirschman indices, because dominance is about economic outcomes, such as market shares, not mere existence.

According to Bernard, Jensen, Redding, and Schott’s JEP survey, four percent of the 5.5 million US firms export. That makes 220,000 exporters. The top ten percent of those, just 22,000 exporters, are responsible for 96% of US exports. Would we say that “large exporters do not dominate exporting because the set of exporters consists mostly of small firms”? Of course not.

When thinking about the sales distribution, we care about the exponent of the power law characterizing it, not merely the fact that its support includes small sizes.
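A back-of-the-envelope sketch, assuming Pareto-distributed sales (only an approximation): if sales follow a power law with tail exponent a > 1, the share of total sales accruing to the top fraction q of firms is q^{(a-1)/a}. Matching the exporter facts above requires 0.1^{(a-1)/a} = 0.96, which gives a \approx 1.02, an extremely heavy tail close to Zipf’s law. That exponent, not the coexistence of many small firms alongside a few large ones, is what tells you whether the largest players dominate.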