Things I enjoyed learning about in November 2017

Contents

  1. A picture of the Alps from the Pyrenees
  2. Metaphors are just reversed motte-and-baileys
  3. Why can’t we just buy out the cronies?
  4. Wherefore variable assignments?
  5. Capitalism is very, very good at making potato chips
  6. Why hasn’t evolution invented the wheel?
    1. Biological barriers to wheeled organisms
    2. Disadvantages of wheels
  7. “The 99-percent CIs of 2SLS estimates include the OLS point estimate over 90% of the time”
  8. Economist reads 147 development papers and summarises them in one sentence each
  9. Glennerster on different models for scaling interventions
  10. Another Uber paper
  11. Tim Harford on fifty inventions that shaped the modern economy
    1. The elevator
    2. The index fund
    3. Epilogue: Light
  12. Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost bronze age cities

A picture of the Alps from the Pyrenees

Made possible, despite the curvature of the earth, by atmospheric refraction!

More explanation at the source.

Metaphors are just reversed motte-and-baileys

From FHI’s Ben Garfinkel:

a generalization of all these concepts:

Blurry sentence: a sentence with at least two possible interpretations, where one is both much more interesting and much less plausible than the other. The effect is to make the reader feel that the sentence is both interesting and plausible. The implausible interpretation also does not help the reader to understand the plausible one.

Here’s how the generalization works:

Basic metaphor: a blurry sentence where the plausible interpretation is the most salient one to the reader.

Deepity: a blurry sentence where neither the plausible interpretation nor the implausible interpretation is much more salient than the other, leaving the reader with a sense of vagueness or ineffability. The feeling is: “There is something true and interesting here, although I can’t quite put my finger on it.”

Motte and bailey: a blurry sentence where the implausible interpretation is the most salient one. If the reader objects that the sentence is in fact implausible, then the writer has the option of switching to only the plausible interpretation.

These revised definitions reveal that there is a spectrum here, such that the names “basic metaphor,” “deepity,” and “motte and bailey” are only calling out particular regions of the spectrum.

Why can’t we just buy out the cronies?

A puzzle in political science: free markets tend to be more efficient than kleptocracies. But although society as a whole stands to gain from moving to freer markets, those who hold power in the kleptocracy do not. There are, then, gains from trade! Suppose you “buy out” the kleptocrats, paying them more than what they currently gain through exploitation in exchange for giving up their privileges. That would be a Pareto improvement. So why doesn’t it happen?

Barry Weingast offers the following hypothesis on EconTalk: the exchange is not credible.

[21:40] Part of it is the difficulty of buying people out. If [this closed system] gives landowners a degree of privilege: wouldn’t we be better off in a [more open system]? And in order to get there, we’re gonna buy you out of your privilege?

The problem, I think, is how do you sustain that deal? Once that deal becomes public, the issue is: why should we buy them out? Yes, it’s right to get rid of their privileges, but why, by virtue of them being privileged, why should we buy them out? And the risk, therefore, is that the exchange is not credible, that in fact, they lose their privileges without being compensated. I think this is a problem in the US today, with respect to water rights.

Wherefore variable assignments?

Logic for Philosophy by Ted Sider (LfP) defines a PC-model as a pair $\langle D, I \rangle$ (a domain and an interpretation function) and treats variable assignments as not part of a model (LfP 92f).

There are two things one might want from whatever one calls a “model”:

  • the model contains all the information you need to determine the truth-value of any sentence in that model. In this case one should call $\langle D, I \rangle$ a model, as Sider does.
  • the model contains all the information you need to determine the truth-value of any formula (open or closed) in that model. In this case one should call the triple $\langle D, I, g \rangle$ a model.

As far as I can tell, this is a merely terminological choice without further significance.

Now: we are almost always only interested in sentences, not open formulas. In that case, why bother with the concept of a variable assignment at all? It seems inelegant and confusing to define this object which has no impact on the truth-values of any sentences. We could define the $\forall$-clauses of a valuation function $V_M$ for a PC-model $M = \langle D, I \rangle$ as:

  • For any one-place predicate $F$ and any variable $\alpha$: $V_M(\forall\alpha F\alpha) = 1$ iff for all $u \in D$, $u \in I(F)$
  • For any two-place predicate $R$ and any variables $\alpha$ and $\beta$: $V_M(\forall\alpha\forall\beta R\alpha\beta) = 1$ iff for all $u \in D$ and all $v \in D$, $\langle u, v \rangle \in I(R)$

And this can easily be generalised to any sentence of the form $\forall\alpha_1 \dots \forall\alpha_n\, \Pi\alpha_1 \dots \alpha_n$, where $\Pi$ is an $n$-place predicate.

But what about sentences of the form $\forall\alpha(F\alpha \wedge G\alpha)$? We can also define their valuation:

  • For any one-place predicates $F$ and $G$ and any variable $\alpha$: $V_M(\forall\alpha(F\alpha \wedge G\alpha)) = 1$ iff for all $u \in D$, ($u \in I(F)$ and $u \in I(G)$).

Now we can see where this is going. To define $V_M(\forall\alpha\,\phi)$ for arbitrary $\phi$, we will need a different clause for every form that $\phi$ could take.

For example, if $\phi$ itself contains further connectives and quantifiers, the clause will be quite long. And it’s only one of infinitely many!

The only way to avoid this definition with infinitely many clauses is to define $V(\forall\alpha\,\phi)$ recursively in terms of $V(\phi)$. But since $\phi$ is an open formula, $V(\phi)$ only makes sense relative to a variable assignment. We need to either define models as triples $\langle D, I, g \rangle$ or define a doubly relativised valuation function $V_{M,g}$.
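To make the point concrete, here is a minimal sketch in Python (my own illustration; the class names and `valuate` function are invented, not Sider’s notation). The $\forall$-clause recurses on the open body of the formula, and that recursion is exactly what forces the valuation to carry a variable assignment `g` as an extra parameter:

```python
from dataclasses import dataclass

@dataclass
class Atom:       # a predicate applied to a variable, e.g. Fx
    pred: str
    var: str

@dataclass
class Conj:       # conjunction of two formulas
    left: object
    right: object

@dataclass
class Forall:     # universal quantification over a variable
    var: str
    body: object

def valuate(formula, domain, interp, g):
    """Truth-value of `formula` in the model <domain, interp>, relative
    to the variable assignment g (a dict from variables to objects)."""
    if isinstance(formula, Atom):
        return g[formula.var] in interp[formula.pred]
    if isinstance(formula, Conj):
        return (valuate(formula.left, domain, interp, g)
                and valuate(formula.right, domain, interp, g))
    if isinstance(formula, Forall):
        # One clause covers every possible body, because we recurse on
        # the open formula under a modified assignment g[var := u].
        return all(valuate(formula.body, domain, interp, {**g, formula.var: u})
                   for u in domain)
    raise TypeError(f"unknown formula: {formula!r}")

# "Everything is F and G" in a model with D = {1, 2, 3}:
D = {1, 2, 3}
I = {"F": {1, 2, 3}, "G": {1, 2}}
sentence = Forall("x", Conj(Atom("F", "x"), Atom("G", "x")))
print(valuate(sentence, D, I, {}))  # False, since 3 is not in I("G")
```

One quantifier clause handles every formula, at the price of relativising truth to `g`, which mirrors the $V_{M,g}$ option above.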

Capitalism is very, very good at making potato chips

This hour-and-a-half episode of EconTalk is an amazingly deep look at how modern producers and retailers of salty snacks make sure the shelves are stocked and their products get noticed. The host, Russ Roberts, is a free-market devotee, and his enthusiasm is contagious in this conversation.

Three choice quotes:

All food is a perishable product, especially when you use natural ingredients. The film reduced the amount of staling that occurred. The nature of the bag. It used to be made out of paper. So, you’d buy a bag of chips or you might even have the store make you the bag of chips right there; and then a day later it’s gone stale. So, the technology of the actual film to preserve freshness is significant. On top of that the film’s kind of interesting because you can’t really take it to altitude, so certain places we have to have plants. Like, the Rockies are places we can’t transport to, because the bag will pop. The pressure changes. If you’ve ever traveled to altitude with snacks you’ll see this happen. The bag kind of inflates. We figured out how to overcome those difficulties by keeping the right partial pressures inside of the bag, etc.

What happens is as the potatoes make their way out of that tumbler, where the temperature of the tumbler is hotter, that’s where they get the kind of crispiness they get; and then they get funneled into a sifter; and they come out and get put onto a conveyor belt. That conveyor belt is moving between 45 and 60 miles an hour at that point. Fast. If you saw that you would note it. They’re flying. You wouldn’t want to reach in and grab. They then proceed to a photography station, where overhead there is a picture that gets taken of a section of the conveyor belt. It’s all measured beforehand, so the machine and the photography equipment know every kind of piece of what the conveyor belt looks like. It then maps where the various chips are on there; and then they undergo a kind of quality control algorithm. So, they go through and say: Hey, this chip is within tolerances of what it should look like. So there’s size, how cooked it is, and whether it’s got the right amount of flavor on it. Texture. Shape. So if they are kind of overly curled on themselves. There are different tolerances for different kinds of things. You can change it; our manufacturing folks can go in there and say: Hey, we want to produce chips that look a little bit more like this; get rid of all the other ones. And that would just be a matter of sifting, because we produce enough chips that you can produce different textures that come out of there. Same thing with like a Tostito–to get the scoop shape it goes through a little press where it prints the thing and then it comes through. Now there’s a gap in the conveyor belt. As the chip goes over that gap, it will literally jump the gap. Because it’s moving 60 mph. It has enough speed and the wind resistance on the chip.

We actually ran into this effect over the Fourth of July holiday. We had made some decisions about what we were able to produce and put up there, and we noticed we had one particular flavor of Lay’s Honey Barbecue that ended up going out of stock. […] We went and reviewed all of our schematics–the plans we use to merchandise stuff on the shelves. The visual layout of what goes where. It’s a map. We looked at it and said: How do we make sure that doesn’t happen again for that particular store. That’s called precision merchandising, making a very customized map for one particular space. It was a learning experience. We don’t ever want to be out of stock on anything. But given certain things, we’d rather be out of the Honey Barbecue Lay’s than out of Lay’s Classic. Because there’s a lot more people who are going to be angry at you for that than the one or two people. Other people might settle for Honey Lay’s or Barbecue Lay’s.

Why hasn’t evolution invented the wheel?

A surprisingly excellent Wikipedia article.

Given the ubiquity of the wheel in human technology, and the existence of biological analogues of many other technologies (such as wings and lenses), the lack of wheels in the natural world would seem to demand explanation—and the phenomenon is broadly explained by two main factors. First, there are several developmental and evolutionary obstacles to the advent of a wheel by natural selection, addressing the question “Why can’t life evolve wheels?” Secondly, wheels are often at a competitive disadvantage when compared with other means of propulsion (such as walking, running, or slithering) in natural environments, addressing the question “If wheels could evolve, why might they be rare nonetheless?” This environment-specific disadvantage also explains why at least one historical civilization abandoned the wheel as a mode of transport. […]

Biological barriers to wheeled organisms

Richard Dawkins describes the matter: “The wheel may be one of those cases where the engineering solution can be seen in plain view, yet be unattainable in evolution because it lies [on] the other side of a deep valley, cutting unbridgeably across the massif of Mount Improbable.” In such a fitness landscape, wheels might sit on a highly favorable “peak”, but the valley around that peak may be too deep or wide for the gene pool to migrate across by genetic drift or natural selection. […]

The greatest anatomical impediment to wheeled multicellular organisms is the interface between the static and rotating components of the wheel. In either a passive or driven case, the wheel (and possibly axle) must be able to rotate freely relative to the rest of the machine or organism.[Note 2] Unlike animal joints, which have a limited range of motion, a wheel must be able to rotate through an arbitrary angle without ever needing to be “unwound”. As such, a wheel cannot be permanently attached to the axle or shaft about which it rotates (or, if the axle and wheel are fixed together, the axle cannot be affixed to the rest of the machine or organism). There are several functional problems created by this requirement. […]

In animals, motion is typically achieved by the use of skeletal muscles, which derive their energy from the metabolism of nutrients from food. Because these muscles are attached to both of the components that must move relative to each other, they are not capable of directly driving a wheel. […]

Another potential problem that arises at the interface between wheel and axle (or axle and body) is the limited ability of an organism to transfer materials across this interface. If the tissues that make up a wheel are living, they will need to be supplied with oxygen and nutrients and have wastes removed to sustain metabolism. […]

Disadvantages of wheels

Wheels incur mechanical and other disadvantages in certain environments and situations that would represent a decreased fitness when compared with limbed locomotion. These disadvantages suggest that, even barring the biological constraints discussed above, the absence of wheels in multicellular life may not be the “missed opportunity” of biology that it first seems. In fact, given the mechanical disadvantages and restricted usefulness of wheels when compared with limbs, the central question can be reversed: not “Why does nature not produce wheels?”, but rather, “Why do human vehicles not make more use of limbs?” The use of wheels rather than limbs in many engineered vehicles can likely be attributed to the complexity of design required to construct and control limbs, rather than to a consistent functional advantage of wheels over limbs. […]

Although stiff wheels are more energy efficient than other means of locomotion when traveling over hard, level terrain (such as paved roads), wheels are not especially efficient on soft terrain such as soil, because they are vulnerable to rolling resistance. In rolling resistance, a vehicle loses energy to the deformation of its wheels and the surface on which they are rolling. […]

When moving through a fluid, rotating systems carry an efficiency advantage only at extremely low Reynolds numbers (i.e. viscosity-dominated flows) such as those experienced by bacterial flagella, whereas oscillating systems have the advantage at higher (inertia-dominated) Reynolds numbers. Whereas ship propellers typically have efficiencies around 60% and aircraft propellers up to around 80% (achieving 88% in the human-powered Gossamer Condor), much higher efficiencies, in the range of 96%–98%, can be achieved with an oscillating flexible foil like a fish tail or bird wing.

Wheels are prone to slipping—an inability to generate traction—on loose or slippery terrain. Slipping wastes energy and can potentially lead to a loss of control or becoming stuck, as with an automobile on mud or snow. This limitation of wheels can be seen in the realm of human technology: in an example of biologically inspired engineering, legged vehicles find use in the logging industry, where they allow access to terrain too challenging for wheeled vehicles to navigate.

“The 99-percent CIs of 2SLS estimates include the OLS point estimate over 90% of the time”

Marc Bellemare writes:

For a while now, I have been thinking that with the Credibility Revolution having brought the focus of applied micro back to getting causal (unbiased) estimates, the next logical step–the Second Credibility Revolution, so to speak–should be for the literature to focus on getting the standard errors right. […]

The 99-percent confidence intervals (CIs) of those 2SLS estimates include the OLS point estimate over 90 percent of the time. They include the full OLS 99-percent CI over 75 percent of the time.
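As a toy illustration of how this can happen, here is a simulation sketch (my own made-up data-generating process, not Bellemare’s sample of papers): with a weak instrument, the 2SLS standard errors are large, so the 99-percent CI easily swallows the biased OLS point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

z = rng.normal(size=n)                  # instrument
u = rng.normal(size=n)                  # unobserved confounder
x = 0.15 * z + u + rng.normal(size=n)   # endogenous regressor, weak first stage
y = 1.0 * x + u + rng.normal(size=n)    # true causal effect of x is 1.0

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS: biased upward here, because the confounder u enters both x and y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Just-identified 2SLS: beta = (Z'X)^{-1} Z'y
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)

# Homoskedastic 2SLS covariance: sigma^2 (Z'X)^{-1} Z'Z (X'Z)^{-1}
resid = y - X @ beta_iv
sigma2 = resid @ resid / (n - 2)
A = np.linalg.inv(Z.T @ X)
se = np.sqrt(np.diag(sigma2 * A @ Z.T @ Z @ A.T))

# 2.576 is the 99% critical value of the standard normal
lo, hi = beta_iv[1] - 2.576 * se[1], beta_iv[1] + 2.576 * se[1]
print(f"OLS slope:  {beta_ols[1]:.2f}")
print(f"2SLS slope: {beta_iv[1]:.2f}, 99% CI [{lo:.2f}, {hi:.2f}]")
print("99% CI contains the OLS point estimate:", lo <= beta_ols[1] <= hi)
```

With this weak first stage the 2SLS CI is several times wider than the OLS one, so containment is the typical outcome, which is the pattern Bellemare highlights.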

Economist reads 147 development papers and summarises them in one sentence each

Over at Development Impact:

  • “By decreasing bribes, our intervention [in the Kyrgyz Republic] reduces the average cost for firms and the price they charge to consumers. Revenues from patents increase significantly.” (Amodio et al.) #RCT
  • “Randomized anti-corruption audits [in Brazil] lead to higher economic activity and a reallocation of resources across firms…. Anti-corruption audits improve the performance of firms directly involved in corruption, reduce within firm misallocation of capital and labor, and allow firms to expand to new markets.” (Colonneli & Prem)
  • “Being a political supporter of the party in power increases the probability of having a public sector job by 10.5 percentage points (a 47% increase).” (Colonneli, Teso, & Prem)

Glennerster on different models for scaling interventions

At her blog:

So, is this the recipe for moving from innovation to testing to scale: Evaluate a program, work with an organization that can scale, replicate in multiple contexts, and scale to improve millions of lives?

That model can work. It was appropriate for the graduation program, which is expensive and thus required a higher burden of evidence of effectiveness before scaling. The graduation program is also very complex, implying the results may well vary by implementer, so it is worth testing with several implementers and scaling with those implementers who saw success.

But we shouldn’t conclude that this is the only model for getting to scale. In the rest of this post I will discuss examples of the “innovate, test, scale” approach in which there was no need for replication in multiple contexts; others where the testing was (appropriately) done with a different type of organization than that which scaled it; and finally, examples of evidence impacting millions of lives when it was not a program that was tested, but a theory.

Another Uber paper

Marginal Revolution:

Sorry, but the laws of supply and demand cannot be so easily ignored. If Uber holds fares constant, the higher net wage (tips plus fares) will attract more drivers but as the number of drivers increases their probability of finding a rider will fall. The drivers will earn more when driving but spend less time driving and more time idling. In other words, tipping will increase the “driving wage,” but reduce paid driving-time until the net hourly wage is pushed back down to the market wage.

The paper:

we estimate the effects of sudden fare changes on market outcomes, focusing on the supply-side. We explore both the short-run dynamics of market adjustment, as well as the eventual long-run equilibrium. We find that the driver hourly earnings rate—essentially the market equilibrium wage—moves immediately in the same direction as a fare change, but that these effects are short-lived. The prevailing wage returns to its pre-change level within about 8 weeks. This return is achieved primarily through permanent changes in driver “utilization,” or the fraction of hours-worked that are spent transporting passengers. Our results imply that the driver supply of labor to ride-sharing markets is highly elastic, most likely because drivers face no quantity restrictions on how many hours to supply and new drivers face minimal barriers to entry.
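The equilibration mechanism is simple enough to simulate. Here is a deliberately stylized sketch (my own toy model, not the paper’s: rider demand is held fixed and every number is made up): after a fare rise, drivers enter until dilution of paid time pushes the hourly wage back to the outside option.

```python
fare_per_trip = 10.0       # driver earnings per trip (made up)
trips_per_week = 100_000   # rider demand, held fixed (made up)
market_wage = 15.0         # hourly wage available elsewhere (made up)
hours_per_driver = 30.0    # weekly hours supplied per driver

def hourly_wage(n_drivers: int) -> float:
    # Fixed total fare revenue spread over total driver-hours
    return fare_per_trip * trips_per_week / (n_drivers * hours_per_driver)

# Free entry: drivers keep joining while driving beats the outside option
n = 1000
while hourly_wage(n) > market_wage:
    n += 1

utilization = trips_per_week / (n * hours_per_driver)
print("equilibrium drivers:", n)                          # ~2223
print("hourly wage:", round(hourly_wage(n), 2))           # back to ~15
print("trips per driver-hour:", round(utilization, 2))    # utilization fell
```

Raising `fare_per_trip` in this toy model changes the equilibrium number of drivers and their utilization, but not the long-run hourly wage, which is the paper’s headline pattern.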

By default, I’m highly sceptical of the exogeneity of the “sudden fare changes”. The authors argue (with my thoughts in brackets):

There are several justifications for treating fare change variation as exogenous. First, we note that as every city in the panel had fare changes, it is not the case that latent differences exist between the kinds of cities that have fare changes and those that do not. [Basically I’m unmoved by this.] Second, Figure 1 shows that many changes took place in numerous cities nearly simultaneously, making it clear that highly city-specific explanations were not driving many of the fare changes. [But they could be driven by nation-wide effects…] Note that in both 2015 and 2016, many cities experienced large cuts in the base fare right after the start of the new year. There are several cases in the data where cities all have fare changes within a fairly small window, but the precise timing differs by only a few weeks—it seems unlikely that the precise sequence of cuts reflects important latent differences. [This seems a lot more convincing.] Finally, leaks by various media sources indicate that a relatively simple spreadsheet-based analysis was used to model city outcomes, making it doubtful that decision-makers were confidently conditioning on future potential outcomes. [Seems quite speculative; we don’t know how the decision was actually made.] To the extent cities were selected for fare changes on the basis of observable attributes, we know approximately what those attributes are, and we can look for pre-treatment trends in those outcomes. In addition to checking for evidence of selection, our UberBlack within-city research design is not subject to the same concern that latent city-specific factors were the cause of fare changes, as is the case with the between-city design. […] Despite their differences, our findings from the between- and within-city analyses are reassuringly similar.

Tim Harford on fifty inventions that shaped the modern economy

Quotes from his new book.

The elevator

One day, on her way to work, a woman decides that she’s going to take a mass-transit system instead of her usual method. Just before she gets on board, she looks at an app on her phone that gives her position with the exact latitude and longitude. The journey is smooth and perfectly satisfactory, despite frequent stops, and when the woman disembarks she checks her phone again. Her latitude and longitude haven’t changed at all. What’s going on? The answer: this lady works in a tall office building, and rather than taking the stairs, she’s taken the lift. We don’t tend to think of elevators as mass-transportation systems, but they are: they move hundreds of millions of people every day, and China alone is installing 700,000 elevators a year. The tallest building in the world, the Burj Khalifa in Dubai, has more than 300,000 square metres of floor space; the brilliantly engineered Sears Tower in Chicago has more than 400,000. Imagine such skyscrapers sliced into fifty or sixty low-rise chunks, then surrounding each chunk with a car park and connecting all the car parks together with roads, and you’d have an office park the size of a small town. The fact that so many people can work together in huge buildings on compact sites is possible only because of the elevator.

The index fund

Index funds now seem completely natural – part of the very language of investing. But as recently as 1976, they didn’t exist. Before you can have an index fund, you need an index. In 1884, a financial journalist called Charles Dow had the bright idea that he could take the price of some famous company stocks and average them, then publish the average going up and down. He ended up founding not only the Dow Jones company, but also the Wall Street Journal. […]

And Samuelson went further: he said that since professional investors didn’t seem to be able to beat the market, somebody should set up an index fund – a way for ordinary people to invest in the performance of the stock market as a whole, without paying a fortune in fees for fancy professional fund managers to try, and fail, to be clever. At this point, something interesting happened: a practical businessman actually paid attention to what an academic economist had written. John Bogle had just founded a company called Vanguard, whose mission was to provide simple mutual funds for ordinary investors – no fuss, no fancy stuff, low fees. And what could be simpler and cheaper than an index fund – as recommended by the world’s most respected economist? And so Bogle decided he was going to make Paul Samuelson’s wish come true. He set up the world’s first index fund, and waited for investors to rush in.

Epilogue: Light

Back in the mid-1990s the economist William Nordhaus conducted a series of simple experiments. One day, for example, he used a prehistoric technology: he lit a wood fire. Humans have been gathering and chopping and burning wood for tens of thousands of years. But Nordhaus also had a piece of high-tech equipment with him: a Minolta light meter. He burned 20 pounds of wood, kept track of how long it burned for and carefully recorded the dim, flickering firelight with his meter. Another day, Nordhaus bought a Roman oil lamp – a genuine antique, he was assured – fitted it with a wick, and filled it with cold-pressed sesame oil. He lit the lamp and watched the oil burn down, again using the light meter to measure its soft, even glow. Nordhaus’s open wood fire had burned for just three hours when fuelled with 20 pounds of wood. But a mere eggcup of oil burned all day, and more brightly and controllably. […]

To see why [keeping track of inflation] is difficult, consider the price of travelling from – say – Lisbon in Portugal to Luanda in Angola. When the journey was first made by Portuguese explorers, it would have been an epic expedition, taking months. Later, by steam ship, it would have taken a few days. Then by plane, a few hours. An economic historian who wanted to measure inflation could start by tracking the price of passage on the steamer. But then, once an air route opens up, which price do you look at? Perhaps you simply switch to the airline ticket price once more people start flying than sailing. But flying is a different service – faster, more convenient. If more travellers are willing to pay twice as much to fly, it hardly makes sense for inflation statistics to record that the cost of the journey has suddenly doubled. How, then, do we measure inflation when what we’re able to buy changes so radically over time? […]

Because we don’t have a good way to compare an iPod today to a gramophone a century ago, we don’t really have a good way to quantify how much all the inventions described in this book have really expanded the choices available to us. We probably never will. But we can try – and Bill Nordhaus was trying as he fooled around with wood fires, antique oil lamps and Minolta light meters. He wanted to unbundle the cost of a single quality that humans have cared deeply about since time immemorial, using the state-of-the-art technology of different ages: illumination. That’s measured in lumens, or lumen-hours. A candle, for example, gives off 13 lumens while it burns; a typical modern light bulb is almost a hundred times as bright as that. […]

Switch off a light bulb for an hour and you’re saving illumination that would have cost our ancestors all week to create. It would have taken Benjamin Franklin’s contemporaries all afternoon. But someone in a rich industrial economy today could earn the money to buy that illumination in a fraction of a second.

Here’s Nordhaus’ paper, “Do Real-Output and Real-Wage Measures Capture Reality?”

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost bronze age cities

The paper is here, and has also been picked up by the Washington Post.

Basically they had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model in which the expected amount of trade between any two cities is inversely proportional to the distance between them. Given the data on the amount of trade, and the locations of the known cities, they are able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities [….]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

I want to caveat that I don’t fully understand the trade model, but the level of clever data sleuthing in this paper is just amazing.
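To get some intuition for how a gravity model can pin down a location, here is a deliberately crude sketch (all coordinates, trade counts, and the fixed distance elasticity below are made up for illustration; the paper’s structural Ricardian model estimates these objects jointly):

```python
import numpy as np
from scipy.optimize import minimize

# (longitude, latitude) of three hypothetical known cities
known = np.array([[35.0, 38.5], [34.2, 39.1], [36.1, 39.4]])
trade = np.array([40.0, 25.0, 10.0])  # transactions with the lost city (made up)
theta = 2.0                           # distance elasticity, fixed here

def loss(coords):
    # Distance from the candidate location to each known city
    d = np.maximum(np.linalg.norm(known - coords, axis=1), 1e-6)
    pred = d ** (-theta)                  # gravity: trade ~ distance^(-theta)
    pred *= trade.sum() / pred.sum()      # rescale to match total observed trade
    return np.sum((pred - trade) ** 2)

res = minimize(loss, x0=known.mean(axis=0) + 0.3)
print("estimated lost-city location (lon, lat):", res.x)
```

The candidate location is pulled toward cities it trades with heavily and away from ones it trades with rarely; with enough known trading partners, the minimum becomes tightly localized, which is why the paper’s confidence regions can be so small.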

Here’s the headline finding:

Figures 3-6 show our point estimates and confidence regions for each lost city separately. Each figure depicts a map of the region containing the Anatolian cities where Assyrian merchants operate.

[…] For most cities, our estimates are very tight, in the sense that the confidence area is at most 100km wide, and often much smaller. […]

We add to those maps two other locations. The “F” sign corresponds to the site suggested by historian Massimo Forlanini (Forlanini, 2008); the “B” sign corresponds to the site suggested by historian Gojko Barjamovic (Barjamovic, 2011). This allows us to compare our estimates, obtained by a purely quantitative method—a structural gravity estimation, to those obtained by historians from a purely qualitative method—based on ancient itineraries, topographical studies, surviving toponyms, etc. We consider this comparison to be an informal external validity test.

In four cases, Durhumit, Ninassa, Sinahuttum, and Washaniya, our gravity estimates for the location of lost cities are extremely close to the conjecture of at least one of the two historians. In two of those cases, both historians either agree on the same site (Sinahuttum), or their conjectures are very close to each other (Ninassa). In the other two cases however (Durhumit and Washaniya), the historians strongly disagree; in those two cases, our gravity estimates are closer to the proposals made by Barjamovic than Forlanini. We view these cases where our structural gravity estimates agree with at least one historian’s proposal as an endorsement that the true location of those cities is indeed at or very near those sites. Again, as we do not use the historians’ conjectures as input in our estimation, those converging views are unlikely to be coincidental.

The paper has much, much more. The appendix goes into amazing detail on the clever data treatment methods:

To impose constraints on the location of lost cities, using information contained in multi-stop merchant itineraries, we proceed as follows.

First, we collect systematic information on multiple itineraries of either merchants or caravans in our corpus of texts. We keep only itineraries with at least one stop in a lost city. Second, we compute the following two statistics for all segments of those itineraries from one known city to another known city: the average travel length ||average segment||, and the standard deviation of the segment lengths ||s.d. segment||. Our measure of length is the optimal travel time between the two ends of each segment, defined in appendix B [I quote this below].

Third, we jointly impose on all lost cities the “short detour” and “pit stop” constraints defined in section 3.2, using all mentions of itineraries. In other words, for all lost cities jointly, we search for all grid points such that both constraints are satisfied.

Appendix B, on computing optimal travel time:

We collect elevation data on a large area around central Anatolia, in order to avoid a core-periphery bias – the tendency to have more road-crossings in locations in the center of the map. […]

Given this fine elevation grid, we compute travel times between any pixel and its eight neighbors (North-South, East-West, and diagonally). First, we compute the horizontal distance between any two contiguous pixels (in meters), and the signed elevation difference between them (in meters). We then apply the formula from Langmuir (1984) to translate distances and slope into travel times (in seconds). The parametrization of Langmuir’s formula we apply for overland travel is as follows. We assume it takes 0.72 seconds to travel 1 meter horizontally; it takes an additional 6 seconds for each vertical meter uphill; going downhill 1 vertical meter on a gentle slope (less than or equal to 21.25%) saves 2 seconds per vertical meter; going downhill on a steep slope (more than 21.25%) adds an additional 2 seconds per vertical meter.

For maritime travel, we assume traveling by boat is 10% faster than traveling over a featureless plain overland. However, in order to avoid many very short trips by sea, we assume it takes 1 hour to embark on a boat (from land to sea), and 1 hour to disembark (from sea to land). This assumption of a fixed cost of loading/unloading boats creates a natural tendency for sea ports to emerge where natural terrestrial routes (e.g. a valley) connect to the sea. Finally, we manually code major lakes, and three main rivers, the Euphrates, the Red and Green rivers in Turkey. For the three rivers, we collect information on impassable segments of the river (e.g. deep and steep canyon), as well as easy crossings/fords known to have been used in the Bronze Age, using data from Palmisano (2013). We impose a prohibitive penalty for crossing major lakes and those three rivers over segments where they are deemed impassable, and allow crossing as if it were on dry land for the river crossings.
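Here is the overland part of that parametrization as a small function (a direct transcription of the numbers quoted above, not the authors’ code; the maritime rules would add a 10% speedup plus fixed embark/disembark costs):

```python
def langmuir_seconds(horizontal_m: float, vertical_m: float) -> float:
    """Travel time between two adjacent grid points, per the quoted
    Langmuir (1984) parametrization: 0.72 s per horizontal meter;
    +6 s per vertical meter uphill; downhill saves 2 s per vertical
    meter on gentle slopes (<= 21.25%) but costs an extra 2 s per
    vertical meter on steep slopes."""
    t = 0.72 * horizontal_m
    if vertical_m > 0:                       # uphill
        t += 6.0 * vertical_m
    elif vertical_m < 0:                     # downhill
        drop = -vertical_m
        slope = drop / horizontal_m if horizontal_m > 0 else float("inf")
        if slope <= 0.2125:                  # gentle slope: faster
            t -= 2.0 * drop
        else:                                # steep slope: slower
            t += 2.0 * drop
    return t

# 1 km on the flat vs. 1 km with a 100 m climb:
print(langmuir_seconds(1000, 0))    # 720.0 seconds
print(langmuir_seconds(1000, 100))  # 1320.0 seconds
```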

The authors don’t seem to use this notion of optimal travel route between the two ends of a segment when they estimate the trade model, though; there, they use straight-line distance. I don’t understand why.


Written on November 30, 2017