Stesheni kumi na moja (Station Eleven)

I'm a bit late to the "social science bloggers love Station Eleven" party. Chris Blattman put it in his 2014 favorite novels list, and Jay Ulfelder shared a nice excerpt. I loved it too, so I'll try to add something new. Station Eleven is a novel about what happens after -- and just before, and during -- a flu pandemic wipes out 99% of the human population. The survivors refer to that event as the Collapse, and mostly avoid talking or thinking about the immediate aftermath, when everything was a fight for survival. But Station Eleven is not just derivative post-apocalyptica. The book avoids a garish focus on the period just after the Collapse, focusing instead on the more relatable period just as things are beginning to unravel and, much later, as bands of survivors who made it through the roughest bits are starting to rebuild. The main characters are a band of musicians and thespians trying to retain some of that cultural heritage and pass it on to the next generation, who have no memory of the world before the Collapse.

It's also a novel about loss, both personal and societal. One of my favorite passages:

...No more ball games played out under floodlights. No more porch lights with moths fluttering on summer nights. No more trains running under the surface of cities on the dazzling power of the electric third rail. No more cities. … No more Internet. No more social media, no more scrolling through the litanies of dreams and nervous hopes and photographs of lunches, cries for help and expressions of contentment and relationship-status updates with heart icons whole or broken, plans to meet up later, pleas, complaints, desires, pictures of babies dressed as bears or peppers for Halloween. No more reading and commenting on the lives of others, and in so doing, feeling slightly less alone in the room. No more avatars.

Since I was reading this novel while traveling for work in Tanzania and Zimbabwe and Liberia, I was struck by its focus on Canada and the US. Nothing wrong with this: the author is Canadian* and the presumed audience is probably North American. But I kept wondering what the Collapse would have been like elsewhere. It was global, but would it have been equally catastrophic elsewhere? Urban centers like Manhattan are ludicrously unworkable in the absence of the electricity and cars and subways and other bits of the massive, distributed, and - to casual eyes - largely invisible infrastructure working to constantly feed them with supplies and people and information.

The novel implies that these urban centers fared worse, and focuses on suburbia and rural areas, where survivors re-learn how to farm, how to make things for themselves. We see nothing of the global "periphery" where the fall from wealth might be less great, where the collective psychological trauma of losing 99 out of 100 people might dominate the loss of technology. Of course, the periphery is defined by the observer and the writer, and isn't the periphery at all to those who live in it. Maybe things would fare better, or maybe not.

Imagine the same novel, but set in Tanzania, or some other country where the majority of people are small-holder subsistence farmers. Maybe it would use the device of following two relatives, one living 'upcountry' or in 'the village' (i.e., poor rural parts) and the other living in Dar es Salaam. Relationships are established in an early chapter when the successful urban relative visits the village, or the rural relative visits the big city, and both marvel at their differences.

Then the flu hits, and things start to break down. Narrative chapters are interspersed with transcripts of SMS (text message) exchanges, demands for mPesa transfers, the realization that money doesn't matter anymore, and finally the realization that the networks aren't getting anything through anymore. Some city dwellers flee for the countryside but find themselves shunned as bearers of contagion. The urban protagonist makes her way, over the course of months or years, to the rural area where her relative once lived, hoping to find things are better there. Her belief that the village will be the same mirrors the readers' belief -- and a common trope in writing about developing countries -- that subsistence farmers today somehow live just as they did centuries or millennia ago. Bullshit, of course.

As the urbanite nears the village, her encounters reveal all the ways the modern fabric of village life was tied to wider society and technology, and has likewise broken down with the Collapse. Perhaps the power vacuum set off struggles amongst survivors and led to some new social order, one in which none of her skills are of much use. Nearing the village, she finds that the rural relative is now its leader: his fortunes have been reversed by the Collapse, just as the once-successful urbanite arrives in his village with her last shilling.

Maybe this novel already exists. Or something else using the post-apocalyptic form to explore somewhere that's not Canada or the US or Europe and not reliant on mechanized agriculture. Pointers, please, as I'd love to read it.

*originally I wrote the author was American. Oops. Apologies, Canada!

Data: big, small, and meta

When I read this New York Times piece back in August, I was in the midst of preparation and training for data collection at rural health facilities in Zambia. The Times piece profiles a group called Global Pulse that is doing good work on the 'big data' side of global health:

The efforts by Global Pulse and a growing collection of scientists at universities, companies and nonprofit groups have been given the label “Big Data for development.” It is a field of great opportunity and challenge. The goal, the scientists involved agree, is to bring real-time monitoring and prediction to development and aid programs. Projects and policies, they say, can move faster, adapt to changing circumstances and be more effective, helping to lift more communities out of poverty and even save lives.

Since I was gearing up for 'field work' (more on that here; I'll get to it soon), I was struck at the time by the very different challenges one faces at the other end of the spectrum. Call it small data? And I connected the Global Pulse profile with this, by Wayan Vota, from just a few days before:

The Sneakernet Reality of Big Data in Africa

When I hear people talking about “big data” in the developing world, I always picture the school administrator I met in Tanzania and the reality of sneakernet data transmissions processes.

The school level administrator has more data than he knows what to do with. Years and years of student grades recorded in notebooks – the hand-written on paper kind of notebooks. Each teacher records her student attendance and grades in one notebook, which the principal then records in his notebook. At the local district level, each principal’s notebook is recorded into a master dataset for that area, which is then aggregated at the regional, state, and national level in even more hand-written journals... Finally, it reaches the Minister of Education as a printed-out computer-generated report, compiled by ministerial staff from those journals that finally make it to the ministry, and are not destroyed by water, rot, insects, or just plain misplacement or loss. Note that nowhere along the way is this data digitized, and even at the ministerial level, the data isn’t necessarily deeply analyzed or shared widely....

And to be realistic, until countries invest in this basic, unsexy, and often ignored level of infrastructure, we’ll never have “big data” nor Open Data in Tanzania or anywhere else. (Read the rest here.)

Right on. And sure enough two weeks later I found myself elbow-deep in data that looked like this -- "Sneakernet" in action:

In many countries quite a lot of data -- of varying quality -- exists, but it's often formatted like the above. Optimistically, it may get used for local decisions, and eventually for high-level policy decisions when it's months or years out of date. There's a lot of hard, good work being done to improve these systems (more often by residents of low-income countries, sometimes by foreigners), but still far too little. This data is certainly primary, in the sense that it was collected on individuals, or by facilities, or about communities, but there are huge problems with quality, and with the sneakernet by which it gets back to policymakers, researchers, and (sometimes) citizens.

For the sake of quick reference, I keep a folder on my computer that has -- for each of the countries I work in -- most of the major recent ultimate sources of nationally-representative health data. All too often the only high-quality ultimate source is the most recent Demographic and Health Survey, surely one of the greatest public goods provided by the US government's aid agency. (I think I'm paraphrasing Angus Deaton here, but can't recall the source.) When I spent a summer doing epidemiology research with the New York City Department of Health and Mental Hygiene, I was struck by just how many rich data sources there were to draw on, at least compared to low-income countries. Very often there just isn't much primary data on which to build.

On the other end of the spectrum is what you might call the metadata of global health. When I think about the work the folks I know in global health -- classmates, professors, acquaintances, and occasionally though not often me -- do day to day, much of it is generating metadata. This is research or analysis derived from the primary data, and thus reliant on its quality. It's usually smart, almost always well-intentioned, and often well-packaged, but this towering edifice of effort is erected over a foundation of primary data; the metadata sometimes gives the appearance of being primary, but when you dig down, the sources often point back to those one or three ultimate data sources.

That's not to say that generating this metadata is bad: for instance, modeling impacts of policy decisions given the best available data is still the best way to sift through competing health policy priorities if you want to have the greatest impact. Or a more cynical take: the technocratic nature of global health decision-making requires that we either have this data or, in its absence, impute it. But regardless of the value of certain targeted bits of the metadata, there's the question of the overall balance of investment in primary vs. secondary-to-meta data, and my view -- somewhat ironically derived entirely from anecdotes -- is that we should be investing a lot more in the former.

One way to frame this trade-off is to ask, when considering a research project or academic institute or whatnot, whether the money spent on that project might deliver more value if it were spent instead on training data collectors and statistics offices, or on supporting primary data collection (e.g., funding household surveys) in low-income countries. I think in many cases the answer will be clear, perhaps to everyone except those directly generating the metadata.

That does not mean that none of this metadata is worthwhile. On the contrary, some of it is absolutely essential. But a lot isn't, and there are opportunity costs to any investment: a choice between investing in data collection and statistics systems in low-income countries, versus research projects where most of the money will ultimately stay in high-income countries and the causal pathway to impact is much less direct.

Looping back to the original link, one way to think of the 'big data' efforts like Global Pulse is that they're not metadata at all, but an attempt to find new sources of primary data. Because there are so few good sources of data that get funded, or that filter through the sneakernet, the hope is that mobile phone usage and search terms and whatnot can be mined to give us entirely new primary data, on which to build new pyramids of metadata, and with which to make policy decisions, skipping the sneakernet altogether. That would be pretty cool if it works out.

A more useful aid debate

Ken Opalo highlights recent entries on the great aid debate from Bill Gates, Jeff Sachs, Bill Easterly, and Chris Blattman. Much has been said on this debate, and sometimes it feels like it's hard to add anything new. But since having a monosyllabic first name seems sufficient qualification to weigh in, I will. First, this part of Ken's post resonates with me:

I think most reasonable people would agree that Sachs kind of oversold his big push idea in The End of Poverty. Or maybe this was just a result of his attempt to shock the donor world into reaching the 0.7 percent mark in contributions. In any event it is unfortunate that the debate on the relative efficacy of aid left the pages of journal articles in its current form. It would have been more helpful if the debate spilled into the public in a policy-relevant form, with questions like: under what conditions does aid make a difference? What can we do to increase the efficacy of aid? What kinds of aid should we continue and what kinds should we abolish altogether? (emphasis added)

Lee Crawfurd wrote something along these lines too: "Does Policy Work?"  Lee wrote that on Jan 10, 2013, and I jokingly said it was the best aid blog post of the year (so far). Now that 2013 has wrapped up, I'll extend that evaluation to 'best aid blog post of 2013'. It's worth sharing again:

The question "does policy work" is jarring, because we immediately realise that it makes little sense. Governments have about 20-30 different Ministries, which immediately implies at least 20-30 different areas of policy. Does which one work? We have health and education policy, infrastructure policy (roads, water, energy), trade policy, monetary policy, public financial management, employment policy, disaster response, financial sector policy, climate and environment policy, to name just a few. It makes very little sense to ask if they all collectively "work" or are "effective". Foreign aid is similar. Aid supports all of these different areas of policy....

A common concern is about the impact of aid on growth... Some aid is specifically targeted at growth - such as financing infrastructure or private sector development. But much of it is not. One of the few papers which looks at the macroeconomic impact of aid and actually bothers to disaggregate even a little the different types of aid, finds that the aid that could be considered to have growth as a target, does increase growth. It's the aid that was never intended to impact growth at all, such as humanitarian assistance, which doesn't have any impact on growth.

I like to think that most smart folks working on these issues -- and that includes both Sachs and Easterly -- would agree with the following summaries of our collective state of knowledge:

  • A lot of aid projects don't work, and some of them do harm.
  • Some aid, especially certain types of health projects, works extremely well.

The disagreement is on the balance of good and bad, so I wish -- as Ken wrote -- the debate spilled into the public sphere along those lines (which is good? which is bad? how can we get a better mix?) rather than the blanket statements both sides are driven to by the very publicness of the debate. It reminds me a bit of debates in theology: if you put a fundamentalist and Einstein in the same room, they'll both be talking about "God" but meaning very different things with the same words. (This is not a direct analogy, so don't ask who is who...)

When Sachs and Easterly talk about whether aid "works", it would be nice if we could get everyone to first agree on a definition of "aid" and "works". But much of this seems to be driven by personal animosity between Easterly and Sachs, or more broadly, by personal animosity of a lot of aid experts vs. Sachs. Why's that? I think part of the answer is that it's hard to tell when Sachs is trying to be a scientist, and when he's trying to be an advocate. He benefits from being perceived as the former, but in reality is much more the latter. Nina Munk's The Idealist -- an excellent profile of Sachs I've been meaning to review -- explores this tension at some length. The more scientifically-minded get riled up by this confusion -- rightfully, I think. At the same time, public health folks tend to love Sachs precisely because he's been a powerful advocate for some types of health aid that demonstrably work -- also rightfully, I think. There's a tension there, and it's hard to completely dismiss one side as wrong, because the world is complicated and there are many overlapping debates and conversations; academic and lay, public and private, science and advocacy.

So, back to Ken's questions that would be answered by a more useful aid debate:

  • Under what conditions does aid make a difference?
  • What can we do to increase the efficacy of aid?
  • What kinds of aid should we continue and what kinds should we abolish altogether?

Wouldn't it be amazing if the public debate were focused on these questions? Actually, something like that was done: Boston Review had a forum a while back on "Making Aid Work" with responses by Abhijit Banerjee, Angus Deaton, Howard White, Ruth Levine, and others. I think that series of questions is much more informative than another un-moderated round of Sachs vs Easterly.

Formalizing corruption: US medical system edition

Oh, corruption. It interferes with so many aspects of daily life, adding time to the simplest daily tasks, costing more money, and -- often the most frustrating aspect -- adding huge doses of uncertainty. That describes life in many low-income, high-corruption countries, leading to many a conversation with friends about comparisons with the United States and other wealthy countries. How did the US "solve" corruption? I've heard (and personally made) the argument that the US reduced corruption at least in part by formalizing it; by channeling the root of corruption, a sort of rent-seeking on a personal level, to rent-seeking on an institutional level. The US political and economic system has evolved such that some share of any wealth created is channeled into the pockets of a political and economic elite who benefit from the system and in turn reinforce it. That unproductively-channeled share of wealth is simultaneously a) probably smaller than the share of wealth lost to corruption in most developing countries, b) still large enough to head off -- along with the threat of more effective prosecution -- at least some more overt corruption, and c) still a major drain on society.

An example: Elisabeth Rosenthal profiles medical tourism in an impressive series in the New York Times. In part three of the series, an American named Michael Shopenn travels to Belgium to get a hip replacement. Why would he need to? Because health economics in the US is less a story of free markets and  more a story of political capture by medical interests, including technology and pharmaceutical companies, physicians' groups, and hospitals:

Generic or foreign-made joint implants have been kept out of the United States by trade policy, patents and an expensive Food and Drug Administration approval process that deters start-ups from entering the market. The “companies defend this turf ferociously,” said Dr. Peter M. Cram, a physician at the University of Iowa medical school who studies the costs of health care.

Though the five companies make similar models, each cultivates intense brand loyalty through financial ties to surgeons and the use of a different tool kit and operating system for the installation of its products; orthopedists typically stay with the system they learned on. The thousands of hospitals and clinics that purchase implants try to bargain for deep discounts from manufacturers, but they have limited leverage since each buys a relatively small quantity from any one company.

In addition, device makers typically require doctors’ groups and hospitals to sign nondisclosure agreements about prices, which means institutions do not know what their competitors are paying. This secrecy erodes bargaining power and has allowed a small industry of profit-taking middlemen to flourish: joint implant purchasing consultants, implant billing companies, joint brokers. There are as many as 13 layers of vendors between the physician and the patient for a hip replacement, according to Kate Willhite, a former executive director of the Manitowoc Surgery Center in Wisconsin.

If this system existed in another country we wouldn't hesitate to call it corrupt, and to note that it actively hurts consumers. It should be broken up by legislation for the public good, but instead it's protected by legislators who are lobbied by the industry and by doctors who receive kickbacks, implicit and explicit. Contrast that with the Belgian system:

His joint implant and surgery in Belgium were priced according to a different logic. Like many other countries, Belgium oversees major medical purchases, approving dozens of different types of implants from a selection of manufacturers, and determining the allowed wholesale price for each of them, for example. That price, which is published, currently averages about $3,000, depending on the model, and can be marked up by about $180 per implant. (The Belgian hospital paid about $4,000 for Mr. Shopenn’s high-end Zimmer implant at a time when American hospitals were paying an average of over $8,000 for the same model.)

“The manufacturers do not have the right to sell an implant at a higher rate,” said Philip Boussauw, director of human resources and administration at St. Rembert’s, the hospital where Mr. Shopenn had his surgery. Nonetheless, he said, there was “a lot of competition” among American joint manufacturers to work with Belgian hospitals. “I’m sure they are making money,” he added.

It's become a cliche to compare the US medical system to European ones, but those comparisons are made because it's hard to realize just how systematically corrupt -- and expensive, as a result -- the US system is without comparing it to ones that do a better job of channeling the natural profit-seeking goals of individuals and companies towards the public good. (For the history of how we got here, Paul Starr is a good place to start.)

The usual counterargument for protecting such large profit margins in the US is that they drive innovation, which is true but only to an extent. And for the implants industry that argument is much less compelling since many of the newer, "innovative" products have proved somewhere between no better and much worse in objective tests.

The Times piece is definitely worth a read. While I generally prefer the formalized corruption to the unformalized version, I'll probably share this article with friends -- in Nigeria, or Ethiopia, or wherever else the subject comes up next.

Advocates and scientists

Nina Munk has a new book out: The Idealist: Jeffrey Sachs and the Quest to End Poverty. The blurbs on Amazon are fascinating because they indicate either that the reviewers didn't actually read the book (which wouldn't be all that surprising) or that Munk's book paints a nuanced enough picture that readers can come away with very different views of what it actually proves. Here are two examples:

Amartya Sen: “Nina Munk’s book is an excellent – and moving – tribute to the vision and commitment of Jeffrey Sachs, as well as an enlightening account of how much can be achieved by reasoned determination.”

Robert Calderisi: "A powerful exposé of hubris run amok, drawing on touching accounts of real-life heroes fighting poverty on the front line."

The publisher's description seems to encompass both of those points of view: "The Idealist is the profound and moving story of what happens when the abstract theories of a brilliant, driven man meet the reality of human life." That sounds like a good read to me -- I look forward to reading it when it comes out in September.

Munk's previous reporting strikes a similar tone. For example, here's an excerpt of her 2007 Vanity Fair profile of Sachs:

Leaving the region of Dertu, sitting in the back of an ancient Land Rover, I'm reminded of a meeting I had with Simon Bland, head of Britain's Department for International Development in Kenya. Referring to the Millennium Villages Project, and to Sachs in particular, Bland laid it out for me in plain terms: "I want to say, 'What concept are you trying to prove?' Because I know that if you spend enough money on each person in a village you will change their lives. If you put in enough resources—enough foreigners, technical assistance, and money—lives change. We know that. I've been doing it for years. I've lived and worked on and managed [development] projects.

"The problem is," he added, "when you walk away, what happens?"

Someone -- I think it was Chris Blattman, but I can't find the specific post -- wondered a while back whether too much attention has been given to the Millennium Villages Project. After all, the line of thinking goes, the MVPs have really just gotten more press and aren't that different from the many other projects with even less rigorous evaluation designs. That's certainly true: when journalists and aid bloggers debate the MVPs, part of what they're debating is Sachs himself, because he's such a polarizing personality. If you really care about aid policy, and the uses of evidence in that policy, then that can all feel like an unhelpful distraction. Most aid efforts don't get book-length profiles, and the interest in Sachs' personality and persona will probably drive the interest in Munk's book.

But I also think the MVP debates have been healthy and interesting -- and ultimately deserving of most of the heat generated -- because they're about a central tension within aid and development, as well as other fields where research intersects with activism. If you think we already generally know what to do, then it makes sense to push forward with it at all costs. The naysayers who doubt you are unhelpful skeptics who are on some level ethically culpable for blocking good work. If you think the evidence is not yet in, then it makes more sense to function more like a scientist, collecting the evidence needed to make good decisions in the longer term. The naysayers opposing the scientists are then utopian advocates who throw millions at unproven projects. I've seen a similar tension within the field of public health, between those who see themselves primarily as advocates and those who see themselves as scientists, and I'm sure it exists elsewhere as well.

That is, of course, a caricature -- few people fall completely on one side of the advocates vs. scientists divide. But I think the caricature is a useful one for framing arguments. The fundamental disagreement is usually not about whether evidence should be used to inform efforts to end poverty or improve health or advance any other goal. Instead, the disagreement is often over what the current state of knowledge is. And on that note, if you harbor any doubts about where Sachs has positioned himself on that spectrum, here's the beginning of Munk's 2007 profile:

In the respected opinion of Jeffrey David Sachs.... the problem of extreme poverty can be solved. In fact, the problem can be solved "easily." "We have enough on the planet to make sure, easily, that people aren't dying of their poverty. That's the basic truth," he tells me firmly, without a doubt.

...To Sachs, the end of poverty justifies the means. By hook or by crook, relentlessly, he has done more than anyone else to move the issue of global poverty into the mainstream—to force the developed world to consider his utopian thesis: with enough focus, enough determination, and, especially, enough money, extreme poverty can finally be eradicated.

Once, when I asked what kept him going at this frenzied pace, he snapped back, "If you haven't noticed, people are dying. It's an emergency."

----

via Gabriel Demombynes.

If you're new to the Millennium Villages debate, here's some background reading: a recent piece in Foreign Policy by Paul Starobin, and some good posts by Chris Blattman (one, two, three), this gem from Owen Barder, and Michael Clemens.

The greatest country in the world

I've been in Ethiopia for six and a half months, and in that time span I have twice found myself explaining the United States' gun culture, lack of reasonable gun control laws, and gun-related political sensitivities to my colleagues and friends in the wake of a horrific mass shooting. When bad things happen in the US -- especially if they're related to some of our national moral failings that grate on me the most, e.g. guns, health care, and militarism -- I feel a sense of personal moral culpability, much stronger than when I'm living in the US. I think having to explain how terrible and terribly preventable things could happen in my society, while living somewhere else, makes me feel this way. (This is by no means because people make me feel this way; folks often go out of their way to reassure me that they don't see me as synonymous with such things.)

I think that this enhanced feeling of responsibility is actually a good thing. Why? If being abroad sometimes puts the absurdity of situations at home into starker relief, maybe it will reinforce a drive to change. All Americans should feel some level of culpability for mass shootings, because we have collectively allowed a political system driven by gun fanatics,  a media culture unintentionally but consistently glorifying mass murderers, and a horribly deficient mental health system to persist, when their persistence has such appalling consequences.

After the Colorado movie theater shooting I told colleagues here that nothing much would happen, and sadly I was right. This time I said that maybe -- just maybe -- the combination of the timing (immediately post-election) and the fact that the victims were schoolchildren will result in somewhat tighter gun laws. But attention spans are short, so action would need to be taken soon. Hopefully the fact that the WhiteHouse.gov petition on gun control already has 138,000 signatures (making it the most popular petition in the history of the website) indicates that something could well be driven through. Even if that's the case, anything that could be passed now will be just the start, and it will be a long, hard slog to see systematic changes.

As Andrew Gelman notes here, we are all part of the problem to some extent: "It’s a bit sobering, when lamenting problems with the media, to realize that we are the media too." He's talking about bloggers, but I think it extends further: every one of us that talks about gun control in the wake of a mass shooting but quickly lets it slip down our conversational and political priorities once the event fades from memory is part of the problem. I'm making a note to myself to write further about gun control and the epidemiology of violence in the future -- not just today -- because I think that entrenched problems require a conscious choice to break the cycle. In the meantime, Harvard School of Public Health provides some good places to start.

On deworming

GiveWell's Alexander Berger just posted a more in-depth blog review of the (hugely impactful) Miguel and Kremer deworming study. Here's some background: the Cochrane review, GiveWell's first response to it, and IPA's very critical response. I've been meaning to blog on this since the new Cochrane review came out, but haven't had time to do the subject justice by really digging into all the papers. So I hope you'll forgive me for just sharing the comment I left at the latest GiveWell post, as it's basically what I was going to blog anyway:

Thanks for this interesting review — I especially appreciate that the authors [Miguel and Kremer] shared the material necessary for you [GiveWell] to examine their results in more depth, and that you talk through your thought process.

However, one thing you highlighted in your post on the new Cochrane review that isn’t mentioned here, and which I thought was much more important than the doubts about this Miguel and Kremer study, was that there have been so many other studies that did not find large effects on health outcomes! I’ve been meaning to write a long blog post about this when I really have time to dig into the references, but since I’m mid-thesis I’ll disclaim that this quick comment is based on recollection of the Cochrane review and your and IPA’s previous blog posts, so forgive me if I misremember something.

The Miguel and Kremer study gets a lot of attention in part because it had big effects, and in part because it measured outcomes that many (most?) other deworming studies hadn’t measured — but it’s not as if we believe these outcomes to be completely unrelated. This is a case where what we believe the underlying causal mechanism for the social effects to be is hugely important. For the epidemiologists reading, imagine this as a DAG (a directed acyclic graph) where the mechanism is “deworming -> better health -> better school attendance and cognitive function -> long-term social/economic outcomes.” That’s at least how I assume the mechanism is hypothesized.

So while the other studies don’t measure the social outcomes, it’s harder for me to imagine how deworming could have a very large effect on school and social/economic outcomes without first having an effect on (some) health outcomes — since the social outcomes are ‘downstream’ from the health ones. Maybe different people are assuming that something else is going on — that the health and social outcomes are somehow independent, or that you just can’t measure the health outcomes as easily as the social ones, which seems backwards to me. (To me this was the missing gap in the IPA blog response to GiveWell’s criticism as well.)

So continuing to give so much attention to this study, even if it’s critical, misses what I took to be the biggest takeaway from that review — there have been a bunch of studies that showed only small effects or none at all. They were looking at health outcomes, yes, but those aren’t unrelated to the long-term development, social, and economic effects. You [GiveWell] try to get at the external validity of this study by looking for different size effects in areas with different prevalence, which is good but limited. Ultimately, if you consider all of the studies that looked at various outcomes, I think the most plausible explanation for how you could get huge (social) effects in the Miguel Kremer study while seeing little to no (health) effects in the others is not that the other studies just didn’t measure the social effects, but that the Miguel Kremer study’s external validity is questionable because of its unique study population.

(Emphasis added throughout)
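
To make the DAG point in the comment above concrete, here's a minimal sketch of the hypothesized causal chain (my own reading of the mechanism, not Miguel and Kremer's or GiveWell's specification), using networkx simply to make explicit that every path from deworming to the long-run outcomes runs through health:

```python
# A small sketch of the hypothesized causal chain as a DAG. The node names
# are my own shorthand; the point is structural, not quantitative.

import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("deworming", "better health"),
    ("better health", "school attendance & cognition"),
    ("school attendance & cognition", "long-run social/economic outcomes"),
])

paths = list(nx.all_simple_paths(g, "deworming", "long-run social/economic outcomes"))
print(paths)
# Every path runs through "better health" -- which is why large downstream
# (social/economic) effects are hard to square with the small or absent
# upstream (health) effects found in the other deworming studies.
assert all("better health" in p for p in paths)
```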

 

Someone should study this: Addis housing edition

Attention development economists and any other researchers who have an interest in urban or housing policy in low-income countries: My office in Addis has about 25 folks working in it, and we have a daily lunch pool where we pay in 400 birr a month (about 22 USD) to cover costs and all get to eat Ethiopian food for lunch every day. It's been a great way to get to know my coworkers -- my work is often more solitary: editing, writing, and analyzing data -- and an even better way to learn about a whole variety of issues in Ethiopia.

[Photo: construction in Addis Ababa]

The conversation is typically in Amharic, and my Amharic is quite limited, so I'm lucky if I can figure out the topic being discussed. [I usually know if they're talking about work because so many NGO-speak words aren't translated, for example: "amharic amharic amharic Health Systems Strengthening amharic amharic..."] But folks will of course translate things as needed. One observation is that certain topics affect their daily lives a lot, and thus come up over and over again at lunch.

One subject that has come up repeatedly is housing. Middle class folks in Addis Ababa feel the housing shortage very acutely. Based on our conversations it seems the major limitation is in getting credit to buy or build a house.

The biggest source of good housing so far has been government-constructed condominiums, for which you pay a certain (I'm not sure how much) percentage down and then make payments over the years. (The government will soon launch a new "40/60 scheme" to which many folks are looking forward, in which anyone who can make a 40% down payment on a house will get a government mortgage for the remaining 60%.)

When my coworkers first mentioned that the government will offer the next round of condominiums by a public lottery, my thought was "that will solve someone's identification problem!" A large number of people -- many thousands -- have registered for the government lottery. I believe you have to meet a certain wealth or income threshold (i.e., be able to make the down payment), but after that condo eligibility will be determined randomly. I think that -- especially if someone organizes the study prior to the lottery -- this could yield very useful results on the impact of urban housing policy.

How (and how much) do individuals and families benefit from access to better housing? Are there changes in earnings, savings, investments? Health outcomes? Children's health and educational outcomes? How does it affect political attitudes or other life choices? It could also be an opportunity to study migration between different neighborhoods, amongst many other things.
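
To make the identification point concrete, here's a minimal sketch of the kind of analysis the lottery would allow. Everything in it -- the follow-up dataset, file name, and variable names -- is hypothetical; the point is just that random assignment among registrants turns a simple comparison into a credible estimate:

```python
# A minimal sketch: with condo assignment randomized among lottery registrants,
# comparing winners and losers gives an unbiased intent-to-treat estimate.
# The data file and column names below are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical follow-up survey of lottery registrants, collected a few years
# after the draw: one row per household, with an indicator for winning.
df = pd.read_csv("addis_lottery_followup.csv")  # columns: won_lottery, savings, child_in_school, ...

# Intent-to-treat effect of winning a condo on household savings.
itt = smf.ols("savings ~ won_lottery", data=df).fit()
print(itt.summary())

# The same comparison works for any of the outcomes listed above
# (earnings, health, children's schooling), swapping the left-hand side.
```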

A Google Scholar search for Ethiopia housing lottery turns up several mentions, but (in my very quick read) no evaluations taking advantage of the randomization. (I can't access this recent article in an engineering journal, but from the abstract assume that it's talking about a different kind of evaluation.) So, someone have at it? It's just not that often that large public policy schemes are randomized.

"As it had to fail"

My favorite line from the Anti-Politics Machine is a throwaway. The author, James Ferguson, an anthropologist, describes a World Bank agricultural development program in Lesotho, and also -- through that lens -- ends up describing development programs more generally. At one point he notes that the program failed "as it had to fail" -- not really due to bad intentions, or to lack of technical expertise, or lack of funds -- but because failure was written into the program from the beginning. Depressing? Yes, but valuable. I read it in part because Chris Blattman keeps plugging it, and then shortly before leaving for Ethiopia I saw that a friend had a copy I could borrow. Somehow it didn't make it onto reading lists for any of my classes for either of my degrees, though it should be required for pretty much anyone wanting to work in another culture (or, for that matter, trying to foment change in their own). Here's Blattman's description:

People’s main assets [in Lesotho] — cattle — were dying in downturns for lack of a market to sell them on. Households on hard times couldn’t turn their cattle into cash for school fees and food. Unfortunately, the cure turned out to be worse than the disease.

It turns out that cattle were attractive investments precisely because they were hard to liquidate. With most men working away from home in South Africa, buying cattle was the best way to keep the family saving rather than spending. They were a means for men to wield power over their families from afar.

Ferguson’s point was that development organizations attempt to be apolitical at their own risk. What’s more, he argued that they are structured to remain ignorant of the historical, political and cultural context in which they operate.

And here's a brief note from Foreign Affairs:

The book comes to two main conclusions. First is that the distinctive discourse and conceptual apparatus of development experts, although good for keeping development agencies in business, screen out and ignore most of the political and historical facts that actually explain Third World poverty -- since these realities suggest that little can be accomplished by apolitical "development" interventions. Second, although enormous schemes like Thaba-Tseka generally fail to achieve their planned goals, they do have the major unplanned effect of strengthening and expanding the power of politically self-serving state bureaucracies. Particularly good is the discussion of the "bovine mystique," in which the author contrasts development experts' misinterpretation of "traditional" attitudes toward uneconomic livestock with the complex calculus of gender, cash and power in the rural Lesotho family.

The reality was that Lesotho was not really an idyllically-rural-but-poor agricultural economy, but rather a labor reserve more or less set up by and controlled by apartheid South Africa. The gulf between the actual political situation at the time and the situation as envisioned by the World Bank -- where the main problems were lack of markets and technical solutions -- was enormous. This lets Ferguson have a lot of fun showing the absurdities of Bank reports from the era, and once you realize what's going on it's quite frustrating to read how the programs turned out, and to wonder how no one saw it coming.

This contrast between rhetoric and reality is the book's greatest strength: because the situation is absurd, it illustrates Ferguson's points very well -- that aid is inherently political, and that projects which ignore that reality have their future failure baked in from the start. But that contrast is a weakness too: because the situation is extreme, you're left wondering just how representative the case of Lesotho really was (or is). The 1970s-80s era World Bank certainly makes a great buffoon (if not quite a villain) in the story, and one wonders if things aren't at least a bit better today.

Either way, this is one of the best books on development I've read, as I find myself mentally referring to it on a regular basis. Is the rhetoric I'm reading (or writing) really how it is? Is that technical, apolitical-sounding intervention really going to work? It's made me think more critically about the influence outside groups -- even seemingly benevolent, apolitical ones -- have on local politics. On the other hand, the Anti-Politics Machine does read a bit like it was adapted from an anthropology dissertation (it was); I wish it could get a new edition with more editing to make it more presentable. And a less ugly cover. But that's no excuse -- if you want to work in development or international health or any related field, it should be high on your reading list.

Why we should lie about the weather (and maybe more)

Nate Silver (who else?) has written a great piece on weather prediction -- "The Weatherman is Not a Moron" (NYT) -- that covers both the proliferation of data in weather forecasting, and why the quantity of data alone isn't enough. What intrigued me though was a section at the end about how to communicate the inevitable uncertainty in forecasts:

...Unfortunately, this cautious message can be undercut by private-sector forecasters. Catering to the demands of viewers can mean intentionally running the risk of making forecasts less accurate. For many years, the Weather Channel avoided forecasting an exact 50 percent chance of rain, which might seem wishy-washy to consumers. Instead, it rounded up to 60 or down to 40. In what may be the worst-kept secret in the business, numerous commercial weather forecasts are also biased toward forecasting more precipitation than will actually occur. (In the business, this is known as the wet bias.) For years, when the Weather Channel said there was a 20 percent chance of rain, it actually rained only about 5 percent of the time.

People don’t mind when a forecaster predicts rain and it turns out to be a nice day. But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic. “If the forecast was objective, if it has zero bias in precipitation,” Bruce Rose, a former vice president for the Weather Channel, said, “we’d probably be in trouble.”

My thought when reading this was that there are actually two different reasons why you might want to systematically adjust reported percentages (i.e., fib a bit) when trying to communicate the likelihood of bad weather.

But first, an aside on what public health folks typically talk about when they talk about communicating uncertainty: I've heard a lot (in classes, in blogs, and in Bad Science, for example) about reporting absolute risks rather than relative risks, and about avoiding other ways of communicating risks that generally mislead. What people don't usually discuss is whether the point estimates themselves should ever be adjusted; rather, we concentrate on how to best communicate whatever the actual values are.

Now, back to weather. The first reason you might want to adjust the reported probability of rain is that people are rain averse: they care more strongly about getting rained on when it wasn't predicted than vice versa. It may be perfectly reasonable for people to feel this way, and so why not cater to their desires? This is the reason described in the excerpt from Silver's article above.

Another way to describe this bias is that most people would prefer to minimize Type II Error (false negatives) at the expense of having more Type I error (false positives), at least when it comes to rain. Obviously you could take this too far -- reporting rain every single day would completely eliminate Type II error, but it would also make forecasts worthless. Likewise, with big events like hurricanes the costs of Type I errors (wholesale evacuations, cancelled conventions, etc) become much greater, so this adjustment would be more problematic as the cost of false positives increases. But generally speaking, the so-called "wet bias" of adjusting all rain prediction probabilities upwards might be a good way to increase the general satisfaction of a rain-averse general public.

The second reason one might want to adjust the reported probability of rain -- or some other event -- is that people are generally bad at understanding probabilities. Luckily though, people tend to be bad about estimating probabilities in surprisingly systematic ways! Kahneman's excellent (if too long) book Thinking, Fast and Slow covers this at length. The best summary of these biases that I could find through a quick Google search was from Lee Merkhofer Consulting:

 Studies show that people make systematic errors when estimating how likely uncertain events are. As shown in [the graph below], likely outcomes (above 40%) are typically estimated to be less probable than they really are. And, outcomes that are quite unlikely are typically estimated to be more probable than they are. Furthermore, people often behave as if extremely unlikely, but still possible outcomes have no chance whatsoever of occurring.

The graph from that link is a helpful if somewhat stylized visualization of the same biases.

In other words, people think that likely events (in the 30-99% range) are less likely to occur than they are in reality, that unlikely events (in the 1-30% range) are more likely to occur than they are in reality, and that extremely unlikely events (very close to 0%) won't happen at all.

My recollection is that these biases can be a bit different depending on whether the predicted event is bad (getting hit by lightning) or good (winning the lottery), and that the familiarity of the event also plays a role. Regardless, with something like weather, where most events are within the realm of lived experience and most of the probabilities lie within a reasonable range, the average bias could probably be measured pretty reliably.

So what do we do with this knowledge? Think about it this way: we want to increase the accuracy of communication, but there are two different points in the communications process where you can measure accuracy. You can care about how accurately the information is communicated from the source, or how well the information is received. If you care about the latter, and you know that people have systematic and thus predictable biases in perceiving the probability that something will happen, why not adjust the numbers you communicate so that the message -- as received by the audience -- is accurate?

Now, some made up numbers: Let's say the real chance of rain is 60%, as predicted by the best computer models. You might adjust that up to 70% if that's the reported risk that makes people perceive a 60% objective probability (again, see the graph above). You might then adjust that percentage up to 80% to account for rain aversion/wet bias.

Here I think it's important to distinguish between technical and popular communication channels: if you're sharing raw data about the weather or talking to a group of meteorologists or epidemiologists then you might take one approach, whereas another approach makes sense for communicating with a lay public. For folks who just tune in to the evening news to get tomorrow's weather forecast, you want the message they receive to be as close to reality as possible. If you insist on reporting the 'real' numbers, you actually draw your audience further from understanding reality than if you fudged them a bit.

The major and obvious downside to this approach is that if people know it's happening, it won't work -- or they'll be mad that you lied, even though you were only lying to better communicate the truth! One possible way of getting around this is to describe the numbers as something other than percentages: using some made-up index that sounds enough like a percentage to convince the layperson, while also being open to detailed examination by those who are interested.

For instance, we all know the heat index and wind chill aren't the same as temperature, but rather represent just how hot or cold the weather actually feels. Likewise, we could report something like "Rain Risk" or a "Rain Risk Index" that accounts for known biases in risk perception and rain aversion. The weatherman would report a Rain Risk of 80%, while the actual probability of rain is just 60%. This would give recipients more useful information, while also maintaining technical honesty and some level of transparency.
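
Here's a toy sketch of what that two-step adjustment could look like. The shape of the perception correction and the size of the wet-bias bump are assumptions made up for illustration -- stand-ins for empirically measured bias curves, not anything the Weather Channel actually does:

```python
# A toy version of the two-step adjustment: first correct for how people
# (mis)perceive probabilities, then add a wet-bias bump for rain aversion.

def reported_for_perceived(true_p):
    """Probability to report so that a typical viewer perceives roughly true_p."""
    if true_p >= 0.4:
        return min(1.0, true_p + 0.10)   # people under-weight likely events
    elif true_p >= 0.01:
        return max(0.0, true_p - 0.05)   # ...and over-weight unlikely ones
    return true_p                        # near-zero: leave alone

WET_BIAS = 0.10  # extra bump because un-forecast rain annoys people most

def rain_risk(true_p):
    """The 'Rain Risk' figure the forecaster would announce."""
    return min(1.0, reported_for_perceived(true_p) + WET_BIAS)

print(round(rain_risk(0.60), 2))  # 0.8 -- matching the made-up 60% -> 70% -> 80% example above
```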

I care a lot more about health than about the weather, but I think predicting rain is a useful device for talking about the same issues of probability perception in health for several reasons. First off, the probabilities in rain forecasting are much more within the realm of human experience than the rare probabilities that come up so often in epidemiology. Secondly, the ethical stakes feel a bit lower when writing about lying about the weather rather than, say, suggesting physicians should systematically mislead their patients, even if the crucial and ultimate aim of the adjustment is to better inform them.

I'm not saying we should walk back all the progress we've made in terms of letting patients and physicians make decisions together, rather than the latter withholding information and paternalistically making decisions for patients based on the physician's preferences rather than the patient's. (That would be silly in part because physicians share their patients' biases.) The idea here is to come up with better measures of uncertainty -- call it adjusted risk or risk indexes or weighted probabilities or whatever -- that help us bypass humans' systematic flaws in understanding uncertainty.

In short: maybe we should lie to better tell the truth. But be honest about it.

A misuse of life expectancy

Jared Diamond is going back and forth with Acemoglu and Robinson over his review of their new book, Why Nations Fail. The exchange is interesting in and of itself, but I wanted to highlight one passage from Diamond's response:

The first point of their four-point letter is that tropical medicine and agricultural science aren’t major factors shaping national differences in prosperity. But the reasons why those are indeed major factors are obvious and well known. Tropical diseases cause a skilled worker, who completes professional training by age thirty, to look forward to, on the average, just ten years of economic productivity in Zambia before dying at an average life span of around forty, but to be economically productive for thirty-five years until retiring at age sixty-five in the US, Europe, and Japan (average life span around eighty). Even while they are still alive, workers in the tropics are often sick and unable to work. Women in the tropics face big obstacles in entering the workforce, because of having to care for their sick babies, or being pregnant with or nursing babies to replace previous babies likely to die or already dead. That’s why economists other than Acemoglu and Robinson do find a significant effect of geographic factors on prosperity today, after properly controlling for the effect of institutions.

The bit I want to highlight -- the claim that a skilled worker in Zambia can look forward to just ten years of economic productivity before dying at around forty -- reflects an interpretation of what life expectancy means that is wrong, but all too common.

It's analogous to something you may have heard about ancient Rome: since life expectancy was somewhere in the 30s, the Romans who lived to be 40 or 50 or 60 were incredibly rare and extraordinary. The problem is that life expectancy -- by which we typically mean life expectancy at birth -- is heavily skewed by infant mortality, or deaths under one year of age. Once you get to age five you're generally out of the woods, compared to the super-high mortality rates common for infants (less than one year old) and children (less than five years old). While it's true that there were fewer old folks in ancient Roman society, or -- to use Diamond's example -- modern Zambian society, the difference isn't nearly as pronounced as you might think given the differences in life expectancy.

Does this matter? And if so, why? One area where it's clearly important is Diamond's usage in the passage above: examining the impact of changes in life expectancy on economic productivity. Despite the life expectancy at birth of 38 years, a Zambian male who reaches the age of thirty does not just have eight years of life expectancy left -- it's actually 23 years!

Here it's helpful to look at life tables, which show mortality and life expectancy at different intervals throughout the lifespan. This WHO paper by Alan Lopez et al. (PDF) examining mortality between 1990 and 1999 in 191 countries provides some nice data: page 253 is a life table for Zambia in 1999. We see that males have a life expectancy at birth of just 38.01 years, versus 38.96 for females (one of the lowest in the world at that time). If you look at that single number you might conclude, like Diamond, that a 30-year-old worker only has ~10 years of life left. But the life expectancy for those males remaining alive at age 30 (64.2% of the original birth cohort remains alive at this age) is actually 22.65 years. Similarly, the 18% of Zambians who reach age 65, retirement age in the US, can expect to live an additional 11.8 years, despite already having lived 27 years past the life expectancy at birth.
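
For the mechanics, here's a minimal sketch of how remaining life expectancy is computed from an abridged life table. The numbers are made up (roughly Zambia-like), not the WHO figures themselves; the point is the formula, e_x = T_x / l_x:

```python
# A minimal sketch of computing remaining life expectancy from an abridged
# life table. The numbers are made up for illustration, not WHO estimates.

ages = [0, 1, 5, 15, 30, 45, 60, 75]             # start of each age interval
lx   = [100000, 90000, 83000, 80000, 64200,      # survivors alive at exact age x
        45000, 25000, 9000]
nLx  = [93000, 346000, 815000, 1080000, 820000,  # person-years lived in each interval
        525000, 255000, 63000]                   # (last interval is open-ended, 75+)

def life_expectancy_at(age):
    """Expected remaining years of life for someone alive at exact `age`."""
    i = ages.index(age)
    Tx = sum(nLx[i:])   # total person-years lived above age x
    return Tx / lx[i]   # e_x = T_x / l_x

print(round(life_expectancy_at(0), 1))   # life expectancy at birth: ~40 years
print(round(life_expectancy_at(30), 1))  # remaining years at age 30: ~26, not 40 - 30 = 10
```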

These numbers are still, of course, dreadful -- there's room for decreasing mortality at all stages of the lifespan. Diamond is correct in the sense that low life expectancy results in a much smaller economically active population. But he's incorrect when he estimates much more drastic reductions in the productive years that workers can expect once they reach their 20s, 30s, and 40s.

----

[Some notes: 1. The figures might be different if you limit it to "skilled workers" who aren't fully trained until age 30, as Diamond does; 2. I've also assumed that Diamond is working from general life expectancy, which was around 40 years, rather than a particular study that showed 10 years of life expectancy at age 30 for some subset of skilled workers, possibly due to high HIV prevalence -- that seems possible but unlikely; 3. In these Zambia estimates, about 10% of males die before reaching one year of age, and over 17% before reaching five years of age. By contrast, between the ages of 15-20 only 0.6% of surviving males die, and you don't see mortality rates higher than the under-5 ones until above age 85!; and 4. Zambia is an unusual case because much of the poor life expectancy there is due to very high HIV/AIDS prevalence and mortality -- which actually does affect adult mortality rates and not just infant and child mortality rates. Despite this caveat, it's still true that Diamond's interpretation is off.]

The great quant race

My Monday link round-up included this Big Think piece asking eight young economists about the future of their field. But, I wanted to highlight the response from Justin Wolfers:

Economics is in the midst of a massive and radical change.  It used to be that we had little data, and no computing power, so the role of economic theory was to “fill in” for where facts were missing.  Today, every interaction we have in our lives leaves behind a trail of data.  Whatever question you are interested in answering, the data to analyze it exists on someone’s hard drive, somewhere.  This background informs how I think about the future of economics.

Specifically, the tools of economics will continue to evolve and become more empirical.  Economic theory will become a tool we use to structure our investigation of the data.  Equally, economics is not the only social science engaged in this race: our friends in political science and sociology use similar tools; computer scientists are grappling with “big data” and machine learning; and statisticians are developing new tools.  Whichever field adapts best will win.  I think it will be economics.  And so economists will continue to broaden the substantive areas we study.  Since Gary Becker, we have been comfortable looking beyond the purely pecuniary domain, and I expect this trend towards cross-disciplinary work to continue.

I think it's broadly true that economics will become more empirical, and that this is a good thing, but I'm not convinced economics will "win" the race. This tracks somewhat with thoughts from Marc Bellemare that I've linked to before: his post on "Methodological convergence in the social sciences" is about the rise of mathematical formalism in social sciences other than economics. That development complements the rise of empirical methods: they're distinct trends, but both are only possible because of the increasing mathematical, statistical, and coding competency of researchers in many fields. And I think the language of convergence describes what will happen (and what is already happening) better than the language of a "race."

We've already seen an increase in RCTs (developed in medicine and epidemiology) in economics and political science. The decades ahead will (hopefully) see more routinely serious analysis of observational data in epidemiology and other fields -- serious in the sense of being more careful about causal inference -- and advanced statistical techniques and machine learning methods will become commonplace across all fields as researchers deal with massive, complex longitudinal datasets gleaned not just from surveys but increasingly from everyday data collection.

Economists have a head start in that their starting pool of talent is generally more mathematically competent than other social sciences' incoming PhD classes. But, switching back to the "race" terminology, economics will only "win" if -- as Wolfers speculates will happen -- it can leverage theory as a tool for structuring investigation of the data. My rough impression is that economic theory does play this role sometimes, but at other times it has held empirical investigation back, perhaps through publication bias against empirical results that don't fit the theory (the minimum wage literature is one example), and possibly more broadly through a general closing-off of routes of investigation that would not occur to someone already trained in economic theory.

Regardless, I get the impression that if you want to be a cutting-edge researcher in any social science you should be beefing up not only your mathematical and statistical training, but also your coding practice.

Update: Stevenson and Wolfers expand their thoughts in this excellent Bloomberg piece. And more at Freakonomics here.

Aid, paternalism, and skepticism

Bill Easterly, the ex-blogger who just can't stop, writes about a conversation he had with GiveWell, a charity reviewer/giving guide that relies heavily on rigorous evidence to pick programs to invest in. I've been meaning to write about GiveWell's approach -- which I generally think is excellent. Easterly, of course, is an aid skeptic in general and a critic of planned, technocratic solutions in particular. Here's an excerpt from his notes on his conversation with GiveWell:

...a lot of things that people think will benefit poor people (such as improved cookstoves to reduce indoor smoke, deworming drugs, bed nets and water purification tablets) {are things} that poor people are unwilling to buy for even a few pennies. The philanthropy community’s answer to this is “we have to give them away for free because otherwise the take-up rates will drop.” The philosophy behind this is that poor people are irrational. That could be the right answer, but I think that we should do more research on the topic. Another explanation is that the people do know what they’re doing and that they rationally do not want what aid givers are offering. This is a message that people in the aid world are not getting.

Later, in the full transcript, he adds this:

We should try harder to figure out why people don’t buy health goods, instead of jumping to the conclusion that they are irrational.

Also:

It's easy to catch people doing irrational things. But it's remarkable how fast and unconsciously people get things right, solving really complex problems at lightning speed.

I'm with Easterly, up to a point: aid and development institutions need much better feedback loops, but are unlikely to develop them for reasons rooted in their nature and funding. The examples of bad aid he cites are often horrendous. But I think this critique is limited, especially on health, where the RCTs and all other sorts of evidence really do show that we can have massive impact -- reducing suffering and death on an epic scale -- with known interventions. [Also, a caution: the notes above are just notes and may have been worded differently if they were a polished, final product -- but I think they're still revealing.]

Elsewhere Easterly has been more positive about the likelihood of benefits from health aid/programs in particular, so I find it quite curious that his examples above of things that poor people don't always price rationally are all health-related. Instead, in the excerpts above he falls back on that great foundational argument of economists: if people are rational, why have all this top-down institutional interference? Well, I couldn't help contrasting that argument with this quote highlighted by another economist, Tyler Cowen, at Marginal Revolution:

Just half of those given a prescription to prevent heart disease actually adhere to refilling their medications, researchers find in the Journal of American Medicine. That lack of compliance, they estimate, results in 113,000 deaths annually.

Let that sink in for a moment. Residents of a wealthy country, the United States, do something very, very stupid: the RCTs show that taking these medicines will help them live longer, yet they fail to overcome the modest barriers involved in actually taking them. As a consequence they die by the hundred thousand every single year. Humans may make remarkably fast, correct unconscious decisions in some spheres, sure, but it's hard to look at this result and see any way in which it makes much sense.

Now think about inserting Easterly's argument against paternalism in philanthropy (he doesn't specifically call it that here, but has done so elsewhere): if people in the US really want to live, why don't they take these medicines? Who are we to say they're irrational? That's one answer, but maybe we don't understand their preferences and should avoid top-down solutions until we have more research.

Reductio ad absurdum? Maybe. On the one hand, we do need more research on many things, including medication uptake in high- and low-income countries. On the other hand, aid skepticism that goes far enough to oppose proven health interventions just because people don't always value those interventions rationally lines up rather well with the anti-paternalism-above-all streak in conservatism that opposes government intervention in pretty much every area. Maybe it's worth trying out some nudge-y policies (libertarian paternalism, if you will) to encourage people to take their medicine, or requiring people to have health insurance they would not choose to buy on their own.

Do you want to live longer? I bet you do, and it's safe to assume that people in low-income countries do as well. Do you always do exactly what will help you do so? Of course not: observe the obesity pandemic. Do poor people really want to suffer from worms or have their children die from diarrhea? Again, of course not. While poor people in low-income countries aren't always willing to invest a lot of time or pay a lot of money for things that would clearly help them stay alive for longer, that shouldn't be surprising to us. Why? Because the exact same thing is true of rich people in wealthy countries.

People everywhere -- rich and poor -- make dumb decisions all the time, often because those decisions are easier in the moment, thanks to our many irrational cognitive and behavioral tics. Those seemingly dumb decisions usually reveal the non-optimal decision-making environments in which we live; you'd think we could overcome those environments to choose interventions that are very clearly beneficial, but we don't always. The result is that sometimes people in low-income countries don't pay out of pocket for deworming medicine or bednets, and sometimes people in high-income countries don't take their medicine -- different sides of the same coin.

Now, to a more general discussion of aid skepticism: I agree with Easterly (in the same post) that aid skeptics are a "feature of the system" that ultimately make it more robust. But it's an iterative process that is often frustrating in the moment for those who are implementing or advocating for specific programs (in my case, health) because we see the skeptics as going too far. I'm probably one of the more skeptical implementers out there -- I think the majority of aid programs probably do more harm than good, and chose to work in health in part because I think that is less true in this sector than in others. I like to think that I apply just the right dose of skepticism to aid skepticism itself, wringing out a bit of cynicism to leave the practical core.

I also think that there are clear wins, supported by the evidence, especially in health, and thus that Easterly goes too far here. Why does he? Because his aid skepticism isn't simply pragmatic, but also rooted in an ideological opposition to all top-down programs. That's a nice way to put it, one that I think he might even agree with. But ultimately that leads to a place where you end up lumping things together that are not the same, and I'll argue that that does some harm. Here are two examples of aid, both more or less from Easterly's post:

  • Giving away medicines or bednets free, because otherwise people don't choose to invest in them; and,
  • A World Bank project in Uganda that "ended up burning down farmers’ homes and crops and driving the farmers off the land."

These are both, in one sense, paternalistic, top-down programs, because they are based on the assumption that sometimes people don't choose to do what is best for themselves. But are they the same otherwise? I'd argue no. One might argue that they come from the same place, and that an institution that funds the first will inevitably mess up and do the latter -- but I don't buy that strong form of aid skepticism. And being able to lump the apparently good program and the obviously bad one together is what makes Easterly's rhetorical stance powerful.

If you so desire, you could label these two approaches as weak coercion and strong coercion. They are both coercive in the sense that they reshape the situations in which people live to help achieve an outcome that someone -- a planner, if you will -- has decided is better. All philanthropy and much public policy is coercive in this sense, and those who are ideologically opposed to it have a hard time seeing the difference. But to many of us, it's really only the latter, obvious harm that we dislike, whereas free medicines don't seem all that bad. I think that's why aid skeptics like Easterly group these two together, because they know we'll be repulsed by the strong form. But when they argue that all these policies are ultimately the same because they ignore people's preferences (as demonstrated by their willingness to pay for health goods, for example), the argument doesn't sit right with a broader audience. And then ultimately it gets ignored, because these things only really look the same if you look at them through certain ideological lenses.

That's why I wish Easterly would take a more pragmatic approach to aid skepticism, one that harps on the truly coercive programs without lumping them in with the mildly paternalistic ones. Condemning the truly bad things is very necessary, and folks "on the inside" of the aid-industrial complex aren't generally well-positioned to make those arguments publicly. However, I think people sometimes need a bit of the latter sort of policy -- the mildly paternalistic kind, like giving away medicines and nudging people's behavior -- in high- and low-income countries alike. Why? Because we're generally the same everywhere, doing what's easiest in a given situation rather than what we might choose were the circumstances different. Having skeptics on the outside where they can rail against wrongs is incredibly important, but they must also be careful to yell at the right things lest they be ignored altogether by those who don't share their ideological priors.

Mimicking success

If you don't know what works, there can be an understandable temptation to try to create a picture that more closely resembles things that work. In some of his presentations on the dire state of student learning around the world, Lant Pritchett invokes the zoological concept of isomorphic mimicry: the adoption of the camouflage of organizational forms that are successful elsewhere to hide their actual dysfunction. (Think, for example, of a harmless snake that has the same size and coloring as a very venomous snake -- potential predators might not be able to tell the difference, and so they assume both have the same deadly qualities.) For our illustrative purposes here, this could mean in practice that some leaders believe that, since good schools in advanced countries have lots of computers, it will follow that, if computers are put into poor schools, they will look more like the good schools. The hope is that, in the process, the poor schools will somehow (magically?) become good, or at least better than they previously were. Such inclinations can nicely complement the "edifice complex" of certain political leaders who wish to leave a lasting, tangible, physical legacy of their benevolent rule. Where this once meant a gleaming monument soaring towards the heavens, in the 21st century this can mean rows of shiny new computers in shiny new computer classrooms.

That's from this EduTech post by Michael Trucano. It's about the recent evaluations showing no impact from the One Laptop per Child (OLPC) program, but I think the broader idea can be applied to health programs as well. For a moment let's apply it to interventions designed to prevent maternal mortality. Maternal mortality is notoriously hard to measure because it is -- in the statistical sense -- quite rare. While many 'rates' (which are often not actual rates, but that's another story) in public health are expressed with denominators of 1,000 (live births, for example), maternal mortality uses a denominator of 100,000 to make the numerators a similar order of magnitude.

That means that you can rarely measure maternal mortality directly -- even with huge sample sizes you get massive confidence intervals that make it difficult to say whether things are getting worse, staying the same, or improving. Instead we typically measure indirect things, like the coverage of interventions that have been shown (in more rigorous studies) to reduce maternal morbidity or mortality. And sometimes we measure health systems things that have been shown to affect coverage of interventions... and so forth. The worry is that at some point you're measuring the sort of things that can be improved -- at least superficially -- without having any real impact.
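To see just how bad the direct-measurement problem is, here's a back-of-the-envelope sketch in Python. I'm assuming a hypothetical "true" maternal mortality ratio of 500 per 100,000 live births and using a crude normal approximation to the Poisson interval -- the numbers are made up, but the interval widths are the point:

```python
import math

# Back-of-the-envelope: why direct measurement of maternal mortality needs
# enormous samples. Assume a hypothetical "true" maternal mortality ratio of
# 500 per 100,000 live births; the expected number of deaths observed in a
# survey is tiny, so the confidence interval around the estimate is huge.
# (Crude normal approximation to the Poisson interval on the death count.)
true_mmr = 500 / 100_000

for n_births in (1_000, 10_000, 100_000):
    expected_deaths = true_mmr * n_births
    half_width = 1.96 * math.sqrt(expected_deaths)
    lo = max(expected_deaths - half_width, 0) / n_births * 100_000
    hi = (expected_deaths + half_width) / n_births * 100_000
    print(f"{n_births:>7,} births: ~{expected_deaths:5.1f} deaths expected, "
          f"95% CI roughly {lo:3.0f} to {hi:3.0f} per 100,000 live births")
```

Even a survey capturing 10,000 live births -- already a big, expensive undertaking -- gives you an interval from roughly 360 to 640 per 100,000, which is far too wide to detect any plausible year-to-year change.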

All that to say: 1) it's important to measure the right thing, 2) determining what that 'right thing' is will always be difficult, and 3) it's good to step back every now and then and think about whether the thing you're funding or promoting or evaluating is really the thing you care about or if you're just measuring "organizational forms" that camouflage the thing you care about.

(Recent blog coverage of the OLPC evaluations here and here.)

Addis taxi economics

A is in his early 20s, and he's my go-to taxi driver. He speaks good conversational English, which he picked up in part through being befriended by a Canadian couple who lived in Ethiopia for a while. Addis traffic is crazy but a bit more forgiving than some cities I've seen -- there don't seem to be many real traffic rules, but there's more deference to other drivers. "A, you drive like a pro," my friend says. "How long have you been driving?" "Oh, just six months!" (We gulp.) In Addis "taxi" is used to refer to both ancient minibuses that drive set routes throughout the city and to traditional blue-and-white cars -- often ancient-er -- that will take you wherever you want to go. (Google Images of Addis taxis here.) A's car is the latter type, an old model that breaks down often and has one window handle you have to pass around to roll down each window.

Minibuses charge a flat rate on pre-specified routes, usually just a few Birr (ie, less than $0.20 US), but the personal taxis can charge much more. So having a few reliable drivers' cell numbers is helpful because the prospect of your continued business helps ensure that you'll get a better price for each ride.

Regarding taxis more generally: always negotiate a fare before you get in. Depending on the mood of the driver, current traffic and road construction, and the evident wealth, race, or nationality of the prospective passenger, the prices quoted will vary widely. I was once quoted 60 Birr and 150 Birr as starting prices ($3.50 and $8.80 US) by two drivers standing right next to each other!

Almost all of the taxi business seems to come from internationals and upper-class Ethiopians. Thus, taxis often congregate around the neighborhoods, hotels, and restaurants frequented by these groups. You'll also get quoted a higher starting price if you're seen coming out of a nice hotel than if you pick a cab just around the corner.

Starting prices definitely differ by race as well. (Here I cite conversations with Chinese-American and Bengali-American friends living in Addis.) Drivers will generally assume you're from America (if you're Caucasian), China (if you're East Asian), and India (if you're South Asian) and charge accordingly. White people get the highest starting prices, whereas if they assume you're Chinese or Indian the starting price will be about 70% of the white price. This is, of course, entirely anecdotal, so econ PhD students take note: there's some fascinating research to be done on differential pricing of initial and final fares for internationals living in Addis. In economics this differential pricing is called price discrimination (which can actually be good for consumers as it allows producers to provide services to a broader range of people, who often have different preferences and ability to pay).

A doesn't own his taxi, and says that most drivers don't either. Instead, he rents/leases his from a man who owns many taxis. That guy made enough money ("he is rich now!") that he now goes to Dubai to buy other cars to import into Ethiopia. (Dubai is the go-to place for importing many things here.) A pays the owner a flat rate to have the taxi for a 10-day period, with more or less automatic renewals as long as he's doing well enough to keep paying the fee. If he gets sick or wants to take a day off he has to pay that day's rental fee out of earnings from another day, so A gets up at 6 am and drives until after midnight. Seven days a week.

A is only six months into the job, but he's already looking for the next gig. He aspires to work as a tour guide -- better pay and better hours, he says. And, I think, less risk of injury: almost all the taxis in Addis are from an era before airbags and seatbelts became commonplace. I think A would be a great tour guide -- I hope it works out.

Ethiopia bleg

Bleg: n. An entry in a blog requesting information or contributions. (via Wiktionary)

Finals are over, and I just have a few things to finish up before moving to Addis Ababa, Ethiopia on June 1. I'll be there for almost eight months, working as a monitoring and evaluation intern on a large health project; this work will fulfill the internship requirements for my MPA and MSPH degrees, and then I'll have just one semester left at Princeton before graduating. After two years of "book-learning" I'm quite excited to get to apply some of what I've been learning.

One thing I learned from doing (too many?) short stints abroad is that it's easy to show up with good intentions and get in the way; I'm hopeful that eight months is long enough that I can be a net benefit to the team I'll be working with, rather than a drain as I get up to speed. I plan to get an Amharic tutor after I arrive -- unfortunately my internship came together recently enough that I wasn't able to plan ahead and study the language before going.

I'm especially excited to live in Ethiopia. I haven't been before -- this will be my first visit to East Africa and the Horn of Africa. I'll mostly be in Addis, but should also spend some time in the rural areas where the project is being implemented. I've already talked with several friends who briefly lived in Addis to get tips on what to read, what to do, who to meet, and what to pack. That said, I'm always open to more suggestions.

So, I'll share what I've already read or definitely plan to read, and let you help fill in the gaps. Do you have book recommendations? Web or blog links? RSS suggestions? What-to-eat (or not eat) tips? Here's what I've dug up so far:

  • Owen Barder has several informative pages on living and working in Ethiopia here.
  • Chris Blattman's post on What to Read About Ethiopia has lots of tips, some of which I draw on below. His advice for working in a developing country is also helpful, along with lists of what to pack (parts one and two), though they're obviously not tailored to life in Addis. Blattman also links to Stefan Dercon's page with extensive readings on Ethiopian agriculture, and helpfully organizes relevant posts under tags, including posts tagged Ethiopia.
  • As for a general history, I've started Harold Marcus' academic History of Ethiopia, and it's good so far.
  • Books that have gotten multiple recommendations from friends -- and thus got bumped to the top of my list -- include The Emperor, Cutting for Stone, Chains of Heaven, and The Sign and the Seal. Other books I've seen mentioned here and there include Sweetness in the Belly, Waugh in Abyssinia, Notes from the Hyena's Belly, Scoop, and A Year in the Death of Africa. If you rave about one of these enough it might move higher up the priority list. But I'm sure there are others worth reading too.
  • For regular information flow I have a Google Alert for Ethiopia, the RSS feed for AllAfrica.com's Ethiopia page, and two blogs found so far:  Addis Journal and Expat in Addis. (Blog recommendations welcome, especially more by Ethiopians.) There's also a Google group called Addis Diplo List.
  • One of my favorite novels is The Beautiful Things That Heaven Bears -- the story of an Ethiopian immigrant in Washington, DC's Logan Circle neighborhood in the 1980s. It's as much about gentrification as it is about the immigrant experience, and I first read it as a new arrival in DC's Petworth neighborhood -- which is in some ways at a similar 'stage' of gentrification to Logan Circle in the 80s.
  • I've started How to Work in Someone Else's Country, which is aimed more at short-term consultants but has been helpful so far.
  • Also not specific to Ethiopia, but I'm finally getting around to reading the much-recommended Anti-Politics Machine, on the development industry in Lesotho, and it seems relevant.

Let me know what I've missed in the comments. And happy 200th blog post to me.

(Note: links to books are Amazon Affiliates links, which means I get a tiny cut of the sales value if you buy something after clicking a link.)

Stats lingo in econometrics and epidemiology

Last week I came across an article I wish I'd found a year or two ago: "Glossary for econometrics and epidemiology" (PDF from JSTOR, ungated version here) by Gunasekara, Carter, and Blakely. Statistics is to some extent a common language for the social sciences, but there are also big variations in language that can cause problems when students and scholars try to read literature from outside their fields. I first learned epidemiology and biostatistics at a school of public health, and now this year I'm taking econometrics from an economist, as well as other classes that draw heavily on the economics literature.

Friends in my economics-centered program have asked me "what's biostatistics?" Likewise, public health friends have asked "what's econometrics?" (or just commented that it's a silly name). In reality both fields use many of the same techniques with different language and emphases. The Gunasekara, Carter, and Blakely glossary linked above covers the following terms, amongst others:

  • confounding
  • endogeneity and endogenous variables
  • exogenous variables
  • simultaneity, social drift, social selection, and reverse causality
  • instrumental variables
  • intermediate or mediating variables
  • multicollinearity
  • omitted variable bias
  • unobserved heterogeneity

If you've only studied econometrics or biostatistics, chances are at least some of these terms will be new to you, even though most have roughly equivalent forms in the other field.

Outside of differing language, another difference is the frequency with which techniques are used. For instance, instrumental variables seem (to me) to be under-used in public health / epidemiology applications. I took four terms of biostatistics at Johns Hopkins and don't recall instrumental variables being mentioned even once! On the other hand, economists only recently discovered randomized trials (though they're now much more widely used).

But even within a given statistical technique there are important differences. You might think that all social scientists doing, say, multiple linear regression on observational data, or critiquing the results of randomized controlled trials, would use the same language. In my experience they not only use different vocabulary for the same things, they also emphasize different things. About a third to half of my epidemiology coursework involved establishing causal models (often with directed acyclic graphs) in order to understand which confounding variables to control for in a regression, whereas in econometrics we (very!) briefly discussed how to decide which covariates might cause omitted variable bias. These discussions were basically about the same thing, but they differed in language and in emphasis.
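To show how close the two framings are, here's a toy simulation (my own, not from the glossary) in which a variable z both drives the exposure x and independently affects the outcome y. An epidemiologist would call z a confounder; an econometrician would say that leaving it out of the regression causes omitted variable bias. Same problem, same fix:

```python
import numpy as np

# Toy simulation: "confounding" (epidemiology) and "omitted variable bias"
# (econometrics) are the same problem. All numbers are made up for illustration.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # confounder / omitted variable
x = 0.8 * z + rng.normal(size=n)              # exposure partly driven by z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)    # true causal effect of x on y is 2.0

# Regress y on x alone (z omitted): the estimate absorbs z's effect and is biased
X_short = np.column_stack([np.ones(n), x])
b_short = np.linalg.lstsq(X_short, y, rcond=None)[0]

# Regress y on x and z (adjusting for the confounder): roughly unbiased
X_long = np.column_stack([np.ones(n), x, z])
b_long = np.linalg.lstsq(X_long, y, rcond=None)[0]

print(f"coefficient on x, omitting z:      {b_short[1]:.2f}  (biased)")
print(f"coefficient on x, adjusting for z: {b_long[1]:.2f}  (close to the true 2.0)")
```

With these made-up parameters, the regression that omits z reports a coefficient near 3.5 when the true effect is 2.0; adding z to the model recovers roughly the right answer.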

I think understanding how and why researchers from different fields talk about things differently also tells you something about the sociology and motivations of each field. This is all related to what Marc Bellemare calls the ongoing "methodological convergence in the social sciences." As research becomes more interdisciplinary -- and as any applications of research are increasingly likely to require interdisciplinary knowledge -- understanding how researchers trained in different academic schools think and talk will only become more important.

Facebook's brilliantly self-interested organ donation move

How can social media have a big impact on public health? Here's one example: Facebook just introduced a feature that allows users to announce their status as organ donors, and to tell the story of when they decided to sign up as a donor. They're -- rightly, I think -- getting tons of good press from it. Here's NPR for example:

Starting today, the social media giant is letting you add your organ-donation status to your timeline. And, if you'd like to become an organ donor, Facebook will direct you to a registry.

Patients and transplant surgeons are eager for you to try it out.

Nearly 114,000 people in this country are waiting for organs, according to the United Network for Organ Sharing. But there simply aren't enough organs to go around.

It's an awesome idea. Far too few Americans are organ donors, so anything that boosts sign-up rates is welcome. As Ezra Klein notes, organ donation rates would be much higher if we simply had people opt out of donating, rather than opt in, but that's another story. (And another aside: I hope they alerted some smart people beforehand to help them rigorously measure the impact of this shift!)

Call me a cynic, but I think the story of why Facebook chose to do this -- and in the way they did it -- is more interesting. Yes, there's altruism, but Facebook is a business above all. Maybe they're just trying to cultivate that Google ethos of "we sometimes spend lots of money on far-sighted things just to make the world a better place." Facebook will certainly garner lots of public goodwill from this.

But I think, even more importantly, Facebook gets magnificent cover for introducing new modules on health/wellness. Check out the screenshot from their newsroom post on the new features:

That's right -- in the new Health & Wellness section you can enter not only whether you're an organ donor, but also these categories: "Overcame an Illness," "Quit a Habit," "New Eating Habits," "Weight Loss," "Glasses, Contacts, Others," and "Broken Bone."

All life events some people may want to share, of course. But Facebook makes money off of advertising, and just think of how much money Americans spend on weight loss, or on trying to quit smoking (or more usually, continuing it), or on glasses and contacts. Then think how much more advertisers will pay to show ads to segments of the billions of Facebook users who have shared the fact that they're actively trying to lose weight.

Maybe Facebook has seen this sort of health data as a major growth area for some time, but was wary of introducing such features in the wrong way. On any other news day the introduction of these features would have triggered a new outbreak of the "Facebook feature prompt privacy outcry" and "Why does Facebook need your health data?" stories. Sure, we'll get some of those this time, but I think any backlash will pale in comparison to the initial PR bump.

I don't think there's necessarily anything wrong with the move, and I certainly welcome any boost in organ donor registration. It may just be that Facebook's business interest in inducing us to share more of our personal information happens to coincide happily with a badly needed public good. Either way, the execution is brilliant, because so far I've mostly seen news stories talking about how great organ donation is. And I just updated my Facebook status.

Obesity in the US

One of my classmates whose primary interest is not health policy posted this graph on Facebook, saying "This is stunning... so much so in fact that I'm a bit skeptical of its accuracy." The graph compares obesity rates by state in 1994 vs. 2008, and unfortunately it is both terrifying and accurate. (I can't find the original source of this particular infographic, but the data is the same as on this CDC page.)

I think those of us who study or work in public health have seen variations on these graphs so many times that they've lost some of their shock value. But this truly is an incredible shift in population health in a frighteningly short period of time. In 1994 every state had an adult population that was less than 20% obese, and many were less than 15% obese. A mere 14 years later, Colorado is the only state under 20%, and quite a few have rates over 30% -- levels that were completely unheard of before.

I did a quick literature search, trying to understand what causal factors might be responsible for such a rapid shift. It's a huge and challenging question, so maybe it should be unsurprising that I didn't find an article that really stood out as the best. Still, here are three articles that I found helpful:

1. Specifically looking at childhood obesity in the US (which is different from the rates highlighted in the map above, but related): "Childhood Obesity: Trends and Potential Causes" by Anderson and Butcher (JStor PDF, ungated PDF). Their intro:

The increase in childhood obesity over the past several decades, together with the associated health problems and costs, is raising grave concern among health care professionals, policy experts, children's advocates, and parents. Patricia Anderson and Kristin Butcher document trends in children's obesity and examine the possible underlying causes of the obesity epidemic.

They begin by reviewing research on energy intake, energy expenditure, and "energy balance," noting that children who eat more "empty calories" and expend fewer calories through physical activity are more likely to be obese than other children. Next they ask what has changed in children's environment over the past three decades to upset this energy balance equation. In particular, they examine changes in the food market, in the built environment, in schools and child care settings, and in the role of parents -- paying attention to the timing of these changes.

Among the changes that affect children's energy intake are the increasing availability of energy dense, high-calorie foods and drinks through schools. Changes in the family, particularly increasing dual-career or single-parent working families, may also have increased demand for food away from home or pre-prepared foods. A host of factors have also contributed to reductions in energy expenditure. In particular, children today seem less likely to walk to school and to be traveling more in cars than they were during the early 1970s, perhaps because of changes in the built environment. Finally, children spend more time viewing television and using computers.

Anderson and Butcher find no one factor that has led to increases in children's obesity. Rather, many complementary changes have simultaneously increased children's energy intake and decreased their energy expenditure. The challenge in formulating policies to address children's obesity is to learn how best to change the environment that affects children's energy balance.

2. On global trends: "The global obesity pandemic: shaped by global drivers and local environments" by Swinburn et al. (Here's the PDF from Science Direct and an ungated PDF for those not at universities.) Summary:

The simultaneous increases in obesity in almost all countries seem to be driven mainly by changes in the global food system, which is producing more processed, affordable, and effectively marketed food than ever before. This passive overconsumption of energy leading to obesity is a predictable outcome of market economies predicated on consumption-based growth. The global food system drivers interact with local environmental factors to create a wide variation in obesity prevalence between populations.

Within populations, the interactions between environmental and individual factors, including genetic makeup, explain variability in body size between individuals. However, even with this individual variation, the epidemic has predictable patterns in subpopulations. In low-income countries, obesity mostly affects middle-aged adults (especially women) from wealthy, urban environments; whereas in high-income countries it affects both sexes and all ages, but is disproportionately greater in disadvantaged groups.

Unlike other major causes of preventable death and disability, such as tobacco use, injuries, and infectious diseases, there are no exemplar populations in which the obesity epidemic has been reversed by public health measures. This absence increases the urgency for evidence-creating policy action, with a priority on reduction of the supply-side drivers.

3. Finally, on methodological differences and where the trends are heading: "Obesity Prevalence in the United States — Up, Down, or Sideways?" (NEJM, ungated PDF). Evidently there's some debate over whether rates are going up or have stabilized in the last few years, because different data sources say different things. Generally the NHANES data (in which people are actually measured, rather than reporting their height and weight) is the best available (and that's what the maps above are made from). An excerpt:

One key reason for discrepancies among the estimates is a simple difference in data-collection methods. The most frequently quoted data sources are the NHANES studies of adults and children, the BRFSS for adults, and the CDC's Youth Risk Behavior Survey (YRBS)4 for high- school students. Although sampling strategies, response rates, age discrepancies, and the wording of survey questions may account for some variability, a major factor is that in calculating the BMI, the BRFSS and YRBS rely on respondents' self-reported heights and weights, whereas the NHANES collects measured (i.e., actual) heights and weights each year, albeit from a considerably smaller sample of the population. Since people often claim to be taller than they are and to weigh less than they actually do, we should not be surprised that obesity prevalence figures based on self-reported heights and weights are considerably lower than those based on measured data.
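A toy example (my own numbers, purely hypothetical) makes the self-report problem obvious: add an inch of height and shave a few pounds off the weight, and a person just over the obesity threshold (BMI of 30) drops below it:

```python
# Toy illustration of why self-reported heights and weights understate obesity
# relative to measured data. The person and the reporting "fudges" below are
# hypothetical -- chosen only to show the direction of the effect.

def bmi(weight_lb: float, height_in: float) -> float:
    """Body mass index from pounds and inches (703 * weight / height^2)."""
    return 703 * weight_lb / height_in ** 2

measured_height_in, measured_weight_lb = 67.0, 195.0   # hypothetical survey respondent
reported_height_in = measured_height_in + 1.0           # claims an extra inch of height
reported_weight_lb = measured_weight_lb - 5.0           # claims to weigh 5 lb less

print(f"measured BMI: {bmi(measured_weight_lb, measured_height_in):.1f}")  # ~30.5 -> counted as obese
print(f"reported BMI: {bmi(reported_weight_lb, reported_height_in):.1f}")  # ~28.9 -> not counted as obese
```

Spread small fudges like that across a whole survey and you can see why self-reported prevalence comes in well below measured prevalence.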

I would greatly appreciate any suggestions for what to read in the comments, especially links to work that tries to rigorously assess (rather than just hypothesize on) the relative import of various drivers of the increase in adult obesity.

On food deserts

Gina Kolata, writing for the New York Times, has sparked some debate with this article: "Studies Question the Pairing of Food Deserts and Obesity". In general I often wish that science reporting focused more on how new studies fit in with the old, rather than just on the (exciting) new ones. On first reading I noticed that one study is described as having explored the association of "the type of food within a mile and a half of their homes" with what people eat. That raised a little question mark in my mind, since I know that prior studies have often looked at distances much shorter than 1.5 miles, but it was mostly a vague hesitation. If you didn't know that before reading the article, then you've missed a major difference between the old and new results (and one that could have been easily explained). Also, describing something as "an article of faith" when it's arguably more like "the broad conclusion drawn from most prior research"... that certainly establishes an editorial tone from the beginning.

Intrigued, I sent the piece to a friend (and former public health classmate) who has worked on food deserts, to get a more informed reaction. I'm sharing her thoughts here (with permission) because this is an area of research I don't follow as closely, and her reactions helped me situate this story in the broader literature:

1. This quote from the article is so good!

"It is always easy to advocate for more grocery stores,” said Kelly D. Brownell, director of Yale University’s Rudd Center for Food Policy and Obesity, who was not involved in the studies. “But if you are looking for what you hope will change obesity, healthy food access is probably just wishful thinking.”

The "unhealthy food environment" has a much bigger impact on diet than the "healthy food environment", but it's politically more viable to work from an advocacy standpoint than a regulatory standpoint. (On that point, you still have to worry about what food is available - you can't just take out small businesses in impoverished neighborhoods and not replace it with anything.)

2. The article is too eager to dismiss the health-food access relationship. There's good research out there, but there's constant difficulty with tightening methods/definitions and deciding what to control for. The thing that I think is really powerful about the "food desert" discourse is that it opens doors to talk about race, poverty, community, culture, and more. At the end of the day, grocery stores are good for low-income areas because they bring in money and raise property values. If the literature isn't perfect on health effects, I'm still willing to advocate for them.

3. I want to know more about the geography of the study that found that low-income areas had more grocery stores than high-income areas. Were they a mix of urban, peri-urban, and rural areas? Because that's a whole other bear. (Non-shocker shocker: rural areas have food deserts... rural poverty is still a problem!)

4. The article does a good job of pointing to how difficult it is to study this. Hopkins (and the Baltimore Food Czar) are doing some work with healthy food access scores for neighborhoods. This would take into account how many healthy food options there are (supermarkets, farmers' markets, arabers, tiendas) and how many unhealthy food options there are (fast food, carry out, corner stores).

5. The studies they cite are with kids, but the relationship between food insecurity (which is different, but related to food access) and obesity is only well-established among women. (This, itself, is not talked about enough.) The thinking is that kids are often "shielded" from the effects of food insecurity by their mothers, who eat a yo-yo diet depending on the amount of food in the house.

My friend also suggested the following articles for additional reading: