"The only thing worse than being a poverty reporter is if no one ever wrote about it at all."

From a Guernica interview with Katherine Boo, author of Behind The Beautiful Forevers, on life in a Mumbai slum:

Guernica: Was it important to you to stay in the vicinity of the community?

Katherine Boo: Quite the contrary. It was important to me, in the course of my reporting in Annawadi, day after day, night after night, to leave and get a sense of the city as a whole. It is a city that until eleven years ago was unknown to me, and is changing all of the time, so I really had to explore it, learn about it. I certainly did a lot of reporting around the five-star hotels as well as Annawadi. I did my whole anthropology of five-star bathrooms, each one more lavish than the next. (Laughs.)

Even if I were to stay in Annawadi or something like it, it wouldn’t be the same. After Hurricane Katrina, for instance, I did stay in the shelter [when] I did reporting for The New Yorker. But me staying in a shelter is not the same as someone who’s been evacuated to that shelter. This whole thing of, “I’m walking a mile in their shoes by living this certain way.” Well, I’m not living that way. I can turn around and leave. We can do the best we can to get to the core of people’s circumstances, but it’s ludicrous to think that my being in Annawadi all of that time is walking in their shoes. It’s not.

The quote in the title of this post is from a section on the feelings of guilt that haunt Boo when she thinks about how her work exploits people, especially poor people. The interview's a great read. (Found via LongForm.org, a great source for creative nonfiction / narrative journalism.)

"We are nowhere"

What happens when you don't have a country? Here's the India/Bangladesh answer to that question, from the NYT a few weeks ago:

Mr. Ali, however, exists in a no man’s land. The patch of earth here on which he lives and farms is part of an archipelago of villages, known as enclaves, that are technically Bangladeshi territory but sit entirely surrounded by India, stuck on the wrong side of the border.

“The Indians say we are not Indian; the Bangladeshis say we are not Bangladeshi,” Mr. Ali said. “We are nowhere.”

There are 50 other Bangladeshi enclaves like Mr. Ali’s inside India; there are 111 Indian enclaves inside Bangladesh. The people of the enclaves are orphans, citizens of no country.

Future poverty

I'm not usually a fan of institutional blogs. When a big NGO creates a blog it's often solely for promotional purposes, and much of what I find interesting is criticism. Also, blogs are often written by younger, lower-level staff who don't necessarily have the same freedom to innovate and must have their posts approved by higher-ups. One of the few blogs associated with an NGO that does make it into my Google Reader is From Poverty to Power by Duncan Green at Oxfam. This post at the end of July caught my eye: "By 2015 Nigeria will have more poor people than India or China."

This post highlights two ideas that I've come across again and again in the last year, which make me most optimistic and hesitant about the near future:

  1. A much, much smaller percentage of the world lives in extreme poverty today than 30-40 years ago.
  2. Most of that decline has been driven by reductions in India and especially in China. Thus, as those nations continue to see reductions and many countries in Africa lag behind, the largest African countries with the youngest populations (i.e., Nigeria) will soon outpace India and China in terms of the absolute number of people living in the worst poverty. While some African countries -- I'm thinking of Nigeria and South Africa in particular -- have considerable resources to devote to poverty alleviation when they choose to, those resources pale in comparison to those available to, say, the Chinese state.

The commenters on the original post also highlight some important methodological limitations in the Brookings study that Green cited. Read it all here.

Avoid immunization, go to jail. Eek.

Via Foreign Policy:

In Nigeria, avoiding a shot could mean going to jail

As Bill Gates unveiled his plan this week to rid the world of polio, health officials in the northern Nigerian state of Kano announced their own assault on the disease. "The government will henceforth arrest and prosecute any parent that refuses to allow health workers to vaccinate his child against child-killer diseases, particularly polio," said a health ministry official.

This news, which was announced at the outset of the government's four-day vaccination campaign targeting six million children, marks a shift in government policy toward immunization programs in the north of the country. Nigeria's polio vaccination program stalled for more than a year after Muslim leaders raised doubts over the inoculations' safety in the summer of 2003 -- resulting in bans issued by some northern state governments....

I'm not familiar with every vaccination law in the world, but this seems like a first to me. If not a first, at least an exception to the norm. I don't like this more coercive approach. If you have enough resistance to a policy that you feel you need to threaten jail time, then actually making that threat -- and following through on it -- seems likely to breed more resistance.

I think governments can and should both incentivize vaccination and make it difficult to avoid without a really good reason. Any government policy should make it easier to get vaccinated against childhood diseases than to avoid vaccination, because having a fully vaccinated population is a classic public good. I like the fact that most states in the US have opt-out provisions for religious objections to vaccination, but I also think that states should not design a policy such that getting that exemption is simpler -- in terms of time and money -- than getting a child vaccinated, as is the case in many states.

But threatening to throw parents in jail? Way too heavy-handed for my taste, and too likely to backfire.

"The Most Important Medical Discovery of the 20th Century"

Just a reminder -- it wasn't open heart surgery or sequencing the human genome:

A massive cholera outbreak in refugee camps on the border of India and Bangladesh in the 1970s exposed the limitations of intravenous treatment and paved the way for a radically different approach to treating dehydration.

In 1971, the war for independence in what is now Bangladesh prompted 10 million refugees to flee to the border of West Bengal, India. The unsanitary conditions in the overcrowded refugee camps fueled a deadly cholera outbreak characterized by fatality rates approaching 30 percent. Health officials from the Indian and West Bengal governments and relief agencies faced a daunting task: Conditions were squalid and chaotic, intravenous fluid was in scarce supply, treatment facilities and transportation were inadequate, and trained personnel were limited. Mass treatment with intravenous therapy alone would not halt the impending crisis.

Dr. Dilip Mahalanabis, a cholera expert at the Johns Hopkins Centre for Medical Research and Training in Calcutta and head of a health center at one of the refugee camps, proposed an alternative to the intravenous treatment. He suggested the camp use a new method of oral replacement of fluid, known as oral rehydration therapy, that had been developed in the 1960s in Bangladesh and Calcutta.

The science was as ingenious as it was simple: A solution of water, salt, and sugar was found to be as effective in halting dehydration as intravenous therapy. Dr. Mahalanabis’ team recognized the many advantages of oral therapy over intravenous rehydration: It is immensely cheaper, at just a few cents per dose; safer and easier to administer; and more practical for mass treatment. ORT, however, had still not been tested in an uncontrolled setting, and skeptical health specialists cautioned that only health professionals and doctors should administer the new therapy.

Mahalanabis’ team moved quickly to introduce the treatment to the 350,000 residents of the camp. Packets of table salt, baking soda, and glucose were prepared in Calcutta at the diminutive cost of one penny per liter of fluid. The solution was widely distributed, with instructions about how to dissolve it in water. Despite the shortage of trained health personnel, large numbers of patients were treated, with mothers, friends, and patients themselves administering the solution.

The results were extraordinary: At the height of the outbreak, cholera fatalities in the camp using ORT dropped to less than 4 percent, compared with 20 percent to 30 percent in camps treated with intravenous therapy.

From Millions Saved, case study 8: diarrhea in Egypt. Just re-reading it for a class.

How much can farming improve people's health?

The Economist opines on agriculture and micronutrient deficiencies:

Farming ought to be especially good for nutrition. If farmers provide a varied diet to local markets, people seem more likely to eat well. Agricultural growth is one of the best ways to generate income for the poorest, who need the most help buying nutritious food. And in many countries women do most of the farm work. They also have most influence on children’s health. Profitable farming, women’s income and child nutrition should therefore go together. In theory a rise in farm output should boost nutrition by more than a comparable rise in general economic well-being, measured by GDP.

In practice it is another story. A paper* written for the Delhi meeting shows that an increase in agricultural value-added per worker from $200 to $500 a year is associated with a fall in the share of the undernourished population from about 35% to just over 20%. That is not bad. But it is no better than what happens when GDP per head grows by the same amount. So agriculture seems no better at cutting malnutrition than growth in general.

Another paper† confirms this. Agricultural growth reduces the proportion of underweight children, whereas non-agricultural growth does not. But when it comes to stunting (children who do not grow as tall as they should), it is the other way round: GDP growth produces the benefit; agriculture does not. As a way to cut malnutrition, farming seems nothing special.

Why not? Partly because many people in poor countries buy, not grow, their food—especially the higher-value, more nutritious kinds, such as meat and vegetables. So extra income is what counts. Agriculture helps, but not, it seems, by enough.

How to talk about countries

Brendan Rigby, writing at WhyDev.org, has these useful tips for how to talk about countries and poverty and whatnot while avoiding terms like "Western" and "developing":

  • Qualify what you mean
  • Avoid generalisations altogether (highly recommended)
  • Use more discrete and established categories, such as Least Developed Countries (LDCs), or Low Income & Middle Income Countries, which have set criteria
  • Reference legitimate and recognised benchmarks such as the UNDP’s Human Development Index or the World Bank’s poverty benchmark (These have their own methodology problems)
  • Examine development issues and challenges of individual communities, countries in the context of regional geography, history and relations rather than losing countries within references to regions and continents. There is a big difference between ‘poverty in Africa’ and ‘poverty in Angola’ or ‘poverty in South Africa’.

Good rules to follow. I'm generally OK with using "low and middle income countries," except that I'm not sure "income" should be the standard by which everything is defined. I wish there were a benchmark that took into account human development, but was uncontroversial (ha!) and thus accepted by all, and then we could easily classify nations (and these naming conventions are, after all, useful shorthands) by that index without worrying about accuracy or offense. Until we get to that point, I think using clearly defined measures of income and qualifying what we mean is the best way forward when generalizing -- when that's necessary or helpful at all. Which is at least sometimes, and maybe often.
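
Just to make the "qualify what you mean" point concrete, here's a tiny sketch of what classifying countries by a stated income measure might look like. The GNI-per-capita cutoffs below are placeholders I made up for illustration; the World Bank revises its actual thresholds every year, so don't mistake these for the official numbers.

```python
# Illustrative only: a toy classifier using World Bank-style income groups.
# The GNI-per-capita cutoffs are made-up placeholders, not official figures.

INCOME_THRESHOLDS = [
    (1_000, "low income"),            # assumed cutoff
    (4_000, "lower middle income"),   # assumed cutoff
    (12_000, "upper middle income"),  # assumed cutoff
]

def income_group(gni_per_capita: float) -> str:
    """Return an income-group label for a country's GNI per capita (USD)."""
    for cutoff, label in INCOME_THRESHOLDS:
        if gni_per_capita <= cutoff:
            return label
    return "high income"

print(income_group(850))     # -> low income
print(income_group(5_500))   # -> upper middle income
```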

"Small Changes, Big Results"

The Boston Review has a whole new set of articles on the movement of development economics towards randomized trials. The main article is Small Changes, Big Results: Behavioral Economics at Work in Poor Countries and the companion and criticism articles are here. They're all worth reading, of course. I found them through Chris Blattman's new post "Behavioral Economics and Randomized Trials: Trumpeted, Attacked, and Parried." I want to re-state a point I made in the comments there, because I think it's worth re-wording to get it right. It's this: I often see the new randomized trials in economics compared to clinical trials in the medical literature. There are many parallels to be sure, but the medical literature is huge, and there's really only one subset of it that offers the closest parallels.

Within global health research there is a slew of large (and not so large), randomized (or otherwise rigorously designed), controlled (placebo or not) trials that are done in "field" or "community" settings. The distinction is that clinical trials usually draw their study populations from a hospital or other clinical setting, so their results generalize to the broader population (external validity) only to the extent that the clinical population is representative of that population, while community trials are designed to draw from everyone in a given community.

Because these trials draw their subjects from whole communities -- and they're often cluster-randomized so that whole villages or clinic catchment areas are the unit that's randomized, rather than individuals -- they are typically larger, more expensive, more complicated and pose distinctive analytical and ethical problems. There's also often room for nesting smaller studies within the big trials, because the big trials are already recruiting large numbers of people meeting certain criteria and there are always other questions that can be answered using a subset of that same population. [All this is fresh on my mind since I just finished a class called "Design and Conduct of Community Trials," which is taught by several Hopkins faculty who run very large field trials in Nepal, India, and Bangladesh.]
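
To put a rough number on why cluster randomization makes these trials so much bigger: the usual rule of thumb is the design effect, DEFF = 1 + (m - 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation. Here's a minimal sketch with hypothetical numbers of my own, not from any particular trial:

```python
import math

# Design effect for cluster-randomized trials: DEFF = 1 + (m - 1) * ICC,
# where m is average cluster size and ICC is the intracluster correlation.
# All numbers below are hypothetical.

def design_effect(cluster_size: float, icc: float) -> float:
    """Factor by which cluster randomization inflates the required sample size."""
    return 1 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual: int, cluster_size: float, icc: float) -> int:
    """Sample size needed when whole villages or catchment areas are randomized."""
    return math.ceil(n_individual * design_effect(cluster_size, icc))

# A trial that would need 2,000 individually randomized subjects, run instead
# with clusters of ~50 people and a modest ICC of 0.02:
print(design_effect(50, 0.02))                # 1.98
print(clustered_sample_size(2000, 50, 0.02))  # 3960
```

Even a modest intracluster correlation nearly doubles the required sample size in this toy example, which is part of why these studies end up so large and expensive.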

Blattman is right to argue for registration of experimental trials in economics research, as is done with medical studies. (For nerdy kicks, you can browse registered trials at ISRCTN.) But many of the problems he quotes Eran Bendavid describing in economics trials--"Our interventions and populations vary with every trial, often in obscure and undocumented ways"--can also be true of community trials in health.

Likewise, these trials -- which often take years and hundreds of thousands of dollars to run -- often yield a lot of knowledge about the process of how things are done. Essential elements include doing good preliminary studies (such as validating your instruments), having continuous qualitative feedback on how the study is going, and gathering extra data on "process" questions so you'll know why something worked or not, and not just whether it did (a lot of this is addressed in Blattman's "Impact Evaluation 2.0" talk). I think the best parallels for what that research should look like in practice will be found in the big community trials of health interventions in the developing world, rather than in clinical trials in US and European hospitals.

On community health workers

Sometimes I start writing a post and it ends up somewhere completely different than I had originally imagined it. My last post, on why there might be less good global health blogging out there than you'd expect, was actually originally going to be a simple link and quote from what I think is a very good post. A global health blogger named Emma notes some recent coverage of community health worker programs in the NYTimes (Villages Without Doctors). Then Emma writes:

There’s nothing more valuable than a good community health worker. [...Some reasons they're good....] When this happens, it’s a beautiful model.

When it doesn’t—and it doesn’t far more often than anyone would like to admit—community health workers are at best a drain on expenses with little to show for it and at worst a THREAT to community health instead of an asset.  They can lure organizations and communities into complacency and miss opportunities for training higher-level health care workers, breed antibiotic-resistant strains of diseases by misuse of antibiotics, or give a false sense of security to people who actually need higher levels of care, among other things.  If you think about who CHWs usually are—rural, uneducated, and as often as not illiterate or semi-literate people pulled from their communities and given tremendous responsibility with short training courses—this isn’t terribly surprising.

Emma also highlights a companion NYT piece called What Makes Community Health Care Work?

The article talks about really important things—make the program sustainable enough so that it can last after the donor leaves!  Teach the CHWs to teach so even if the CHW doesn’t last some of their lessons will! Provide support for newly trained CHWs so they don’t feel stranded and alone!  Expand in ways that make sense for the specific setting and situation!  Get the country’s government on board!  But…

There’s always a but.  These things are HARD.  Really hard.  Of COURSE we want to do supportive supervision for the CHW, to watch how they practice and build their skills one-on-one based on each CHW's specific strengths and weaknesses. Of COURSE we want to design a program that can last long after we don’t have money from a donor anymore (emergency grants are usually 1-2 years at most).  Of COURSE we want the CHWs to teach their communities how to live healthier lives.  But supportive supervision involves enough organization employees to conduct regular visits to remote and widely dispersed sites, and a security situation that allows these workers to safely go out into communities, and enough vehicles to get out to remote sites (and donors are often reluctant to fund vehicles and the fuel and insurance they take).

Read the rest here.

History refresh: AZT and ethics

A professor pointed me to this online history and ethics lesson from the Harvard Kennedy School's Program on Ethical Issues in International Research: The Debate Over Clinical Trials of AZT to Prevent Mother-to-Infant Transmission of HIV in Developing Nations. It's surprisingly readable, and the issues debated are surprisingly current.

In 1994, researchers in the US and France announced stunning news of a rare victory in the battle against the AIDS pandemic. Studies conducted in both countries had shown conclusively that a regimen of the drug AZT, administered prenatally to HIV-positive pregnant women and then to their babies after birth, reduced the rate of mother-to-infant transmission of HIV by fully two-thirds. The results of the clinical trials constituted "one of the most dramatic discoveries of the AIDS epidemic," the New York Times declared, and one of the most heartening as well.

The new regimen--known by its study name, AIDS Clinical Trials Group (ACTG) 076 or, often, simply "076"--offered the epidemic's most vulnerable targets, newborns, their best hope thus far of a healthy childhood and a normal life span. The number of infants who might benefit from this research was significant: according to World Health Organization (WHO) figures, as many as five to ten million children born between 1990-2000 would be infected with HIV. In the mid-1990s, it was estimated that HIV-infected infants were being born at the rate of 1,000 a day worldwide.

So impressive were the findings of ACTG 076--and so substantial the difference in the transmission rate between subjects given AZT and those given a placebo (eight percent versus 25 percent)--that the clinical trials, which were still ongoing, were stopped early, and all participants in the studies were treated with AZT. In June 1994, after reviewing the study results, the US Public Health Service recommended that the 076 regimen be administered to HIV-infected pregnant women in the US as standard treatment to prevent transmission of the virus.

But while 076 was hailed as a major breakthrough, the celebration was somewhat muted. For a variety of reasons, the new treatment regimen would not likely reach those who most desperately needed it: pregnant women in the developing nations of the world and, most particularly, sub-Saharan Africa, where AIDS was wreaking devastation on a scale unimagined in the West.

I think one reason why graduate school can be so overwhelming is that you're trying to learn the basic technical skills of a field or subfield, and also playing catch-up on everything that's been written on your field, ever. True, some of it's outdated, and there are reviews that bring you up to speed on questions that are basically settled. But there's a lot of history that gets lost in the shuffle, and it's easy to forget that something was once controversial. Something as universally agreed upon today as using antiretrovirals to prevent mother-to-child transmission of HIV was once the subject of massive, heart-wrenching debate. I tend to wax pessimistic and think we're doomed to repeat the mistakes of the past regardless of whether we know our history, because we either can't agree on what the mistakes of the past were, or because past conflicts represent unavoidable differences of opinion, certainty, and power. But getting a quick refresher on the history of a field is valuable because it puts current debates in perspective.

Circumcision to the Rescue?

The Atlantic's Shaun Raviv has a long article on the scale-up of male circumcision for HIV prevention in Swaziland online here. According to the article (and other sources I've read) circumcision is in demand in Swaziland, but that demand isn't necessarily driven by accurate information:

Many Swazi men want to get circumcised, “but most of them for the wrong reason,” says Bheki Vilane, the national director of Marie Stopes Swazi­land, a non-governmental organization performing circumcisions. He’s voicing the main concern about circumcision as an HIV-prevention strategy: will it make Swazi men even more sexually reckless than they are already? “Some of the men have the misconception that they’ll be 100 percent safe.” To dispel this myth, NGOs are ensuring that every patient goes through counseling before and after the procedure. Each man is told to use condoms, and also given the option to be tested for HIV, which about 85 percent agree to do.

This massive scale-up is of course based on three randomized controlled trials:

[In 2005] a randomized controlled trial in South Africa (later confirmed by studies in Uganda and Kenya) found that circumcised men are as much as 60 percent less likely to contract HIV through heterosexual sex.

What is often not mentioned is the difference between the intervention that was tested in those trials and the intervention that's being scaled up. I would summarize the intervention tested in the randomized trials as "male circumcision with very intensive counseling about the remaining risk (many visits), in an environment where fewer of the participants expected it to completely eliminate risk," versus that counseling alone. The trials showed a strong and surprisingly consistent effect across the three studies.

But I would describe the intervention that's being scaled up as "male circumcision with much less intensive counseling (one visit) in an environment where many of the participants have unrealistically high expectations of risk reduction."

I'm worried that behavioral disinhibition following circumcision will more than make up for the risk reduction from the procedure itself. Thus, I'm interested in seeing more data from evaluations of these programs, as well as population-level data that includes the less-well-supervised circumcision operations that are likely to spring up in response to demand.
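
To make that worry a bit more concrete, here's a back-of-the-envelope sketch. All of the inputs (per-exposure transmission probability, number of exposures) are assumptions I picked for illustration, and the model ignores condom use, partner dynamics, and everything else that matters; it just asks how much extra exposure it would take to erase a 60 percent per-exposure risk reduction.

```python
# Toy model with made-up numbers: cumulative risk over n exposures is
# 1 - (1 - p)^n. How many extra exposures cancel out a 60% per-exposure
# reduction? Ignores condoms, partner dynamics, and much else.

def cumulative_risk(per_act_risk: float, n_exposures: int) -> float:
    return 1 - (1 - per_act_risk) ** n_exposures

BASE_RISK = 0.001   # assumed per-exposure transmission probability
REDUCTION = 0.60    # headline effect from the three trials
EXPOSURES = 500     # assumed number of exposures over some period

uncircumcised = cumulative_risk(BASE_RISK, EXPOSURES)
circumcised = cumulative_risk(BASE_RISK * (1 - REDUCTION), EXPOSURES)

# How many exposures before the circumcised cumulative risk catches up?
n = EXPOSURES
while cumulative_risk(BASE_RISK * (1 - REDUCTION), n) < uncircumcised:
    n += 1

print(f"Cumulative risk without circumcision: {uncircumcised:.3f}")
print(f"Cumulative risk with circumcision:    {circumcised:.3f}")
print(f"Exposures needed to erase the benefit: {n} (vs. {EXPOSURES})")
```

Under these particular assumptions it would take roughly two and a half times as many exposures to wipe out the benefit, but the point is that the answer depends entirely on behavior, which is exactly what the scaled-up, lightly counseled programs may change.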

The article quotes Dr. Vusi Magaula, chair of Swaziland's male circumcision task force, as saying, "With the highest prevalence of HIV in a population ever recorded, we have got to do something to intervene.” But does the urge to do something justify the programs being implemented, especially if there's a very real risk of harm? Unfortunately, I don't think we really know the answer to that question, and only the data will tell.

Africa is Really Big

This has been going around the development blogs, but I think it's still worth posting in case you haven't seen it. The Mercator projection maps that we're so used to seeing greatly exaggerate the size of land masses far from the equator relative to those near it. Africa (at 11.7 million square miles) is larger than North America (9.5 million square miles) but appears roughly the same size as Greenland (a measly 0.8 million square miles). But this is the true size of Africa in comparison to the continental US, China, India, and most of Europe:

The True Size of Africa infographic

(h/t to the always fascinating Information is Beautiful)
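
For the map nerds: the reason Greenland balloons is that the Mercator projection's linear scale factor at a given latitude is sec(latitude), so apparent areas are inflated by roughly sec² of the latitude relative to the equator. A quick sketch:

```python
import math

# The Mercator projection's linear scale factor at latitude phi is sec(phi),
# so apparent *areas* are inflated by roughly sec(phi)^2 relative to the
# equator. Latitudes below are rough, illustrative picks.

def mercator_area_inflation(latitude_deg: float) -> float:
    """Approximate factor by which Mercator inflates areas at a given latitude."""
    return 1 / math.cos(math.radians(latitude_deg)) ** 2

for place, lat in [("Equator (much of Africa)", 0.0),
                   ("Central Europe", 50.0),
                   ("Southern Greenland", 60.0),
                   ("Central Greenland", 72.0)]:
    print(f"{place:>25}: x{mercator_area_inflation(lat):.1f}")
```

At 72 degrees north that's roughly a tenfold exaggeration of area, which is how a landmass one-fifteenth the size of Africa ends up looking like its equal.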

This map also reminds me of an ODT map on display in one of the hallways at Hopkins. While it also uses the Mercator projection, it has the South as up and the North as down, showcasing just how arbitrary our designation of North as up really is. Something like this:

If any readers want to get me the Peters equal-area South-is-up map as the ultimate geek gift for the holidays, I would be eternally grateful.