Ebola and health workers

It starts with familiar flu-like symptoms: a mild fever, headache, muscle and joint pains. But within days this can quickly descend into something more exotic and frightening: vomiting and diarrhoea, followed by bleeding from the gums, the nose and gastrointestinal tract.

Death comes in the form of either organ failure or low blood pressure caused by the extreme loss of fluids.

Such fear-inducing descriptions have been doing the rounds in the media lately.

However, this is not Ebola but rather Dengue Shock Syndrome, an extreme form of dengue fever, a mosquito-borne disease that struggles to make the news.

That's Seth Berkley, CEO of the GAVI Alliance, writing an opinion piece for the BBC. Berkley argues that Ebola grabs headlines not because it is particularly infectious or deadly, but because those of us from wealthy countries have otherwise forgotten what it's like to be confronted with a disease we do not know how to or cannot afford to treat.

However, in wealthy countries, thanks to the availability of modern medicines, many of these diseases can now usually be treated or cured, and thanks to vaccines they rarely have to be. Because of this blessing we have simply forgotten what it is like to live under threat of such infectious and deadly diseases, and forgotten what it means to fear them.

Ebola does combine infectiousness and rapid lethality, even with treatment, in a way that few diseases do, and it has been uniquely exoticized by books like The Hot Zone. But as Berkley and many others have pointed out, the fear isn't really justified in wealthy countries: their health systems can effectively contain Ebola cases if they arrive, which I'd guess is more likely than not. So please ignore the sensationalism on CNN and elsewhere. (See, for example, Tara Smith on other cases in which hemorrhagic fevers were imported into the US and contained.)

But one way that Ebola differs -- in degree if not in kind -- from the other diseases Berkley cites (dengue, measles, childhood diseases) is that its outbreaks are both symptomatic of weak health systems and extremely destructive to the fragile systems that were least able to cope in the first place.

Like the proverbial canary in the coal mine, an Ebola outbreak reveals underlying weaknesses in health systems. Shelby Grossman highlights this article from Africa Confidential:

MSF set up an emergency clinic in Kailahun [Sierra Leone] in June but several nurses had already died in Kenema. By early July, over a dozen health workers, nurses and drivers in Kenema had contracted Ebola and five nurses had died. They had not been properly equipped with biohazard gear of whole-body suit, a hood with an opening for the eyes, safety goggles, a breathing mask over the mouth and nose, nitrile gloves and rubber boots.

On 21 July, the remaining nurses went on strike. They had been working twelve-hour days, in biohazard suits at high temperatures in a hospital mostly without air conditioning. The government had promised them an extra US$30 a week in danger money but despite complaints, no payment was made. Worse yet, on 17 June, the inexperienced Health and Sanitation Minister, Miatta Kargbo, told Parliament that some of the nurses who had died in Kenema had contracted Ebola through promiscuous sexual activity.

Only one nurse showed up for work on 22 July, we hear, with more than 30 Ebola patients in the hospital. Visitors to the ward reported finding a mess of vomit, splattered blood and urine. Two days later, Khan, who was leading the Ebola fight at the hospital and now with very few nurses, tested positive. The 43-year-old was credited with treating more than 100 patients. He died in Kailahun at the MSF clinic on 29 July...

In addition to the tragic loss of life, there's also the matter of distrust of health facilities that will last long after the epidemic is contained. Here's Adam Nossiter, writing for the NYT on the state of that same hospital in Kenema as of two days ago:

The surviving hospital workers feel the stigma of the hospital acutely.

“Unfortunately, people are not coming, because they are afraid,” said Halimatu Vangahun, the head matron at the hospital and a survivor of the deadly wave that decimated her nursing staff. She knew, all throughout the preceding months, that one of her nurses had died whenever a crowd gathered around her office in the mornings.

There's much to read on the current outbreak -- see also this article by Denise Grady and Sheri Fink (one of my favorite authors) on tracing the index patient (first case) back to a child who died in December 2013. One of the saddest things I've read about previous Ebola outbreaks is this profile of Dr. Matthew Lukwiya, a physician who died fighting Ebola in Uganda.

The current outbreak is different in terms of scale and its having reached urban areas, but if you read through these brief descriptions of past Ebola outbreaks (via Wikipedia) you'll quickly see that transmission to health workers at hospitals is far too typical. Early transmission seems to be amplified by health facilities that weren't properly equipped to handle the disease. (See also this article (PDF) on a 1976 outbreak.) The community and the brave health workers responding to the epidemic then pay the price.

Ebola's toll on health workers is particularly harsh given that the affected countries are starting with an incredible deficit. I was recently looking up WHO statistics on health worker density, and it struck me that the three countries at the center of the current Ebola outbreak are all close to the very bottom of rankings by health worker density. Here are the most recent figures for the ratio of physicians and nurses to the population of each country:*

Liberia has already lost three physicians to Ebola, which is especially tragic given that there are so few Liberian physicians to begin with: somewhere around 60 (in 2008). The equivalent health systems impact in the United States would be something like losing 40,000 physicians in a single outbreak.
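(For a rough sense of that comparison, here's a back-of-envelope sketch; the US physician count below is my own assumed round number, not a figure from the post.)

```python
# Back-of-envelope only; the US workforce figure is an assumed round number.
liberian_physicians = 60      # approximate count cited above (2008)
physicians_lost = 3
us_physicians = 850_000       # assumed order of magnitude for the US

share_lost = physicians_lost / liberian_physicians    # = 0.05, i.e. 5%
us_equivalent = share_lost * us_physicians            # roughly 42,500

print(f"Losing {physicians_lost} of ~{liberian_physicians} physicians is "
      f"{share_lost:.0%} of the workforce -- roughly {us_equivalent:,.0f} "
      f"physicians in US terms.")
```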

After the initial emergency response subsides -- which will now be on an unprecedented scale and for an unprecedented length of time -- I hope donors will make the massive investments in health worker training and systems strengthening that these countries needed prior to the epidemic. More and better trained and equipped health workers will save lives otherwise lost to all the other infectious diseases Berkley mentioned in the article linked above, but they will also stave off future outbreaks of Ebola or new diseases yet unknown. And greater investments in health systems years ago would have been a much less costly way -- in terms of money and lives -- to limit the damage of the current outbreak.  

(*Note on data: this is quick-and-dirty, just to illustrate the scale of the problem. Ideally you'd use more recent data, compare health worker numbers with population numbers from the same year, and note the data quality issues surrounding counts of health workers.)

(Disclaimer: I've remotely supported some of CHAI's work on health systems in Liberia, but these are my personal views.)

Typhoid counterfactuals

An acquaintance (who doesn't work in public health) recently got typhoid while traveling. She noted that she had had the typhoid vaccine less than a year ago but got sick anyway. Surprisingly to me, even though she knew "the vaccine was only about 50% effective," she now felt that it was a mistake to have gotten the vaccine. Why? "If you're going to get the vaccine and still get typhoid, what's the point?" I disagreed, but I'm afraid my defense wasn't particularly eloquent in the moment: I tried to say that, well, if it's 50% effective and you and I both got the vaccine, then only one of us would get typhoid instead of both of us. That's better, right? You just drew the short straw. Or, if you would have otherwise gotten typhoid twice, now you'll only get it once!

These answers weren't reassuring, in part because thinking counterfactually -- what I was trying to do -- isn't always easy. Epidemiologists do it because they're told ad nauseam to approach causal questions by first asking "how could I observe the counterfactual?" At one point after finishing my epidemiology coursework I started writing a post called "The Top 10 Things You'll Learn in Public Health Grad School," and three or four of the ten were going to be "think counterfactually!"

A particularly artificial and clean way of observing this difference -- between what happened and what could have otherwise happened -- is to randomly assign people to two groups (say, vaccine and placebo). If the groups are big enough to average out any differences between them, then the differences in sickness you observe are due to the vaccine. It's more complicated in practice, but that's where we get numbers like the efficacy of the typhoid vaccine -- which is actually a bit higher than 50%.

You can probably see where this is going: while the randomized trial gives you the average effect, for any given individual in the trial they might or might not get sick. Then, because any individual is assigned only to the treatment or control, it's hard to pin their outcome (sick vs. not sick) on that alone. It's often impossible to get an exhaustive picture of individual risk factors and exposures so as to explain exactly which individuals will get sick or not in advance. All you get is an average, and while the average effect is really, really important, it's not everything.
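To make the counterfactual point a bit more concrete, here's a toy simulation (every number invented): a vaccine that halves each person's risk shows up cleanly as a ~50% average effect across arms, even though for any one vaccinated person who gets sick we never observe what would have happened without the vaccine.

```python
import random

random.seed(0)

N = 100_000            # people per arm (invented)
baseline_risk = 0.04   # risk of typhoid without the vaccine (invented)
efficacy = 0.5         # the vaccine halves each person's risk

def potential_outcomes():
    """One person's two potential outcomes: sick without / with the vaccine."""
    sick_without = random.random() < baseline_risk
    sick_with = sick_without and (random.random() > efficacy)
    return sick_without, sick_with

people = [potential_outcomes() for _ in range(2 * N)]
control, vaccinated = people[:N], people[N:]

# In the trial we only ever observe one of the two outcomes per person:
control_cases = sum(sick_without for sick_without, _ in control)
vaccine_cases = sum(sick_with for _, sick_with in vaccinated)

relative_risk = (vaccine_cases / N) / (control_cases / N)
print(f"estimated efficacy = 1 - RR = {1 - relative_risk:.0%}")
# The average effect is recovered, but no individual's counterfactual is.
```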

This is related somewhat to Andrew Gelman's recent distinction between forward and reverse causal questions, which he defines as follows:

1. Forward causal inference. What might happen if we do X? What are the effects of smoking on health, the effects of schooling on knowledge, the effect of campaigns on election outcomes, and so forth?

2. Reverse causal inference. What causes Y? Why do more attractive people earn more money? Why do many poor people vote for Republicans and rich people vote for Democrats? Why did the economy collapse?

The randomized trial tries to give us an estimate of the forward causal question. But for someone who already got sick, the reverse causal question is primary, and the answer that "you were 50% less likely to have gotten sick" is hard to internalize. As Gelman says:

But reverse causal questions are important too. They’re a natural way to think (consider the importance of the word “Why”) and are arguably more important than forward questions. In many ways, it is the reverse causal questions that lead to the experiments and observational studies that we use to answer the forward questions.

The moral of the story -- other than not sharing your disease history with a causal inference buff -- is that reconciling the quantitative, average answers we get from the forward questions with the individual experience won't always be intuitive.

First responses to DEVTA roll in

In my last post I highlighted the findings from the DEVTA trial of deworming and Vitamin A in India, noting that the Vitamin A results would be more controversial. I said I expected commentaries over the coming months, but we didn't have to wait that long after all. First up is a BBC Health Check program featuring a discussion of DEVTA with Richard Peto, one of the study's authors. It's for a general audience so it doesn't get very technical, and because of that it really grated when they described this as a "clinical trial," as that has certain connotations of rigor that aren't reflected in the design of the study. If DEVTA is a clinical trial, then so was

Peto also says there were two reasons for the massive delay in publishing the trial: 1) time to check things and "get it straight," and 2) that they were "afraid of putting up a trial with a false negative." [An aside for those interested in publication bias issues: can you imagine an author with strong positive findings ever saying the same thing about avoiding false positives?!]

Peto ends by sounding fairly neutral re: Vitamin A (portraying himself in a middle position between advocates in favor and skeptics opposed) but acknowledges that with their meta-analysis results Vitamin A is still "cost-effective by many criteria."

Second is a commentary in The Lancet by Al Sommer, Keith West, and Reynaldo Martorell. A little history: Sommer ran the first big Vitamin A trials in Sumatra (published in 1986) and is the former dean of the Johns Hopkins School of Public Health. (Sommer's long-term friendship with Michael Bloomberg, who went to Hopkins as an undergrad, is also one reason the latter is so big on public health.) For more background, here's a recent JHU story on Sommer receiving a $1 million research prize in part for his work on Vitamin A.

Part of their commentary is excerpted below, with my highlights in bold:

But this was neither a rigorously conducted nor acceptably executed efficacy trial: children were not enumerated, consented, formally enrolled, or carefully followed up for vital events, which is the reason there is no CONSORT diagram. Coverage was ascertained from logbooks of overworked government community workers (anganwadi workers), and verified by a small number of supervisors who periodically visited randomly selected anganwadi workers to question and examine children who these workers gathered for them. Both anganwadi worker self-reports, and the validation procedures, are fraught with potential bias that would inflate the actual coverage.

To achieve 96% coverage in Uttar Pradesh in children found in the anganwadi workers' registries would have been an astonishing feat; covering 72% of children not found in the anganwadi workers' registries seems even more improbable. In 2005–06, shortly after DEVTA ended, only 6·1% of children aged 6–59 months in Uttar Pradesh were reported to have received a vitamin A supplement in the previous 6 months according to results from the National Family Health Survey, a national household survey representative at national and state level.... Thus, it is hard to understand how DEVTA ramped up coverage to extremely high levels (and if it did, why so little of this effort was sustained). DEVTA provided the anganwadi workers with less than half a day's training and minimal if any incentive.

They also note that the study's funding was minimal compared with that of more rigorous studies, which may itself be an indication of its quality. And as an indication that there will almost certainly be alternative meta-analyses that weight the different studies differently:

We are also concerned that Awasthi and colleagues included the results from this study, which is really a programme evaluation, in a meta-analysis in which all of the positive studies were rigorously designed and conducted efficacy trials and thus represented a much higher level of evidence. Compounding the problem, Awasthi and colleagues used a fixed-effects analytical model, which dramatically overweights the results of their negative findings from a single population setting. The size of a study says nothing about the quality of its data or the generalisability of its findings.
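To see concretely why the choice of model matters, here's a minimal sketch comparing a fixed-effect pool with a DerSimonian-Laird random-effects pool. The log risk ratios and variances below are invented for illustration (a handful of modest positive trials plus one very large, very precise near-null trial), not the actual Vitamin A data.

```python
import math

# Invented numbers for illustration: eight modest positive trials plus one
# huge, precise near-null trial. y = log risk ratio, v = its variance.
studies = [(-0.30, 0.04)] * 8 + [(-0.04, 0.001)]
ys = [y for y, _ in studies]
vs = [v for _, v in studies]

def pool(weights):
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Fixed effect: weights are pure inverse variances, so the big study dominates.
w_fixed = [1 / v for v in vs]
fixed = pool(w_fixed)

# DerSimonian-Laird random effects: estimate between-study variance tau^2 and
# add it to every study's variance, pulling the weights closer together.
q = sum(w * (y - fixed) ** 2 for w, y in zip(w_fixed, ys))
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)
w_random = [1 / (v + tau2) for v in vs]
random_effects = pool(w_random)

print(f"fixed-effect pooled RR:   {math.exp(fixed):.2f}")
print(f"random-effects pooled RR: {math.exp(random_effects):.2f}")
# The big near-null study pulls the fixed-effect estimate much further
# toward the null than the random-effects estimate.
```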

I'm sure there will be more commentaries to follow. In my previous post I noted that I'm still trying to wrap my head around the findings, and I think that's still right. If I had time I'd dig into this a bit more, especially the relationship with the Indian National Family Health Survey. But for now I think it's safe to say that two parsimonious explanations for how to reconcile DEVTA with the prior research are emerging:

1. DEVTA wasn't all that rigorous and thus never achieved the high population coverage levels necessary to have a strong mortality impact; the mortality impact was attenuated by poor coverage, resulting in the absence of the statistically significant effect seen in prior studies. Thus it shouldn't move our priors all that much. (Sommer et al. seem to be arguing for this.) Or,

2. There's some underlying change in the populations between the older studies and these newer studies that causes the effect of Vitamin A to decline -- this could be nutrition, vaccination status, shifting causes of mortality, etc. If you believe this, then you might discount studies because they're older.

(h/t to @karengrepin for the Lancet commentary.)

A massive trial, a huge publication delay, and enormous questions

It's been called the "largest clinical* trial ever": DEVTA (Deworming and Enhanced ViTamin A supplementation), a study of Vitamin A supplementation and deworming in over 2 million children in India, just published its results. "DEVTA" may mean "deity" or "divine being" in Hindi, but some global health experts and advocates will probably think these results come straight from the devil. Why? Because they call into question -- or at least attenuate -- our estimates of the effectiveness of some of the easiest, best "bang for the buck" interventions out there. Data collection was completed in 2006, but the results were just published in The Lancet. Why the massive delay? According to the accompanying discussion paper, it sounds like the delay was rooted in very strong resistance to the results after preliminary outcomes were presented at a conference in 2007. If it weren't for the repeated and very public shaming by the authors of recent Cochrane Collaboration reviews, we might not have the results even today. (Bravo again, Cochrane.)

So, about DEVTA. In short, this was a cluster-randomized 2x2 factorial trial: clusters were allocated to receive 6-monthly vitamin A, 6-monthly albendazole, both, or neither (open control).

The results were published as two separate papers, one on Vitamin A and one on deworming, with an additional commentary piece.

The controversy is going to be more about what this trial didn't find than about what it did: the confidence interval on the Vitamin A study's mortality estimate (mortality ratio 0.96, 95% confidence interval of 0.89 to 1.03) is consistent with anything from a mortality reduction as large as 11% to an increase as large as 3%. The consensus from previous Vitamin A studies was mortality reductions of 20-30%, so this is a big surprise. Here's the abstract to that paper:

Background

In north India, vitamin A deficiency (retinol <0·70 μmol/L) is common in pre-school children and 2–3% die at ages 1·0–6·0 years. We aimed to assess whether periodic vitamin A supplementation could reduce this mortality.

Methods

Participants in this cluster-randomised trial were pre-school children in the defined catchment areas of 8338 state-staffed village child-care centres (under-5 population 1 million) in 72 administrative blocks. Groups of four neighbouring blocks (clusters) were cluster-randomly allocated in Oxford, UK, between 6-monthly vitamin A (retinol capsule of 200 000 IU retinyl acetate in oil, to be cut and dripped into the child's mouth every 6 months), albendazole (400 mg tablet every 6 months), both, or neither (open control). Analyses of retinol effects are by block (36 vs 36 clusters).

The study spanned 5 calendar years, with 11 6-monthly mass-treatment days for all children then aged 6–72 months.  Annually, one centre per block was randomly selected and visited by a study team 1–5 months after any trial vitamin A to sample blood (for retinol assay, technically reliable only after mid-study), examine eyes, and interview caregivers. Separately, all 8338 centres were visited every 6 months to monitor pre-school deaths (100 000 visits, 25 000 deaths at ages 1·0–6·0 years [the primary outcome]). This trial is registered at ClinicalTrials.gov, NCT00222547.

Findings

Estimated compliance with 6-monthly retinol supplements was 86%. Among 2581 versus 2584 children surveyed during the second half of the study, mean plasma retinol was one-sixth higher (0·72 [SE 0·01] vs 0·62 [0·01] μmol/L, increase 0·10 [SE 0·01] μmol/L) and the prevalence of severe deficiency was halved (retinol <0·35 μmol/L 6% vs 13%, decrease 7% [SE 1%]), as was that of Bitot's spots (1·4% vs 3·5%, decrease 2·1% [SE 0·7%]).

Comparing the 36 retinol-allocated versus 36 control blocks in analyses of the primary outcome, deaths per child-care centre at ages 1·0–6·0 years during the 5-year study were 3·01 retinol versus 3·15 control (absolute reduction 0·14 [SE 0·11], mortality ratio 0·96, 95% CI 0·89–1·03, p=0·22), suggesting absolute risks of death between ages 1·0 and 6·0 years of approximately 2·5% retinol versus 2·6% control. No specific cause of death was significantly affected.

Interpretation

DEVTA contradicts the expectation from other trials that vitamin A supplementation would reduce child mortality by 20–30%, but cannot rule out some more modest effect. Meta-analysis of DEVTA plus eight previous randomised trials of supplementation (in various different populations) yielded a weighted average mortality reduction of 11% (95% CI 5–16, p=0·00015), reliably contradicting the hypothesis of no effect.

Note that instead of just publishing these no-effect results and leaving the meta-analysis to a separate publication, the authors go ahead and do their own meta-analysis of DEVTA plus previous studies and report that -- much attenuated, but still positive -- effect in their conclusion. I think that's a fair approach, but also reveals that the study's authors very much believe there are large Vitamin A mortality effects despite the outcome of their own study!

[The only media coverage I've seen of these results so far comes from the Times of India, which includes quotes from the authors and Abhijit Banerjee.]

To be honest, I don't know what to make of the inconsistency between these findings and previous studies, and am writing this post in part to see what discussion it generates. I imagine there will be more commentaries on these findings over the coming months, with some decrying the results and methodologies and others seeing vindication in them. In my view the best possible outcome is an ongoing concern for issues of external validity in biomedical trials.

What do I mean? Epidemiologists tend to think that external validity is less of an issue in randomized trials of biomedical interventions -- as opposed to behavioral, social, or organizational trials -- but this isn't necessarily the case. Trials of vaccine efficacy have shown quite different efficacy for the same vaccine (see BCG and rotavirus) in different locations, possibly due to differing underlying nutritional status or disease burdens. Our ability to interpret discrepant findings can only be as sophisticated as the available data allows, or as sophisticated as allowed by our understanding of the biological and epidemiologic mechanisms that matter on the pathway from intervention to outcome. We can't go back in time and collect additional information (think nutrition, immune response, baseline mortality, and so forth) on studies far in the past, but we can keep such issues in mind when designing trials moving forward.

All that to say, these results are confusing, and I look forward to seeing the global health community sort through them. Also, while the outcomes here (health outcomes) are different from those in the Kremer deworming study (education outcomes), I've argued before that lack of effect or small effects on the health side should certainly influence our judgment of the potential education outcomes of deworming.

*I think given the design it's not that helpful to call this a 'clinical' trial at all - but that's another story.

On deworming

GiveWell's Alexander Berger just posted a more in-depth blog review of the (hugely impactful) Miguel and Kremer deworming study. Here's some background: the Cochrane review, GiveWell's first response to it, and IPA's very critical response. I've been meaning to blog on this since the new Cochrane review came out, but haven't had time to do the subject justice by really digging into all the papers. So I hope you'll forgive me for just sharing the comment I left at the latest GiveWell post, as it's basically what I was going to blog anyway:

Thanks for this interesting review — I especially appreciate that the authors [Miguel and Kremer] shared the material necessary for you [GiveWell] to examine their results in more depth, and that you talk through your thought process.

However, one thing you highlighted in your post on the new Cochrane review that isn’t mentioned here, and which I thought was much more important than the doubts about this Miguel and Kremer study, was that there have been so many other studies that did not find large effects on health outcomes! I’ve been meaning to write a long blog post about this when I really have time to dig into the references, but since I’m mid-thesis I’ll disclaim that this quick comment is based on recollection of the Cochrane review and your and IPA’s previous blog posts, so forgive me if I misremember something.

The Miguel and Kremer study gets a lot of attention in part because it had big effects, and in part because it measured outcomes that many (most?) other deworming studies hadn’t measured — but it’s not as if we believe these outcomes to be completely unrelated. This is a case where what we believe the underlying causal mechanism for the social effects to be is hugely important. For the epidemiologists reading, imagine this as a DAG (a directed acyclic graph) where the mechanism is “deworming -> better health -> better school attendance and cognitive function -> long-term social/economic outcomes.” That’s at least how I assume the mechanism is hypothesized.

So while the other studies don’t measure the social outcomes, it’s harder for me to imagine how deworming could have a very large effect on school and social/economic outcomes without first having an effect on (some) health outcomes — since the social outcomes are ‘downstream’ from the health ones. Maybe different people are assuming that something else is going on — that the health and social outcomes are somehow independent, or that you just can’t measure the health outcomes as easily as the social ones, which seems backwards to me. (To me this was the missing gap in the IPA blog response to GiveWell’s criticism as well.)

So continuing to give so much attention to this study, even if it’s critical, misses what I took to be the biggest takeaway from that review — there have been a bunch of studies that showed only small effects or none at all. They were looking at health outcomes, yes, but those aren’t unrelated to the long-term development, social, and economic effects. You [GiveWell] try to get at the external validity of this study by looking for different size effects in areas with different prevalence, which is good but limited. Ultimately, if you consider all of the studies that looked at various outcomes, I think the most plausible explanation for how you could get huge (social) effects in the Miguel Kremer study while seeing little to no (health) effects in the others is not that the other studies just didn’t measure the social effects, but that the Miguel Kremer study’s external validity is questionable because of its unique study population.

(Emphasis added throughout)
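The "downstream" argument in that comment can be put in toy form. In the sketch below (all coefficients invented), deworming affects school attendance only through health; if the effect on health is small, the effect on attendance is bounded by it, no matter how strong the health-to-schooling link is.

```python
import random

random.seed(1)

# Toy linear chain: deworming -> health -> school attendance.
a = 0.05   # small (invented) effect of deworming on the health mediator
b = 0.60   # strong (invented) effect of health on attendance

treated, control = [], []
for _ in range(200_000):
    dewormed = random.random() < 0.5
    health = a * dewormed + random.gauss(0, 1)
    attendance = b * health + random.gauss(0, 1)
    (treated if dewormed else control).append(attendance)

effect = sum(treated) / len(treated) - sum(control) / len(control)
print(f"simulated attendance effect: {effect:.3f} (path product a*b = {a * b:.3f})")
# A big schooling effect would require a correspondingly big health effect
# somewhere upstream -- which is exactly what the other trials didn't find.
```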

 

Still #1

Pop quiz: what's the leading killer of children under five? Before I answer, some background: my impression is that many if not most public health students and professionals don't really get politics. Specifically, they don't get how an issue being unsexy or just politically boring can result in lousy public policy. I was discussing this shortcoming recently over dinner in Addis with someone who used to work in public health but wasn't formally trained in it. I observed, and they concurred, that students who go to public health schools (or at least to Hopkins, where this shortcoming may be more pronounced) are mostly there to get technical training so that they can work within the public health industry, and that more politically astute students probably go for some other sort of graduate training rather than concentrating on epidemiology or the like.

The end result is that you get cadres of folks with lots of knowledge about relative disease burden and how to implement disease control programs, but who don't really get why that knowledge isn't acted upon. On the other hand, a lot of the more politically savvy folks who are in a position to, say, set the relative priority of diseases in global health programming may not know much about the diseases themselves. Or, maybe more likely, they do the best job they can to get the most money possible for programs that are both good for public health and politically popular. But if not all diseases are equally "popular," this can result in skewed policy priorities.

Now, the answer to that pop quiz: the leading killer of kids under 5 is.... [drumroll]...  pneumonia!

If you already knew the answer to that question, I bet you either a) have public health training, or b) learned it due to recent, concerted efforts to raise pneumonia's public profile. On this blog the former is probably true (after all I have a post category called "methodological quibbles"), but today I want to highlight the latter efforts.

To date, most of the political class and policymakers get the pop quiz wrong, and badly so. At Hopkins' school of public health I took and enjoyed Orin Levine's vaccine policy class. (Incidentally, Orin just started a new gig with the Gates Foundation -- congrats!) In that class and elsewhere I've heard Orin tell the story of quizzing folks on Capitol Hill and elsewhere in DC about the top three causes of death for children under five and time and again getting the answer "AIDS, TB and malaria."

Those three diseases likely pop to mind because of the Global Fund, and because a lot of US funding for global health has been directed at them. And, to be fair, they're huge public health problems, and the metric of under-five mortality isn't where AIDS hits hardest. But the real answer is pneumonia, diarrhea, and malnutrition (or malaria for #3 -- it depends in part on whether you count malnutrition as a separate cause or as a contributor to other causes). The end result of this lack of awareness of pneumonia -- and the prior lack of a domestic lobby for it -- is that it gets underfunded in US global health efforts.

So, how to improve pneumonia's profile? Today, November 12th, is the 4th annual World Pneumonia Day, and I think that's a great start. I'm not normally one to celebrate every national or international "Day" for this or that cause, but for the aforementioned reasons I think this one is extremely important. You can follow the #WPD2012 hashtag on Twitter, or find other ways to participate on WPD's act page. While they do encourage donations to the GAVI Alliance, you'll notice that most of the actions are centered around raising awareness. I think that makes a lot of sense. In fact, just by reading this blog post you've already participated -- though of course I hope you'll do more.

I think politically-savvy efforts like World Pneumonia Day are especially important because they bridge a gap between the technical and policy experts. Precisely because so many people on both sides (the somewhat-false-but-still-helpful dichotomy of public health technical experts vs. political operatives) mostly interact with like-minded folks, we badly need campaigns like this to popularize simple facts within policy circles.

If your reaction to this post -- and to another day dedicated to a good cause -- is to feel a bit jaded, please recognize that you and your friends are exactly the sorts of people the World Pneumonia Day organizers are hoping to reach. At the very least, mention pneumonia today on Twitter or Facebook, or with your policy friends the next time health comes up.

---

Full disclosure: while at Hopkins I did a (very small) bit of paid work for IVAC, one of the WPD organizers, re: social media strategies for World Pneumonia Day, but I'm no longer formally involved. 

A misuse of life expectancy

Jared Diamond is going back and forth with Acemoglu and Robinson over his review of their new book, Why Nations Fail. The exchange is interesting in and of itself, but I wanted to highlight one passage from Diamond's response:

The first point of their four-point letter is that tropical medicine and agricultural science aren’t major factors shaping national differences in prosperity. But the reasons why those are indeed major factors are obvious and well known. Tropical diseases cause a skilled worker, who completes professional training by age thirty, to look forward to, on the average, just ten years of economic productivity in Zambia before dying at an average life span of around forty, but to be economically productive for thirty-five years until retiring at age sixty-five in the US, Europe, and Japan (average life span around eighty). Even while they are still alive, workers in the tropics are often sick and unable to work. Women in the tropics face big obstacles in entering the workforce, because of having to care for their sick babies, or being pregnant with or nursing babies to replace previous babies likely to die or already dead. That’s why economists other than Acemoglu and Robinson do find a significant effect of geographic factors on prosperity today, after properly controlling for the effect of institutions.

I've added the bolding to highlight an interpretation of what life expectancy means that is wrong, but all too common.

It's analogous to something you may have heard about ancient Rome: since life expectancy was somewhere in the 30s, the Romans who lived to be 40 or 50 or 60 were incredibly rare and extraordinary. The problem is that life expectancy -- by which we typically mean life expectancy at birth -- is heavily skewed by infant mortality, or deaths under one year of age. Once you get to age five you're generally out of the woods -- compared to the super-high mortality rates common for infants (less than one year old) and children (less than five years old). While it's true that there were fewer old folks in ancient Roman society, or -- to use Diamond's example -- modern Zambian society, the difference isn't nearly as pronounced as you might think given the differences in life expectancy.

Does this matter? And if so, why? One area where it's clearly important is Diamond's usage in the passage above: examining the impact of changes in life expectancy on economic productivity. Despite the life expectancy at birth of 38 years, a Zambian male who reaches the age of thirty does not just have eight years of life expectancy left -- it's actually 23 years!

Here it's helpful to look at life tables, which show mortality and life expectancy at different intervals throughout the lifespan. This WHO paper by Alan Lopez et al. (PDF) examining mortality between 1990 and 1999 in 191 countries provides some nice data: page 253 is a life table for Zambia in 1999. We see that males have a life expectancy at birth of just 38.01 years, versus 38.96 for females (this was one of the lowest in the world at that time). If you look at that single number you might conclude, like Diamond, that a 30-year-old worker only has ~10 years of life left. But the life expectancy for those males remaining alive at age 30 (64.2% of the original birth cohort remains alive at this age) is actually 22.65 years. Similarly, the 18% of Zambians who reach age 65, retirement age in the US, can expect to live an additional 11.8 years, despite already having lived 27 years past the life expectancy at birth.
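For anyone who hasn't worked with one, here is a minimal sketch of how remaining life expectancy is read out of a life table. The survivorship numbers below are invented round figures (not the Lopez et al. estimates), but the pattern -- a low life expectancy at birth alongside decades of remaining life at age 30 -- is the same.

```python
# Abridged life table with invented survivorship (per 100,000 births);
# not the actual Zambia figures from Lopez et al.
ages      = [0, 1, 5, 15, 30, 45, 60, 75]
survivors = [100_000, 90_000, 82_000, 80_000, 64_000, 45_000, 28_000, 10_000]
open_interval_years = 8   # assumed mean years lived after age 75

# Person-years lived in each age interval, approximated as the average number
# alive times the interval width (ignoring the usual a_x refinements).
person_years = [
    (ages[i + 1] - ages[i]) * (survivors[i] + survivors[i + 1]) / 2
    for i in range(len(ages) - 1)
]
person_years.append(survivors[-1] * open_interval_years)

def remaining_life_expectancy(age):
    i = ages.index(age)
    return sum(person_years[i:]) / survivors[i]   # e_x = T_x / l_x

print(f"life expectancy at birth: {remaining_life_expectancy(0):.1f} years")
print(f"remaining at age 30:      {remaining_life_expectancy(30):.1f} years")
# Heavy infant and child mortality drags down e_0, but a 30-year-old who has
# survived that far still has decades of expected life ahead.
```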

These numbers are still, of course, dreadful -- there's room for decreasing mortality at all stages of the lifespan. Diamond's correct in the sense that low life expectancy results in a much smaller economically active population. But he's incorrect when he estimates much more drastic reductions in the economically productive years that workers can expect once they reach their economically productive 20s, 30s, and 40s.

----

[Some notes: 1. The figures might be different if you limit it to "skilled workers" who aren't fully trained until age 30, as Diamond does; 2. I've also assumed that Diamond is working from general life expectancy, which was around 40 years, rather than from a particular study that showed 10 years of life expectancy at age 30 for some subset of skilled workers, possibly due to high HIV prevalence -- that seems possible but unlikely; 3. In these Zambia estimates, about 10% of males die before reaching one year of age, and over 17% before reaching five years of age. By contrast, between the ages of 15-20 only 0.6% of surviving males die, and you don't see mortality rates higher than the under-5 ones until above age 85!; and 4. Zambia is an unusual case because much of the poor life expectancy there is due to very high HIV/AIDS prevalence and mortality -- which actually does affect adult mortality rates and not just infant and child mortality rates. Despite this caveat, it's still true that Diamond's interpretation is off.]

The great quant race

My Monday link round-up included this Big Think piece asking eight young economists about the future of their field. But, I wanted to highlight the response from Justin Wolfers:

Economics is in the midst of a massive and radical change.  It used to be that we had little data, and no computing power, so the role of economic theory was to “fill in” for where facts were missing.  Today, every interaction we have in our lives leaves behind a trail of data.  Whatever question you are interested in answering, the data to analyze it exists on someone’s hard drive, somewhere.  This background informs how I think about the future of economics.

Specifically, the tools of economics will continue to evolve and become more empirical.  Economic theory will become a tool we use to structure our investigation of the data.  Equally, economics is not the only social science engaged in this race: our friends in political science and sociology use similar tools; computer scientists are grappling with “big data” and machine learning; and statisticians are developing new tools.  Whichever field adapts best will win.  I think it will be economics.  And so economists will continue to broaden the substantive areas we study.  Since Gary Becker, we have been comfortable looking beyond the purely pecuniary domain, and I expect this trend towards cross-disciplinary work to continue.

I think it's broadly true that economics will become more empirical, and that this is a good thing, but I'm not convinced economics will "win" the race. This tracks somewhat with the thoughts from Marc Bellemare that I've linked to before: his post on "Methodological convergence in the social sciences" is about the rise of mathematical formalism in social sciences other than economics. This complements the rise of empirical methods, in the sense that while they are different developments, both are only possible because of the increasing mathematical, statistical, and coding competency of researchers in many fields. And I think the language of convergence is more likely to represent what will happen (and what is already happening), rather than the language of a "race."

We've already seen an increase in RCTs (developed in medicine and epidemiology) in economics and political science. The decades ahead will (hopefully) bring more routinely serious analysis of observational data in epidemiology and other fields -- serious in the sense of being more careful about causal inference -- and advanced statistical techniques and machine learning methods will become commonplace across all fields as researchers deal with massive, complex longitudinal datasets gleaned not just from surveys but increasingly from everyday data collection.

Economists have a head start in that their starting pool of talent is generally more mathematically competent than other social sciences' incoming PhD classes. But, switching back to the "race" terminology, economics will only "win" if -- as Wolfers speculates will happen -- it can leverage theory as a tool for structuring investigation. My rough impression is that economic theory does play this role, sometimes, but it has also held empirical investigation in economics back at times, perhaps through publication bias against empirical results that don't fit the theory (see the minimum wage literature), and possibly more broadly through a general closure of routes of investigation that would not occur to someone already trained in economic theory.

Regardless, I get the impression that if you want to be a cutting-edge researcher in any social science you should be beefing up not only your mathematical and statistical training, but also your coding practice.

Update: Stevenson and Wolfers expand their thoughts in this excellent Bloomberg piece. And more at Freakonomics here.

Stats lingo in econometrics and epidemiology

Last week I came across an article I wish I'd found a year or two ago: "Glossary for econometrics and epidemiology" (PDF from JSTOR, ungated version here) by Gunasekara, Carter, and Blakely. Statistics is to some extent a common language for the social sciences, but there are also big variations in language that can cause problems when students and scholars try to read literature from outside their fields. I first learned epidemiology and biostatistics at a school of public health, and now this year I'm taking econometrics from an economist, as well as other classes that draw heavily on the economics literature.

Friends in my economics-centered program have asked me "what's biostatistics?" Likewise, public health friends have asked "what's econometrics?" (or just commented that it's a silly name). In reality both fields use many of the same techniques with different language and emphases. The Gunasekara, Carter, and Blakely glossary linked above covers the following terms, amongst others:

  • confounding
  • endogeneity and endogenous variables
  • exogenous variables
  • simultaneity, social drift, social selection, and reverse causality
  • instrumental variables
  • intermediate or mediating variables
  • multicollinearity
  • omitted variable bias
  • unobserved heterogeneity

If you've only studied econometrics or biostatistics, chances are at least some of these terms will be new to you, even though most have roughly equivalent forms in the other field.

Outside of differing language, another difference is in the frequency with which techniques are used. For instance, instrumental variables seem (to me) to be under-used in public health / epidemiology applications. I took four terms of biostatistics at Johns Hopkins and don't recall instrumental variables being mentioned even once! On the other hand, economists just recently discovered randomized trials. (Now they're more widely used.)
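For readers coming from the epi side, here's a minimal sketch of what an instrumental variable buys you, using simulated data with an unmeasured confounder: the naive regression is biased, while the two-stage least squares (IV) estimate recovers the true effect. (All numbers are invented, and the instrument is valid by construction.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

u = rng.normal(size=n)                      # unmeasured confounder
z = rng.normal(size=n)                      # instrument: affects x, not y directly
x = 0.8 * z + u + rng.normal(size=n)        # exposure
y = 0.5 * x + u + rng.normal(size=n)        # outcome; true causal effect = 0.5

def slope(outcome, predictor):
    """Simple regression slope of outcome on predictor."""
    return np.cov(outcome, predictor)[0, 1] / np.var(predictor, ddof=1)

naive = slope(y, x)          # biased upward because u drives both x and y

# Two-stage least squares by hand: predict x from z, then regress y on x-hat.
x_hat = slope(x, z) * z
iv = slope(y, x_hat)         # equivalently cov(y, z) / cov(x, z)

print(f"naive OLS slope: {naive:.2f}  (biased)")
print(f"IV / 2SLS slope: {iv:.2f}  (close to the true 0.5)")
```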

But even within a given statistical technique there are important differences. You might think that all social scientists doing, say, multiple linear regression to analyze observational data or critiquing the results of randomized controlled trials would use the same language. In my experience they not only use different vocabulary for the same things, they also emphasize different things. About a third to half of my epidemiology coursework involved establishing causal models (often with directed acyclic graphs) in order to understand which confounding variables to control for in a regression, whereas in econometrics we (very!) briefly discussed how to decide which covariates might cause omitted variable bias. These discussions were basically about the same thing, but they differed in terms of language and in terms of emphasis.

I think an understanding of how and why researchers from different fields talk about things differently helps you to understand the sociology and motivations of each field. This is all related to what Marc Bellemare calls the ongoing "methodological convergence in the social sciences." As research becomes more interdisciplinary -- and as any applications of research are much more likely to require interdisciplinary knowledge -- understanding how researchers trained in different academic schools think and talk will become increasingly important.

Princeton epidemiology: norovirus edition

Princeton is in the midst of an outbreak of norovirus! What's norovirus, you ask? Well, it looks like this:

Not helpful? Here's the CDC fact sheet:

Noroviruses (genus Norovirus, family Caliciviridae) are a group of related, single-stranded RNA, non-enveloped viruses that cause acute gastroenteritis in humans. The most common symptoms of acute gastroenteritis are diarrhea, vomiting, and stomach pain. Norovirus is the official genus name for the group of viruses previously described as “Norwalk-like viruses” (NLV).

Noroviruses spread from person to person, through contaminated food or water, and by touching contaminated surfaces. Norovirus is recognized as the leading cause of foodborne-disease outbreaks in the United States. Outbreaks can happen to people of all ages and in a variety of settings. Read more about it using the following links.

My shorter translation: "Got an epidemic of nasty stomach problems in an institutional setting (like a nursing home or university)? It's probably norovirus. Wash your hands a lot."

The all-campus email I received earlier today is included below. Think of this as a real-time, less-sexy version of the CDC's MMWR. Emphasis added:

To: Princeton University community

Date: Feb. 6, 2012

From: University Health Services and Environmental Health and Safety

Re: Update: Campus Hygiene Advisory

In light of continuing cases of gastroenteritis on campus, University Health Services and the Office of Environmental Health and Safety want to remind faculty, staff and students about increased attentiveness to personal hygienic practices.

A few of the recent cases have tested positive for norovirus, which is a common virus that causes gastroenteritis.  While it is usually not serious and most people recover in a few days, gastroenteritis can cause periods of severe sickness and can be highly contagious. You can prevent the spread of illness by practicing good hygiene, such as frequent hand washing, and limiting contact with others if sick.

Gastroenteritis includes symptoms of diarrhea, vomiting and abdominal cramps. Please take the following steps if you are experiencing symptoms:

--Ill students should refrain from close contact with others and contact University Health Services at 609-258-3129 or visit McCosh Health Center on Washington Road. Ill employees are encouraged to stay home and contact their personal physicians for medical assistance.

--Wash your hands frequently and carefully with soap and warm water, and always after using the bathroom.

--Refrain from close contact with others until symptoms have subsided, or as advised by medical staff.

--Do not handle or prepare food for others while experiencing symptoms and for two-to-three days after symptoms subside.

--Increase your intake of fluids, such as tea, water, sports drinks and soup broth, to prevent dehydration.

--Avoid sharing towels, beverage bottles, food, and eating utensils and containers.

--Clean and disinfect soiled surfaces with bleach-based cleaning products. Students and others on campus who need assistance with cleaning and disinfecting soiled surfaces may call Building Services at 609-258-8000. Building Services also will be increasing disinfection of frequent touch points, such as doorknobs and restroom fixtures.

--Clean all soiled clothes and linen. Soiled linen should be washed and dried in the hottest temperature recommended by the linen manufacturer.

In the past week, University Health Services has seen more than the usual number of students experiencing symptoms of acute gastroenteritis. The New Jersey Department of Health and Senior Services tested samples from a few of the cases, which were later found positive for norovirus. Because norovirus has been identified as the chief cause of gastroenteritis currently on campus, further testing is not planned at this time, but the University is urging community members to take steps to prevent the further spread of illness.

Noroviruses are the most common causes of gastroenteritis in the United States, according to the Center for Disease Control and Prevention. Anyone can become infected with gastroenteritis and presence of the illness may sometimes increase during winter months. While most people get better in a few days, gastroenteritis can be serious in young children, the elderly and people with other health conditions. Frequent hand washing with soap and warm water is your best defense against most communicable disease.

I bolded a few passages because I think the very last sentence (wash your hands) is actually the most important single part of the message and is much clearer than encouraging someone to increase "attentiveness to personal hygienic practices." But still a good message overall. At least one friend has come down with this and it sounds unpleasant...

Testing treatments in policy

The students at the Woodrow Wilson School have a group blog on public policy called 14 Points. I've been helping promote the blog for a while but just got around to writing my first submission this week. It's titled "Testing Treatments: Building a culture of evidence in public policy". Here's an excerpt:

Similar lessons can be gleaned from the history of surgical response to breast cancer. In The Emperor of All Maladies (2010), a new history of cancer, oncologist Siddhartha Mukherjee chronicles the history of such failed interventions as the radical mastectomy. Over a period of decades this brutal procedure – removing the breasts, lymph nodes, and much of the chest muscles – became the tool of choice for surgeons treating breast cancer. In the 1970s rigorous trials comparing radical mastectomy to more limited procedures showed that this terribly disfiguring procedure did not in fact help patients live longer at all. Some surgeons refused to believe the evidence – to believe it would have required them to acknowledge the harm they had done. But eventually the radical mastectomy fell from favor; today it is quite rare. Many similar stories are included in a free e-book titled Testing Treatments (2011).

As a society we’ve come to accept that medical devices should be tested by the most rigorous and neutral means possible, because the stakes are life and death for all of us. Thousands of people faced with deadly illnesses volunteer for clinical trials every year. Some of them survive while others do not, but as a society we are better off when we know what actually works. For every downside, like the delay of a promising treatment until evidence is gathered properly, there is an upside – something we otherwise would have thought is a good idea is revealed not to be helpful at all.

Under normal circumstances most new drugs are weeded out as they face a gauntlet of tests for safety and efficacy required before FDA licensure. The stories of the humanitarian-exemption stent and the radical mastectomy are different because these procedures became more widely used before there was rigorous evidence that they helped, though in both cases there were plenty of anecdotes, case studies, and small or non-controlled studies that made it look like they did. This haphazard, post-hoc testing is analogous to how policy in many other fields, from welfare to education, is developed. Many public policy decisions have considerable impacts on our livelihoods, education, and health. Why are we not similarly outraged by poor standards of evidence that leads to poor outcomes in other fields?

Read the rest at 14 Points, and check out the posts by my classmates.

Genesis

I highly recommend Patient Zero, the latest episode of the podcast RadioLab. It covers Typhoid Mary, the origin of HIV, and the diffusion of ideas. Evocative as always, but what I like the most is how they add new information to stories you think you know. For one, you really feel sorry for Mary. And I've read quite a bit on the origin of HIV (a great way to learn more about phylogenetics!) but RadioLab takes it back even further and highlights some research I hadn't seen. Related: I haven't read it yet, but Tyler Cowen really likes Jacques Pepin's new book, The Origin of AIDS -- more happy reading for Christmas break.

Discarding efficacy?

Andrew Grove, former CEO of Intel, writes an editorial in Science:

We might conceptualize an “e-trial” system along similar lines. Drug safety would continue to be ensured by the U.S. Food and Drug Administration. While safety-focused Phase I trials would continue under their jurisdiction, establishing efficacy would no longer be under their purview. Once safety is proven, patients could access the medicine in question through qualified physicians. Patients' responses to a drug would be stored in a database, along with their medical histories. Patient identity would be protected by biometric identifiers, and the database would be open to qualified medical researchers as a “commons.” The response of any patient or group of patients to a drug or treatment would be tracked and compared to those of others in the database who were treated in a different manner or not at all.

Alex Tabarrok of Marginal Revolution (who is a big advocate for FDA reform, running this site) really likes the idea. I hate it. While the current system has some problems, Grove's system would be much, much worse. The biggest problem is that we would have no good data about whether a drug is truly efficacious, because all of the results in the database would be confounded by selection bias. Getting a large sample size and having subgroups tells you nothing about why someone got the treatment in the first place.
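Here's a toy version of that worry (all numbers invented): if sicker patients are more likely to receive the drug, a naive comparison in the database makes an effective drug look harmful. This is confounding by indication, and no amount of sample size fixes it.

```python
import random

random.seed(0)

treated_deaths = treated_n = untreated_deaths = untreated_n = 0
for _ in range(200_000):
    severe = random.random() < 0.5                          # unmeasured severity
    gets_drug = random.random() < (0.8 if severe else 0.2)  # sicker -> more likely treated
    risk = 0.30 if severe else 0.05                         # baseline risk of death
    if gets_drug:
        risk *= 0.5                                         # the drug truly halves risk
    died = random.random() < risk
    if gets_drug:
        treated_n += 1
        treated_deaths += died
    else:
        untreated_n += 1
        untreated_deaths += died

print(f"death rate, treated:   {treated_deaths / treated_n:.1%}")     # ~12.5%
print(f"death rate, untreated: {untreated_deaths / untreated_n:.1%}") # ~10.0%
# The drug halves everyone's risk, yet the treated group looks worse,
# because treatment went preferentially to the sicker patients.
```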

Would physicians pay attention to peer-reviewed articles and reviews identifying the best treatments for specific groups? Or would they just run their own analyses? I think there would be a lot of the latter, which is scary since many clinicians can’t even define selection bias or properly interpret statistical tests. The current system has limitations, but Grove's idea would move us even further from any sort of evidence-based medicine.

Other commenters at Marginal Revolution rightly note that it's difficult to separate safety from efficacy, because recommending a drug is always based on a balance of risks and benefits. Debilitating nausea or strong likelihood of heart attack would never be OK in a drug for mild headaches, but if it cures cancer the standards are (and should be) different.

Derek Lowe, a fellow Arkansan who writes the excellent chemistry blog In The Pipeline, has more extensive (and informed) thoughts here.

Update (1/5/2012): More criticism, summarized by Derek Lowe.

Happy Hep Day

Today is the first ever WHO-sponsored World Hepatitis Day:

These successes and challenges are amplified because viral hepatitis is not a single disease. Hepatitis is caused by at least five viruses—including two spread by water or food contaminated with feces (hepatitis A and E) and three transmitted by blood and body fluids (hepatitis B, D, and C) during childbirth (from infected mother to child); through injecting drug use, needle sticks, or transfusions; or through sexual contact. Hepatitis B and C infections can cause cirrhosis of the liver and lead to liver cancer.

Today, more than 500 million persons worldwide are living with viral hepatitis and do not have adequate access to care—increasing their risk for premature death from liver cirrhosis and liver cancer. Each year, more than 1 million people die from viral hepatitis and millions of new infections add to this global burden of disease and death.

It is not, however, the first ever World Hepatitis Day – it’s just the first one recognized by WHO. Many of these international attention-raising events grow out of smaller things which pick up steam and eventually get official recognition from international organizations. It turns out that World Hepatitis Day has been going on for several years.

On a related note, did you know that Hep B is a cause of discrimination in China, and that there is a burgeoning carriers’ rights movement? I didn’t either until I started browsing the impressively worked out Wikipedia Hepatitis B page (some epidemiologist had a field day) and found that there’s an entire page for Hep B in China. An excerpt:

Discrimination

Hepatitis B sufferers in China frequently face discrimination in all aspects of life and work. For example, many Chinese employers and universities refuse to accept anyone who tests positive. Some kindergartens refuse admission to children who are carriers of the virus. The hepatitis problem is a reflection of the vast developmental gap between China's rural and urban areas. The largest problem facing Chinese people infected with HBV is that illegal blood testing is required by most employers in China.[17] Following an incident involving a Hepatitis B carrier's killing of an employer and other calls against discriminatory employment practices, China's ministries of health and personnel announced that Hepatitis B carriers must not be discriminated against when seeking employment and education.[18] While the laws exist to protect the privacy of employees and job seekers, many believe that they are not enforced.

"In the Hepatitis B Camp"

"In the Hepatitis B Camp" is a popular website for hepatitis B carriers' human rights in China. Its online forum is the world's biggest such forum with over 300,000 members. The website was first shut down by the Chinese government in November 2007. Lu Jun, the head of the rights group, managed to reopen the website by moving it to an overseas server, but the authorities in May 2008 began blocking access to the website within China, only 10 days after government officials participated in an event for World Hepatitis Day at the Great Wall of China. An official had told the head of the rights group, Lu Jun, at the time that the closure was due to the Beijing Olympic Games.[19]

(h/t to Tom)

Football epidemiology

In an attempt to prove Cowen's First Law -- "there is literature on everything" -- I enjoy highlighting unusual epidemiological studies (see tornado epidemiology, for one). These studies may seem a bit odd until you start thinking like an epidemiologist: measurement is the first step to control. The latest issue of Pediatrics has a new study by Thomas et al. on the "Epidemiology of Sudden Death in Young, Competitive Athletes Due to Blunt Trauma." Some of the methods seem a bit sketchy, but that's kind of the authors' point as they note,

"without a systematic and mandatory reporting system for sudden cardiac deaths in young competitive athletes, the true absolute number of these events that occur in the United States cannot be known."

While this study is mostly concerned with the sudden deaths not caused by cardiac events, the same principle holds true: if anything, the problem is under-reported.

Thomas et al. use 30 years of data from the "US National Registry of Sudden Death in Young Athletes," looking at 1980–2009. Deaths in the database came from a variety of sources including LexisNexis searches, news media accounts assembled by other commercial search services, web searches, reports from the US Consumer Product Safety Commission and the National Center for Catastrophic Sports Injury Research, and direct reports from schools and parents.

Of the total deaths included in the study, 261 were caused by trauma -- around 9 deaths per year over the 30-year period. 57% of the 261 deaths were in a single sport: football. Notably, there were about four times as many deaths due to cardiac causes as to trauma.
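For what it's worth, the arithmetic behind those summary figures is simple enough to check. A quick sketch using only the numbers quoted above (the "four times" multiplier is the authors' rough comparison, not an exact count):

```python
# Back-of-the-envelope check on the Thomas et al. figures quoted above.
trauma_deaths = 261                    # blunt-trauma deaths over the 30-year registry period
years = 30
print(trauma_deaths / years)           # ~8.7, i.e. around 9 deaths per year
print(round(0.57 * trauma_deaths))     # ~149 of the trauma deaths were in football
print(4 * trauma_deaths)               # "four times as many" cardiac deaths: roughly 1,044
```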

In football they find defensive positions have more deaths than offensive positions, "presumably because such players commonly initiate and deliver high-velocity blows while moving toward the point of contact." While the majority of deaths were in defensive players, the single most represented position was running backs.

Why the focus on deaths in young athletes? The authors note by comparison that lightning causes about 50 deaths per year, and motor vehicle injuries cause 12,000 deaths per year. (Aside: You can tell the authors don't work in injury prevention since they say "motor vehicle accident" rather than "injury" -- injury prevention researchers prefer the latter terminology because they believe "accidental" deaths sound unavoidable.) The authors explain their own focus by noting that these sudden deaths attract "considerable media attention, with great importance to the physician and lay communities, particularly given the youthful age and apparent good health of the victims."

In related news: "The Ivy League [announced that...] in an effort to minimize head injuries among its football players, it will sharply reduce the number of allowable full-contact practices teams can hold."

Measles is big this year

The CDC just put out a Health Advisory describing measles' big comeback. Endemic transmission in the US has been interrupted, but importations keep happening when unvaccinated people travel or come into contact with travelers:

The United States is experiencing a high number of reported measles cases in 2011, many of which were acquired during international travel. From January 1 through June 17 this year, 156 confirmed cases of measles were reported to CDC. This is the highest reported number since 1996. Most cases (136) were associated with importations from measles-endemic countries or countries where large outbreaks are occurring. The imported cases involved unvaccinated U.S. residents who recently traveled abroad, unvaccinated visitors to the United States, and people linked to these imported cases. To date, 12 outbreaks (3 or more linked cases) have occurred, accounting for 47% of the 156 cases. Of the total case-patients, 133 (85%) were unvaccinated or had undocumented vaccination status. Of the 139 case-patients who were U.S. residents, 86 (62%) were unvaccinated, 30 (22%) had undocumented vaccination status, 11 (8%) had received 1 dose of measles-mumps-rubella (MMR) vaccine, 11 (8%) had received 2 doses, and 1 (1%) had received 3 (documented) doses.

Measles was declared eliminated in the United States in 2000 due to our high 2-dose measles vaccine coverage, but it is still endemic or large outbreaks are occurring in countries in Europe (including France, the United Kingdom, Spain, and Switzerland), Africa, and Asia (including India). The increase in measles cases and outbreaks in the United States this year underscores the ongoing risk of importations, the need for high measles vaccine coverage, and the importance of prompt and appropriate public health response to measles cases and outbreaks.

Measles is a highly contagious, acute viral illness that is transmitted by contact with an infected person through coughing and sneezing. After an infected person leaves a location, the virus remains contagious for up to 2 hours on surfaces and in the air. Measles can cause severe health complications, including pneumonia, encephalitis, and death.

The message is simple: parents should vaccinate their children, because not doing so has serious health consequences not only for those children but also for those who cannot be vaccinated because they are too young or have medical contraindications. If everyone who believed (wrongly) that vaccines are unsafe were to move to one country (let's call it Unvaccinstan), the choice would have fewer ethical pitfalls: you make a bad choice, and your kids might get sick. But as it is, there are many people who simply can't get vaccinated -- kids with cancer, for example, or infants in the window when maternal antibodies are no longer very protective against measles but still interfere with the vaccine -- so the choice has much broader societal impact. I imagine that many of the parents who choose not to vaccinate -- who are often of higher educational status and more liberal politics -- view themselves as virtuous; the reality is sadly the opposite.
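One way to see why coverage has to be so high for measles in particular: with a basic reproduction number (R0) commonly cited in the neighborhood of 12–18, the simple herd-immunity threshold 1 - 1/R0 puts the required share of immune people well above 90%. A minimal sketch, using that textbook range rather than any figure from the CDC advisory:

```python
# Herd-immunity threshold for R0 values often cited for measles.
# The 12-18 range is a commonly quoted textbook figure, not from the advisory above.
for r0 in (12, 15, 18):
    threshold = 1 - 1 / r0
    print(f"R0 = {r0:2d} -> roughly {threshold:.0%} of people need to be immune")
```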

Lead poisoning in China

It's a huge problem -- the Times calls it a Hidden Scourge:

Here, Chinese leaders have acknowledged that lead contamination is a grave issue and have raised the priority of reducing heavy-metal pollution in the government’s latest five-year plan, presented in March. But despite efforts to step up enforcement, including suspending production last month at a number of battery factories, the government’s response remains faltering.

At a meeting last month of China’s State Council, after yet another disclosure of mass poisoning, Prime Minister Wen Jiabao scolded Environmental Minister Zhou Shengxian for the lack of progress, according to an individual with high-level government ties who spoke on the condition of anonymity.

The government has not ordered a nationwide survey of children’s blood lead levels, so the number of children who are at risk is purely a matter of guesswork. Mass poisonings like that at the Haijiu factory typically come to light only after suspicious parents seek hospital tests, then alert neighbors or co-workers to the alarming results.

It's also relevant to my current work, which I hope to write about more soon.

Tornado epidemiology

The news out of Joplin, Missouri is heartbreaking, and it comes so quickly on the heels of the tornadoes that hit Tuscaloosa, Alabama. Central Arkansas, where I grew up, gets hit by tornadoes every spring, so I have plenty of memories of taking shelter in response to warnings. College nights with social plans ruined when we had to hunker down in an interior hallway. Dark, roiling clouds circling and the spooky calm when the rain and hail stop but the winds stay strong. Racing home from work to get to my house and its basement -- a rarity in the South -- before a particularly ominous storm hit. Neighboring communities were sometimes hit more directly by storms, and Harding students often participated in clean-up and recovery efforts, but my town was spared direct hits by the heaviest tornadoes.

So what does epidemiology have to say about tornadoes? Their paths aren't exactly random, in the sense that some areas are more prone to storms that produce tornadoes. Growing up I knew where to take shelter: interior hallways away from windows if your house didn't have a basement or a dedicated storm shelter. I also knew that mobile homes were a particularly bad place to be, and that the carnage was always worst when a tornado happened to hit a mobile home lot.

But there is some interesting research out there that tells us more than you might think. Obviously and thankfully you can't do a randomized trial assigning some communities to get storms and others not, so the evidence of how to prevent tornado-related injury and death is mostly observational. What do we know? I'm not an expert on this but I did a quick, non-systematic scan and here's what I found:

First, the annual tornado mortality rate has actually gone down quite a lot over the last few decades. That says nothing about the frequency and intensity of tornadoes themselves, which is a matter for meteorologists to research. The actual number of deaths resulting from tornadoes would probably be a function of the number of people in the US, where they live and whether those areas are prone to tornadoes, the frequency and intensity of the tornadoes, and risk factors for people in the affected area once the tornado hits.

This NOAA site has the following graph of tornado mortality, where the vertical axis is tornado deaths per million people in the US (on a log scale) and the horizontal axis covers 1875–2008.
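Since the graph is plotted as deaths per million rather than absolute deaths, it's worth keeping the conversion in mind: the same per-capita rate implies very different absolute tolls as the population grows. A small sketch (the rate here is hypothetical; the populations are rough US census figures for 1900 and 2000):

```python
# Convert a tornado mortality rate (deaths per million people per year) into an
# absolute annual death count, and back. Populations are rough US census figures;
# the example rate of 1.8 per million is hypothetical.
def deaths_from_rate(rate_per_million: float, population: int) -> float:
    return rate_per_million * population / 1_000_000

def rate_from_deaths(deaths: int, population: int) -> float:
    return deaths / population * 1_000_000

us_pop_1900 = 76_000_000
us_pop_2000 = 281_000_000

print(deaths_from_rate(1.8, us_pop_1900))   # ~137 deaths per year
print(deaths_from_rate(1.8, us_pop_2000))   # ~506 deaths per year, same per-capita rate
```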

Second, many of the risk factors for tornado injury and death are intuitive and suggest possible interventions to minimize risk in tornado-prone areas. Following tornadoes in North and South Carolina in 1984, Eidson et al. surveyed people who were hospitalized and family members of people who were killed, along with uninjured persons who were present when the surveyed individuals were hurt. The main types of injury were deep cuts, concussions, unconsciousness and broken bones. Risk factors included living in mobile homes, "advanced age (60+ years), no physical protection (not having been covered with a blanket or other object), having been struck by broken window glass or other falling objects, home lifted off its foundation, collapsed ceiling or floor, or walls blown away." Some of those patterns point to potential interventions -- better shelters for mobile home residents, alerts targeted at older residents, education about covering up with a blanket and sheltering in interior hallways, to say nothing of building codes that make structures more survivable.

Third, some things are less clear, like whether it's safe to be in a car during a tornado. Daley et al. did a case-control study of tornado injuries and deaths in the aftermath of tornadoes in Oklahoma in 1999. They found a higher risk of tornado death for those in mobile homes (odds ratio of 35.3, 95% CI 7.8 - 175.6) or outdoors (odds ratio of 141.2, 95% CI 15.9 - a whopping 6,379.8) compared to those in houses. They found no difference in risk of death, severe injury, or minor injury between people in cars and those in houses. And they found that risk of death, severe injury, or minor injury was actually lower among those "fleeing their homes in motor vehicles than among those remaining." That's surprising to me, and contrary to many of the tornado safety warnings I heard from meteorologists and family growing up. I wonder if this particular study goes against the majority of findings, or whether there is a consensus based in data at all.
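If you're not used to reading case-control results, those odds ratios and their very wide confidence intervals come straight from a 2x2 table. Here's a minimal sketch of the standard calculation (odds ratio plus a Wald 95% confidence interval); the counts are invented for illustration and are not the Daley et al. data:

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR), Wald method
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: deaths/severe injuries ("cases") vs. uninjured people
# ("controls"), split by whether the person was in a mobile home.
or_, lo, hi = odds_ratio_with_ci(a=12, b=5, c=20, d=120)
print(f"OR = {or_:.1f}, 95% CI {lo:.1f} - {hi:.1f}")   # OR = 14.4, 95% CI 4.6 - 45.3
```

The whopping upper bound on the study's outdoor estimate presumably reflects very small counts in one cell of the underlying table, which is exactly what blows up a Wald interval like this.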

Fourth, our knowledge of tornadoes can be messy. One demographic approach to tornado risk factors (Donner 2007) is to look for correlations of tornado fatalities and injuries with rural population, population density, household size, racial minorities, deprivation/poverty, tornado watches and warnings, and mobile homes. Donner noted that "Findings suggest a strong relationship between the size of a tornado path and both fatalities and injuries, whereas other measures related to technology, population, and organization produce significant yet mixed results."

That's just a sampling of the literature on tornado epidemiology. The studies are interesting but relatively rare, at least from my initial perusal. That's probably because tornado deaths and injuries are relatively rare in the US. Still, the storms themselves are terrifying, and they often wreak havoc on a single community and thus generate more sympathy and news coverage than a more frequent -- and thus less extraordinary -- problem like car crashes.

Update: NYT has an interesting article about tornado preparedness, including some speculation on why the Joplin tornado was so bad.

Sentinel chickens

"In May 2000 Canadian Health authorities stationed cages of sentinel chickens along 2500 km (1550 miles) of the border with the United States in an effort to identify the presence of West Nile virus in susceptible animals before the disease was detected in humans in Canada. Ultimately, the sentinel chickens were key in detecting a new viral epidemic."

Source here. And then there are "Super Sentinel" Chickens...

Incentives?

From a lab assignment for my Professional Epidemiology Methods course:

...but part of this exercise is to remember that public health practice does not happen in a vacuum.  And if you do your job well, nothing happens and you may be blamed for interrupting daily life activities.  If you do not do your job well, people get sick or die--and you still get blamed.