Gates and Media Funding

You may or may not have heard of this controversy: the Gates Foundation -- a huge funding source in global health -- has been paying various media sources to ramp up their coverage of global health and development issues. It seems to me that various voices in global health have tended to respond to this as you might expect them to, based on their more general reactions to the Gates Foundation. If you like most of what Gates does, you probably see this as a boon, since global health and development (especially if you exclude disaster/aid stories) aren't the hottest issues in the media landscape. If you're skeptical of the typical Gates Foundation solutions (technological fixes, for example), then you might think this is more problematic.

I started off writing some lengthy thoughts on this, and realized Tom Paulson at Humanosphere has already said some of what I want to say. So I'll quote from him a bit, and then finish with a few more of my own thoughts. First, here is an interview Paulson did with Kate James, head of communications at the Gates Foundation. An excerpt:

Q: Why does the Gates Foundation fund media?

Kate James: It’s driven by our recognition of the changing media landscape. We’ve seen this big drop-off in the amount of coverage of global health and development issues. Even before that, there was a problem with a lack of quality, in-depth reporting on many of these issues so we don’t see this as being internally driven by any agenda on our part. We’re responding to a need.

Q: Isn’t there a risk that by paying media to do these stories the Gates Foundation’s agenda will be favored, drowning out the dissenting voices and critics of your agenda?

KJ: When we establish these partnerships, everyone is very clear that there is total editorial independence. How these organizations choose to cover issues is completely up to them.

The most recent wave of controversy seems to stem from Gates funding going to an ABC documentary on global health that featured clips of Bill and Melinda Gates, among other things. Paulson writes about that as well. Reacting to a segment on Guatemala, Paulson writes:

For example, many would argue that part of the reason for Guatemala’s problem with malnutrition and poverty stems from a long history of inequitable international trade policies and American political interference (as well as corporate influence) in Central America.

The Gates Foundation steers clear of such hot-button political issues and we’ll see if ABC News does as well. Another example of a potential “blind spot” is the Seattle philanthropy’s tendency to favor technological solutions — such as vaccines or fortified foods — as opposed to messier issues involving governance, industry and economics.

A few additional thoughts:

Would this fly in another industry? Can you imagine a Citibank-financed investigative series on the financial industry? That's probably a bad example for several reasons, including the Citibank-Gates comparison and the fact that the financial industry is not underreported. I'm having a hard time thinking of a comparable example: an industry that doesn't get much news coverage, where a big actor funded the media -- if you can think of an example, please let me know.

Obviously this induces a bias in the coverage. To say otherwise is pretty much indefensible to me. Think of it this way: if Noam Chomsky had a multi-billion dollar foundation that gave grants to the media to increase news coverage of international development, but did not have specific editorial control, would that not still bias the resulting coverage? Would an organization a) get those grants if it were not already likely to cover the subject with at least a gentle, overall bias towards Chomsky's point of view, or b) continue to get grants for new projects if it widely ridiculed Chomsky's approach? It doesn't have to be Chomsky -- take your pick of someone with clearly identifiable positions on international issues, and you get the same picture. Do the communications staffers at the Gates Foundation need to personally review the story lines for this sort of bias to creep in? Of course not.

Which matters more: the bias or the increased coverage? For now I lean towards increased coverage, but this is up for debate. It's really important that the funding be disclosed (as I understand it has been). It would also be nice if there were enough public demand for coverage of international development that the media covered it in all its complexity and difficulty and nuance without needing support from a foundation, but that's not the world we live in for now. And maybe the funded coverage will ultimately result in more discussion of the structural and systemic roots of international inequality, rather than just "quick fixes."

[Other thoughts on Gates and media funding by Paul Fortner, the Chronicle of Philanthropy, and (older) LA Times.]

Randomizing in the USA, ctd

[Update: There's quite a bit of new material on this controversy if you're interested. Here's a PDF of Seth Diamond's testimony in support of (and extensive description of) the evaluation at a recent hearing, along with letters of support from a number of social scientists and public health researchers. Also, here's a separate article on the City Council hearing at which Diamond testified, and an NPR story that basically rehashes the Times one. Michael Gechter argues that the testing is wrong because there isn't doubt about whether the program works, but, as noted in the comments there, he doesn't acknowledge that denial of service was already part of the program because it was underfunded.] A couple weeks ago I posted a link to this NYTimes article on a program of assistance for the homeless that's currently being evaluated by a randomized trial. The Poverty Action Lab blog had some discussion on the subject that you should check out too.

The short version is that New York City has a housing assistance program that is supposed to keep people from becoming homeless, but they never gave it a truly rigorous evaluation. It would have been better to evaluate it up front (before the full program was rolled out) but they didn't do that, and now they are. The policy isn't proven to work, and they don't have resources to give it to everyone anyway, so instead of using a waiting list (arguably a fair system) they're randomizing people into receiving the assistance or not, and then tracking whether they end up homeless. If that makes you a little uncomfortable, that's probably a good thing -- it's a sticky issue, and one that might too easily be brushed aside when working in a different culture. But I think on balance it's still a good idea to evaluate programs when we don't know if they actually do what they're supposed to do.

The thing I want to highlight for now is how the tone and presentation of an article shape your reactions to the issue being discussed. There's obviously an effect, but I thought this would be a good example because I noticed that the Times article contains both valid criticisms of the program and a good defense of why it makes sense to test it.

I reworked the article by rearranging the presentation of those sections. Mostly I just shifted paragraphs, but in a few cases I rearranged some clauses as well. I changed the headline, but otherwise I didn't change a single word, other than clarifying some names when they were introduced in a different order than in the original. And by leading with the rationale for the policy instead of with the emotional appeal against it, I think the article gives a much different impression. Let me know what you think:

City Department Innovates to Test Policy Solutions

By CARA BUCKLEY with some unauthorized edits by BRETT KELLER

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

New York City is among a number of governments, philanthropies and research groups turning to so-called randomized controlled trials to evaluate social welfare programs.

The federal Department of Housing and Urban Development recently started an 18-month study in 10 cities and counties to track up to 3,000 families who land in homeless shelters. Families will be randomly assigned to programs that put them in homes, give them housing subsidies or allow them to stay in shelters. The goal, a HUD spokesman, Brian Sullivan, said, is to find out which approach most effectively ushered people into permanent homes.

The New York study involves monitoring 400 households that sought Homebase help between June and August. Two hundred were given the program’s services, and 200 were not. Those denied help by Homebase were given the names of other agencies — among them H.R.A. Job Centers, Housing Court Answers and Eviction Intervention Services — from which they could seek assistance.

The city’s Department of Homeless Services said the study was necessary to determine whether the $23 million program, called Homebase, helped the people for whom it was intended. Homebase, begun in 2004, offers job training, counseling services and emergency money to help people stay in their homes.

The department, added commissioner Seth Diamond, had to cut $20 million from its budget in November, and federal stimulus money for Homebase will end in July 2012.

Such trials, while not new, are becoming especially popular in developing countries. In India, for example, researchers using a controlled trial found that installing cameras in classrooms reduced teacher absenteeism at rural schools. Children given deworming treatment in Kenya ended up having better attendance at school and growing taller.

“It’s a very effective way to find out what works and what doesn’t,” said Esther Duflo, an economist at the Massachusetts Institute of Technology who has advanced the testing of social programs in the third world. “Everybody, every country, has a limited budget and wants to find out what programs are effective.”

The department is paying $577,000 for the study, which is being administered by the City University of New York along with the research firm Abt Associates, based in Cambridge, Mass. The firm’s institutional review board concluded that the study was ethical for several reasons, said Mary Maguire, a spokeswoman for Abt: because it was not an entitlement, meaning it was not available to everyone; because it could not serve all of the people who applied for it; and because the control group had access to other services.

The firm also believed, she said, that such tests offered the “most compelling evidence” about how well a program worked.

Dennis P. Culhane, a professor of social welfare policy at the University of Pennsylvania, said the New York test was particularly valuable because there was widespread doubt about whether eviction-prevention programs really worked.

Professor Culhane, who is working as a consultant on both the New York and HUD studies, added that people were routinely denied Homebase help anyway, and that the study was merely reorganizing who ended up in that pool. According to the city, 5,500 households receive full Homebase help each year, and an additional 1,500 are denied case management and rental assistance because money runs out.

But some public officials and legal aid groups have denounced the study as unethical and cruel, and have called on the city to stop the study and to grant help to all the test subjects who had been denied assistance.

“They should immediately stop this experiment,” said the Manhattan borough president, Scott M. Stringer. “The city shouldn’t be making guinea pigs out of its most vulnerable.”

But, as controversial as the experiment has become, Mr. Diamond said that just because 90 percent of the families helped by Homebase stayed out of shelters did not mean it was Homebase that kept families in their homes. People who sought out Homebase might be resourceful to begin with, he said, and adept at patching together various means of housing help.

Advocates for the homeless said they were puzzled about why the trial was necessary, since the city proclaimed the Homebase program as “highly successful” in the September 2010 Mayor’s Management Report, saying that over 90 percent of families that received help from Homebase did not end up in homeless shelters. One critic of the trial, Councilwoman Annabel Palma, is holding a General Welfare Committee hearing about the program on Thursday.

“I don’t think homeless people in our time, or in any time, should be treated like lab rats,” Ms. Palma said.

“This is about putting emotions aside,” [Mr. Diamond] said. “When you’re making decisions about millions of dollars and thousands of people’s lives, you have to do this on data, and that is what this is about.”

Still, legal aid lawyers in New York said that apart from their opposition to the study’s ethics, its timing was troubling because nowadays, there were fewer resources to go around.

Ian Davie, a lawyer with Legal Services NYC in the Bronx, said Homebase was often a family’s last resort before eviction. One of his clients, Angie Almodovar, 27, a single mother who is pregnant with her third child, ended up in the study group denied Homebase assistance. “I wanted to cry, honestly speaking,” Ms. Almodovar said. “Homebase at the time was my only hope.”

Ms. Almodovar said she was told when she sought help from Homebase that in order to apply, she had to enter a lottery that could result in her being denied assistance. She said she signed a letter indicating she understood. Five minutes after a caseworker typed her information into a computer, she learned she would not receive assistance from the program.

With Mr. Davie’s help, she cobbled together money from the Coalition for the Homeless and a public-assistance grant to stay in her apartment. But Mr. Davie wondered what would become of those less able to navigate the system. “She was the person who didn’t fall through the cracks,” Mr. Davie said of Ms. Almodovar. “It’s the people who don’t have assistance that are the ones we really worry about.”

Professor Culhane said, “There’s no doubt you can find poor people in need, but there’s no evidence that people who get this program’s help would end up homeless without it.”

Randomizing in the USA

The NYTimes posted this article about a randomized trial in New York City:

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

Dean Karlan at Innovations for Policy Action responds:

It always amazes me when people think resources are unlimited. Why is "scarce resource" such a hard concept to understand?

I think two of the most important points here are that a) there weren't enough resources for everyone to get the services anyway, so they're just changing the decision-making process for who gets the service from first-come-first-served (presumably) to randomized, and b) studies like this can be ethical when there is reasonable doubt about whether a program actually helps or not. If it were firmly established that the program is beneficial, then it's unethical to test it, which is why you can't keep testing a proven drug against placebo.
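For what it's worth, the mechanics of this kind of lottery are simple. Here's a minimal sketch in Python of splitting 400 applicants into 200 who get services and 200 who don't -- the names and numbers are just illustrative, and a real trial would use an audited (and often stratified) assignment procedure rather than a bare shuffle:

```python
import random

def randomize_applicants(applicants, n_treatment, seed=0):
    """Randomly split applicants into (treatment, control) groups.

    Illustrative only: the seed makes the split reproducible,
    which a real study would handle with a documented protocol.
    """
    rng = random.Random(seed)
    shuffled = list(applicants)
    rng.shuffle(shuffled)
    return shuffled[:n_treatment], shuffled[n_treatment:]

# Hypothetical applicant pool: 400 households, 200 offered services,
# 200 tracked as controls (as in the NYC study's design).
households = [f"household_{i}" for i in range(400)]
treatment, control = randomize_applicants(households, 200)
```

The point is that every applicant has the same chance of getting the service, which is arguably no less fair than a first-come-first-served queue when there isn't enough to go around.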

However, this is good food for thought for those who are interested in doing randomized trials of development initiatives in other countries. It shows how individuals here in the US react to being treated as "test subjects" -- and why should we expect people in other countries to feel differently? That said, a lot of randomized trials don't get this sort of pushback. I'm not familiar with this program beyond what I read in this article, but it's possible that more could have been done to communicate the purpose of the trial to the community, activists, and the media.

There are some interesting questions raised in the IPA blog comments as well.

Results-Based Aid

Nancy Birdsall writes "On Not Being Cavalier About Results" about a recent critique of the UK's DFID (Department for International Development):

The fear about an insistence on results arises from confusion about what “results” are. A legitimate typical concern is that aid bureaucracies pressed for “results” will resort, more than already is the case, to projects that provide inputs that seem to add up to easily measured “wins” (bednets delivered, books distributed, paramedics trained, vehicles or computers purchased, roads built) while neglecting “system” issues and “institution building”. But bednets and books and vehicles and roads are not results in any meaningful sense, and the connection between these inputs and real outcomes (healthier babies, better educated children, higher farmer income) goes through systems and institutions and is often lost....

Let us define results as measured gains in what children have learned by the end of primary school, or measured reductions in infant mortality or deforestation, or measured increases in the hours of electricity available, or annual increases in revenue from taxes paid by rich households in poor countries – or a host of other indicators that ultimately add up to the transformation of societies and the end of their dependence on outside aid. For a country to get results might not require more money but a reconfiguration of local politics, the cleaning up of bureaucratic red tape, local leadership in setting priorities or simply more exposure to the force of local public opinion. Let aid be more closely tied to well-defined results that recipient countries are aiming for; let donors and recipients start measuring and reporting those results to their own citizens; let there be continuous evaluation and learning about the mechanics of how recipient countries and societies get those results (their institutional shifts, their system reforms, their shifting politics and priorities), built on the transparency that Secretary Mitchell is often emphasizing.

(Emphasis added)

I'd also like to note that Birdsall is the founding director of the Center for Global Development, a nonprofit in DC that does a lot of work related to evidence-based aid. I relied fairly heavily on their report on "Closing the Evaluation Gap" on a recent dual degree app. The full report is worth the read.

Aid Workers vs. Journalists?

UPDATE: I mistakenly assumed the commenter name "ansel" was a pseudonym, so my comments on anonymity in the final paragraph may not be as applicable. Updates in brackets. Interesting debate going on at Tales From the Hood: First, J (the anonymous aid worker blogger behind Tales) wrote "Dear Journalists: What to look for in aid programs," which includes suggestions like "Understand that you cannot evaluate a project, program or organization during a one-day visit....Ask about learning.... Ask about outcomes....Use logic... Understand ambiguity...[and] Understand that things are almost never the way they seem." The summary sounds pretty basic, but the details aren't necessarily as simple.

Which prompted a lengthy comment from someone named ansel [Ansel Herz of MediaHacker]: "Dear aid groups, Do not invite us on one-day tours of your programs and expect them to be useful to us in any way.... We need to be able to come out to where you’re working unannounced and talk with you – your people in the field....Do not send out press releases over and over simply listing off the sheer numbers of stuff you’ve distributed or have stocked in warehouses as if it indicates how much you’ve accomplished. Quality of life is not measured by those (nearly impossible to verify independently) numbers." Etc.

J responded at length with a follow-up post (that probably stands alone if you're only going to read one link). There are several points of agreement -- on NGOs needing to be more open, for example -- but the main disagreement is over "supply and demand" of lousy, feel-good information. Do NGOs give it to journalists because the journalists demand it, or do journalists take it from NGOs because it's all they can get? (I know, a bit simplified -- so check out the links.)

Of course, some of the debate was prompted by the tone J and [Ansel] took, which is unfortunate. While I understand the necessity of anonymous blogging, I think this debate is one where the tones would have been slightly different -- and more productive -- had both writers been commenting under their own names. Still, seeing the [partially] anonymous back-and-forth gives you an idea of the animus that can exist between the different actors.

Afraid

Here are two semi-related articles: one by William Easterly about how aid to Ethiopia is propping up an oppressive regime, and another by Rory Carroll on the pernicious but well-intentioned effects of aid tourism in Haiti. Basically, it's really hard to do things right, because international aid and development are not simple. Good intentions are not enough. You can mess up by funneling all your money through a central regime, or by having an uncoordinated, paternalistic mess.

A couple confessions. First, I'm a former "aid tourist." In high school and college I went on short-term trips to Mexico, Guyana, and Zambia (and slightly different experiences elsewhere). My church youth group went to Torreon, Mexico and helped build a church (problematize that). In Guyana and Zambia I was part of medical groups that ostensibly aimed to improve the health of the local people; in hindsight neither project could have possibly had any lasting effects on health, and likely fostered dependency.

Second, I'm an aspiring public health / development professional, and I'm afraid. I don't want to be the short-term, uncoordinated, reinventing-the-wheel, well-intentioned aid vacationer -- and I think given my education (and the experience I hope to continually gain) I'm more likely to avoid at least some of those shortcomings. But I'm scared that my work might prop up nasty regimes, or satiate a bloated aid industry that justifies its projects to sustain itself, or give me the false impression of doing good while actually doing harm.

I think the first step to doing better is being afraid of these things, but I'm still learning where to go from here.