Testing Housing Aid

New York City, Social Science, Statistics — Zac Townsend @ December 12, 2010 5:27 pm

New York City is randomizing who gets a housing aid program called Homebase:

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

The city’s Department of Homeless Services said the study was necessary to determine whether the $23 million program, called Homebase, helped the people for whom it was intended. Homebase, begun in 2004, offers job training, counseling services and emergency money to help people stay in their homes.

But some public officials and legal aid groups have denounced the study as unethical and cruel, and have called on the city to stop the study and to grant help to all the test subjects who had been denied assistance.

“They should immediately stop this experiment,” said the Manhattan borough president, Scott M. Stringer. “The city shouldn’t be making guinea pigs out of its most vulnerable.”

On a listserv I'm on, there has been a lot of ethical hand-wringing about this study, but these people weren't randomly assigned to poverty. They were randomly assigned not to receive a program.

If you agree with Stringer that citizens shouldn't be treated like lab rats, then the conclusion should be that they should receive no treatment at all. We have no idea whether this program is effective or not. We have no idea whether enrolling people in this program might, in the long term, increase the time they spend homeless. We have no idea if the program leads to more crime or less. We have no idea if the program does anything. So if you're not interested in throwing people into some unproven, untested, possibly ill-designed program at politicians' whims, the only option is to stop the intervention altogether.

Alternatively, perhaps we can test the program. We can see if the program is effective. We can learn whether the program meets its goals. Not necessarily on a cost-benefit basis, but at all. By any standard. To do that we turn to the randomized experiment.

Now, what is experimentation? In the ideal multiverse we could take the exact same people and give them the intervention in one case, and not give them the intervention in the other. Then we could observe the difference and know that it was due to the Homebase program.

Absent that, we have only one tool at our disposal that gets at causal inference with almost no caveats, and that is the well-designed randomized experiment. (I hedge because the randomized experiment is the gold standard, and there are SO many statistical and design tools for turning quasi-experiments and correlational studies into something approaching the ideal that NYC is implementing.)
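
To make that concrete, here is a toy simulation (Python, with entirely made-up numbers for the outcomes and the effect) of the multiverse ideal versus what a randomized experiment actually recovers:

    import random

    random.seed(0)

    # Toy simulation: each family has two potential outcomes,
    # months homeless if treated and months homeless if not.
    # All numbers here are invented purely for illustration.
    n = 10000
    families = []
    for _ in range(n):
        baseline = random.gauss(6, 2)      # months homeless without help
        effect = random.gauss(-1.5, 0.5)   # hypothetical program effect
        families.append((baseline + effect, baseline))  # (treated, untreated)

    # The "multiverse" answer: same people, both outcomes observed.
    true_effect = sum(t - u for t, u in families) / n

    # The real-world answer: randomize, observe one outcome per family.
    random.shuffle(families)
    treated = families[: n // 2]
    control = families[n // 2 :]
    estimate = (sum(t for t, _ in treated) / len(treated)
                - sum(u for _, u in control) / len(control))

    print(f"true average effect:  {true_effect:.2f} months")
    print(f"randomized estimate:  {estimate:.2f} months")

The two printed numbers come out nearly identical, which is the whole trick: randomization lets the control group stand in for the treated group's unobservable counterfactual.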

To do this you find two groups as alike as possible and you compare them. You give one of them the intervention, and you don't give it to the other group. You can't just give the program to as many people as apply and then pick some other group of people as a comparison. Applying is, itself, a factor you want to be equal across the groups. That's why in randomized experiments you tend to recruit twice as many people as you can enroll, randomly enroll half of them, and then collect data on all of them.

A large number of families (1,500) are denied due to lack of funding. Another way to think of the study is that there are 1,700 people rejected, and we found money to serve 200 of them. What is the best way to pick those people? The answer, to me, is a lottery. So 200 of those 1,700 families are assigned the intervention, and we randomly study another 200 of them. These two groups, all people who applied to the program, we can assume are basically similar (have something called "balance") across all observable and unobservable characteristics (we can measure the first and assume the second).
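
A minimal sketch of that lottery, using the 1,700 rejected applicants and the 200 slots from above (the family IDs are hypothetical placeholders):

    import random

    random.seed(42)

    applicants = [f"family-{i:04d}" for i in range(1700)]  # the rejected pool

    random.shuffle(applicants)        # the lottery
    enrolled = applicants[:200]       # get the intervention
    comparison = applicants[200:400]  # studied, but no intervention

    # Everyone in both groups applied; only chance separates them.
    print(len(enrolled), len(comparison))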

Now I'm masking a bunch of statistics that show that random assignment leads to balance, on average, but whatever. The point is that we're creating a counterfactual: a group of people who applied to the program but didn't get the intervention, alongside people who applied and did. The selection was done by lottery, not by some other method such as who you're best friends with, or whether your name sounds right, or whatever. Doesn't that seem like a just way to assign spots in a program?
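
And here is the kind of balance check I mean, a sketch assuming you collected a couple of observable covariates on each family (the variable names and values are invented for illustration):

    import random
    import statistics

    random.seed(7)

    # Hypothetical observable covariates for each family in the study.
    def fake_family():
        return {"household_size": random.randint(1, 7),
                "rent_arrears": random.uniform(500, 5000)}

    enrolled = [fake_family() for _ in range(200)]
    comparison = [fake_family() for _ in range(200)]

    # Under random assignment, group means should be close on every
    # covariate; a big gap would suggest the lottery went wrong.
    for var in ("household_size", "rent_arrears"):
        m1 = statistics.mean(f[var] for f in enrolled)
        m2 = statistics.mean(f[var] for f in comparison)
        print(f"{var}: enrolled {m1:.2f} vs comparison {m2:.2f}")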
