Monday, September 17, 2007

Do We Really Know What Makes Us Healthy?
Once upon a time, women took estrogen only to relieve the hot flashes, sweating, vaginal dryness and the other discomforting symptoms of menopause. In the late 1960s, thanks in part to the efforts of Robert Wilson, a Brooklyn gynecologist, and his 1966 best seller, “Feminine Forever,” this began to change, and estrogen therapy evolved into a long-term remedy for the chronic ills of aging. Menopause, Wilson argued, was not a natural age-related condition; it was an illness, akin to diabetes or kidney failure, and one that could be treated by taking estrogen to replace the hormones that a woman’s ovaries secreted in ever diminishing amounts. With this argument estrogen evolved into hormone-replacement therapy, or H.R.T., as it came to be called, and became one of the most popular prescription drug treatments in America.

By the mid-1990s, the American Heart Association, the American College of Physicians and the American College of Obstetricians and Gynecologists had all concluded that the beneficial effects of H.R.T. were sufficiently well established that it could be recommended to older women as a means of warding off heart disease and osteoporosis. By 2001, 15 million women were filling H.R.T. prescriptions annually; perhaps 5 million were older women, taking the drug solely with the expectation that it would allow them to lead a longer and healthier life. A year later, the tide would turn. In the summer of 2002, estrogen therapy was exposed as a hazard to health rather than a benefit, and its story became what Jerry Avorn, a Harvard epidemiologist, has called the “estrogen debacle” and a “case study waiting to be written” on the elusive search for truth in medicine.

Many explanations have been offered to make sense of the here-today-gone-tomorrow nature of medical wisdom — what we are advised with confidence one year is reversed the next — but the simplest one is that it is the natural rhythm of science. An observation leads to a hypothesis. The hypothesis (last year’s advice) is tested, and it fails this year’s test, which is always the most likely outcome in any scientific endeavor. There are, after all, an infinite number of wrong hypotheses for every right one, and so the odds are always against any particular hypothesis being true, no matter how obvious or vitally important it might seem.

I post this in part because it's a good article that explains in good layman's terms many of the problems with epi research, and because it goes into why most research is probably wrong (though see a counter-argument here). Gold standards are tough to come by, even randomized double-blind placebo-controlled clinical trials. Definitely read the whole thing closely, even when it gets a bit techy.

This ought to provide a corrective to much thinking in archaeology, when even a "gold standard" research methodology is called into broad question; archaeologists usually have more problems with sample size and representativeness, and far less control over measurement adequacy.

I like to think we're somewhat more immune to "consensus" thinking, if only because finding consensus on nearly anything is virtually impossible given the rebelliousness of anthropologists in general. Even when "Clovis First" was something one might have considered "consensus," by far the majority of archaeologists would readily concede that it was only so because evidence of earlier habitations wasn't there (of course, there were rebels on both sides, some arguing that the standards of evidence were too high, others that they were too low). But still, I would argue that what consensus there was had more to do with the absence of data than its presence. The same strictures would apply to Overkill, though outside of archaeology proper it's probably "consensus" that humans killed off the megafauna.

Some of this isn't strictly applicable to the archaeological question, obviously; you wouldn't need a large sample of pre-Clovis sites to demonstrate pre-Clovis occupation, just one that was securely dated. Conversely, a hundred poorly dated sites don't add up to one good one. But the correlation <> causation warning should probably be given far more credence than it usually is.

Eh, that's going a bit far afield. It's a good article.