I recently finished reading Patient Care Under Uncertainty by econometrics giant Chuck Manski. It seems he’s turned his attention towards the surprisingly vexing issue of knowing if medical care actually works. This is a favorite topic of mine to discuss, because a shocking amount of medical care has little to no robust evidence that it works. Instead, it is based on unrealistic physiological models, observational studies with no credible identification strategy, and RCTs on completely non-representative samples of patients (at least, according to Vinay Prasad).
In his book, Manski offers numerous approaches to analyzing medical data that are conceptually more precise and sophisticated than the slightly hack-y statistics that plague modern medical research. But most of these approaches are dramatically more complex and difficult to implement, and I couldn’t help but wonder if all the work would be pointless. I strongly suspect that all of the conceptual nuance would be utterly lost in the translation to clinical practice. The new analyses would be more conceptually satisfying to econometricians and biostatisticians, but they would make more or less no impact on the actual practice of medicine, which is arguably the ultimate goal of all medical research.
To give a specific example: How should we combine data from multiple clinical trials to assess if medical therapies work and determine how well they work?
- Don’t just take the average of the point estimates from different clinical trials (current standard). This hides the imprecision in the estimates and suppresses real uncertainty.
- Reporting the range of point estimates is better, but still unscientific: it captures the variation across the studies that happen to have been published, but not the variation that would arise under all plausible models, statistical specifications, data imputations, etc.
- A better approach (a toy sketch in code follows this list):
- Get the data for each clinical trial (I should note that this is no small feat, even for established researchers)
- Find the partial identification region for each study, using all plausible models and imputations of missing data.
- Find the intersection of the regions.
- This intersection is the identified set: the range of treatment-effect values consistent with all of the studies.
- Would this ultimately change the point estimates that much? I suspect it would not.
- It would make meta-analyses substantially more difficult. It would also raise their quality. But is the quality-quantity tradeoff worth it? It is not clear to me. I do not have a strong prior as to whether more but weaker evidence is better than less but stronger evidence, especially since many decisions in medicine are made on the basis of no evidence at all.
- People well trained in statistics would certainly appreciate the increased uncertainty communicated by reporting an identified set rather than a point estimate with confidence intervals.
- However, I have trouble picturing any of my medical school classmates (except for the tiny handful also pursuing health economics PhDs) caring about the difference between the two.
- Most likely, physicians and patients would mentally collapse the result to the midpoint of the set, just to get a point estimate. To be honest, I would too, if I were trying to communicate to other physicians or to patients how effective a treatment was.
- And really, what’s the cost-benefit tradeoff of interpreting the results in an overly simplified manner? I suspect that the benefits of slightly oversimplifying outweigh the costs. Medicine is already overwhelmingly cognitively complex, and cognitive mistakes in medicine lead to innumerable injuries and even deaths. I would rather direct my brain space towards avoiding those critical mistakes than thinking about the difference in uncertainty conveyed by point vs. set identification.
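To make the intersection idea concrete, here is a minimal Python sketch. It is not Manski’s actual procedure: it only bounds the success rate in a single arm of each trial by imputing missing outcomes in the worst-case directions, and the trial counts and the `worst_case_bounds` and `intersect` helpers are invented for illustration. A real analysis would bound the treatment effect itself and would vary models and specifications, not just imputations.

```python
# Toy illustration of the "intersect the bounds" idea; all numbers are invented.
# For each (hypothetical) trial we compute worst-case bounds on the success
# rate of the treatment arm when some outcomes are missing, then intersect
# the per-trial intervals to get a combined identified set.

def worst_case_bounds(successes, failures, missing):
    """Bounds on the true success rate for one arm of one trial.

    Lower bound: assume every missing outcome was a failure.
    Upper bound: assume every missing outcome was a success.
    """
    n = successes + failures + missing
    return successes / n, (successes + missing) / n

def intersect(intervals):
    """Intersection of closed intervals; None if they don't overlap."""
    lo = max(iv[0] for iv in intervals)
    hi = min(iv[1] for iv in intervals)
    return (lo, hi) if lo <= hi else None

# Hypothetical trials: (observed successes, observed failures, missing outcomes)
trials = [(40, 50, 10), (35, 55, 20), (48, 44, 8)]

per_trial = [worst_case_bounds(*t) for t in trials]
combined = intersect(per_trial)

for t, (lo, hi) in zip(trials, per_trial):
    print(f"trial {t}: success rate in [{lo:.2f}, {hi:.2f}]")

if combined is not None:
    print(f"combined identified set: [{combined[0]:.2f}, {combined[1]:.2f}]")
else:
    print("empty intersection: the trials are mutually inconsistent")
```

One side effect of this framing: the intersection can come out empty, which (if I understand Manski correctly) is itself informative, since it suggests the studies are not actually estimating the same quantity under the assumptions being made.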
One final thought: I am fully in support of using better methods whenever possible. I am just pessimistic about the potential of those statistical nuances to filter down into clinical practice and positively impact real decisions.