The topic of comparative effectiveness research (CER) took off with a bang as the Obama Administration boosted funding for the activity in its January stimulus package by $1.1 billion, as reported in last month’s issue. Commentary from Washington officials and academics since then has focused on whether, and to what extent, evaluating the effectiveness of medical treatments will lead to explicit or implicit rationing of healthcare dollars, and on whether CER will be a factor in the intensifying debate over healthcare reform and waste reduction in healthcare spending.
More recently, the folks at the Deloitte Center for Health Solutions (Washington, DC) have issued a report, “Comparative Effectiveness: Perspectives for Consideration,” that looks at how CER and related activities are currently being managed around the world. The best-known effort is, arguably, the UK’s National Institute for Health and Clinical Excellence (NICE), but similar bodies are in place in Canada, Germany and Australia, among other countries, and numerous studies have been performed. Nor should it be overlooked that US CER work didn’t start in 2009; the Agency for Healthcare Research and Quality (AHRQ) has been developing this area for years (although its budget has now been significantly expanded).
Without going into all the details of the Deloitte report, a couple of prominent findings stand out. First, there are many ways to conduct, evaluate and then react to CER; it is not a straightforward process of doing a study and taking action on the results, and different organizations draw different conclusions from very similar facts. In that respect, CER looks a lot like clinical research itself, with findings that are merely hinted at and that call for additional study before any conclusions can be drawn. Second, as Deloitte puts it in one of its concluding points, “Does it matter?”
“Does it matter?” implies that all this could be merely an academic exercise, which is not Deloitte’s point. As it says, “Can innovators in the US health care market perform better through transparency or is increased pressure from a comparative effectiveness program the optimal way to assure that better health care is achieved and inappropriate variation reduced?” Deloitte highlights the difficulty of extracting meaningful guidance from CER; it also notes that rigorous CER evaluations could stifle valuable new investigations or variations from treatment dogma.
Deloitte sums up much of this ambiguity with the theme “tools, not rules.” CER should not be expected to produce ready-made, specific guidance for treating particular disease states or types of patients. But broader knowledge of comparative effectiveness outcomes can guide both industry and policymakers, and ultimately make for a more cost-effective healthcare delivery system.