Ensuring the quality of comparative effectiveness research

Pharmaceutical Commerce - September/October 2012

As payers seek higher-quality data from real-world treatment settings, the importance of reliable measures becomes critical

Randomized clinical trials will always play an important role in the drug development process, and rightfully so. Yet as physicians’ prescribing choices become increasingly constrained by payers, persuasive data from randomized clinical trials are no longer sufficient to ensure adoption of effective new products. Payers and physicians are coming to realize that data obtained from randomized clinical trials—which will remain key to the regulatory approval process—cannot be relied upon to provide insight into a product’s real-world value. For dossier submissions, biopharmaceutical companies must now include other types of data to assist payer organizations in their formulary decision-making. A new drug’s success on the most important measures—improvement in the clinical outcomes of the patients who use it and cost-effectiveness for the organizations that pay for it—is what drives formulary decisions.

Comparative effectiveness research (CER)—an approach in which products are evaluated in a true-to-life setting against the current standard of care—is one such opportunity for the biopharmaceutical industry to demonstrate the value of its products to payer and provider organizations. Defined by the Federal Coordinating Council for Comparative Effectiveness Research, CER is “the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat, and monitor health conditions in ‘real world’ settings.”[1] There may be no explicit evaluation of cost-effectiveness, but CER invariably includes an implicit appraisal of value in areas such as safety, clinical effectiveness, quality of life and tolerability.

To survive amid such increasing evidentiary demands, biopharmaceutical companies must create a timely evidence base that provides a variety of information with broadly accepted levels of scientific rigor appropriate for the purpose. For example, observational data on comparative effectiveness can be derived from existing data such as insurance claims; from the more granular information found in electronic medical records (EMR); from de novo prospective data collection from physicians or patients; or from some combination of these approaches. These approaches vary substantially in the cost and time required for completion, but the key unifying principle is that each can address outcomes that are clinically meaningful to doctors, patients and payers.
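As a rough illustration only (Python, with hypothetical table and column names and simulated data, not an endorsed analysis plan), the sketch below shows how a claims-based record of drug exposure might be linked to EMR-based outcomes to compare crude event rates between a new product and the standard of care:

# Minimal sketch: linking claims-based exposure to EMR-based outcomes.
# Table layout and column names (patient_id, drug, had_event) are hypothetical.
import pandas as pd

# Claims extract: which drug each patient was dispensed.
claims = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "drug": ["new_product", "standard_of_care", "new_product",
             "standard_of_care", "new_product", "standard_of_care"],
})

# EMR extract: a clinically meaningful outcome recorded by the provider.
emr = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "had_event": [0, 1, 0, 1, 1, 0],
})

# Link the two sources on the patient identifier and compare crude event rates.
linked = claims.merge(emr, on="patient_id", how="inner")
print(linked.groupby("drug")["had_event"].mean())

Crude rates such as these are only a starting point; a real comparative analysis would need to adjust for the confounding issues discussed later in this article.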

Lack of standards

Biopharma has traditionally taken a cautious approach to CER. There is naturally a loss of control over a product once it is in the marketplace, and researchers from other organizations (e.g., payers and clinicians) routinely conduct their own analyses of a product’s real-world benefit/risk profile. By providing information on typical patients, as well as complex patients such as the elderly, who may have many comorbidities, data from this type of research should theoretically provide a more holistic, patient-centered view of the product’s performance. Yet a lack of standardized principles for the design, conduct and evaluation of observational research on comparative effectiveness leaves the industry susceptible to reliance on flawed assessments, which could over- or underestimate the true benefits and risks of a new product. Such flawed assessments could be disastrous if the result is an unfavorable formulary opinion, and could also have dramatic consequences for value-based pricing or risk-sharing pricing strategies. Payers naturally look at all types of data to make their assessments, but a lack of formal guidelines for CER and other types of observational research hinders their ability to get an accurate, reliable view of a product’s real-world performance.

There are some best-practice guides that touch on observational studies of comparative effectiveness, yet most are too broad and do not go into the detail required to conduct an evaluation properly. The STROBE Statement [2], for example, provides excellent guidance for the reporting of epidemiologic research, but it is not specific to comparative effectiveness research. Moreover, principles for reporting are quite different from principles for the good conduct of studies, and treating all observational studies as equivalent is simply not sensible. Each can contribute to a broader picture, but observational studies are not all the same, and each piece of evidence—no matter how or where it was generated—must be scrutinized carefully.

GRACE checklist

First published in 2010, the GRACE (Good ReseArch for Comparative Effectiveness; see [3]) Principles are designed to address some of these issues by providing a concrete set of guidelines by which good evidence can be generated. Focused on how to conduct and evaluate any type of observational CER study properly, the GRACE Principles have been converted into a checklist and then validated [4]. This validation process sets the GRACE Checklist apart from other guidance documents, nearly all of which were derived by expert consensus but never tested to see whether they can actually distinguish high-quality, reliable research from weaker work. A validated approach to recognizing observational CER research that is rigorous enough to be used in support of decision-making should assist payers, providers and patients in the overall assessment of a new pharmaceutical product. The GRACE Checklist is still being finalized and will be published at a later date. In the meantime, the items currently being considered for the list are shown in the Figure.

Figure. The GRACE (Good ReseArch for Comparative Effectiveness) Initiative, begun in 2008, is a collaborative effort to develop practices for evaluating the quality of this type of research, particularly when observational studies are involved. Evaluators can use the checklist to measure the quality of a clinical study and then determine whether to include that study in a larger evaluative process.
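To make that screening step concrete, the sketch below shows one way an evaluator might encode a quality checklist and use it to decide whether a study enters a broader evidence review. The items and the inclusion threshold are illustrative placeholders only, not the finalized GRACE Checklist, which had not yet been published:

# Illustrative sketch: hypothetical quality-checklist items and threshold.
CHECKLIST = [
    "Treatment comparison groups are clearly defined",
    "Outcome measures are clinically meaningful",
    "Important confounders were recorded and addressed",
    "Follow-up was long enough to capture the outcomes of interest",
]

def screen_study(answers, min_yes=3):
    """Count affirmative answers and decide whether to include the study."""
    score = sum(1 for item in CHECKLIST if answers.get(item, False))
    return {"score": score, "include": score >= min_yes}

# Example: one reviewer's answers for a single observational CER study.
answers = {
    "Treatment comparison groups are clearly defined": True,
    "Outcome measures are clinically meaningful": True,
    "Important confounders were recorded and addressed": False,
    "Follow-up was long enough to capture the outcomes of interest": True,
}
print(screen_study(answers))  # {'score': 3, 'include': True}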

For the biopharmaceutical industry, such a checklist affords the opportunity to create observational research that will withstand scrutiny by the experts who evaluate evidence on new or competing products. With this knowledge, biopharmaceutical companies can ensure that they will have the required data and appropriate methodologies ready when needed. Indeed, companies must begin thinking about observational research and payer assessment before product approval, and ideally in the early phases of the product’s lifecycle. Initial planning should center on evaluating clinically relevant conditions for a broad swath of patients, because in this era of personalized medicine, the more relevant data that can be produced, the more likely payers are to adopt a new product.

The responsibility is on biopharma, therefore, to start treating CER as an early-phase function and not exclusively a late-phase one. In doing so, companies can design more meaningful studies by engaging stakeholders earlier for input on R&D priorities and methodology. As payers’ influence grows, stakeholder collaboration and the harmonization of CER guidance become critical.

Save for rare exceptions such as expanded access programs, measuring the real-world performance of an investigational drug prior to launch is of course impossible. However, disease epidemiology, etiology, treatment pathways and competitors’ real-world performance can all be captured during early development. In this way, treatment process and outcome measures can be elucidated and refined for future studies. For instance, using a disease registry to evaluate the factors that govern the propensity to prescribe or seek treatment can provide critical insight into designing late-stage CER studies and addressing such significant concerns as confounding by indication.
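A minimal sketch of that idea, assuming a hypothetical registry extract with simulated data, is shown below: a simple model of the propensity to receive the new product, whose dependence on baseline covariates signals the confounding by indication that a later comparative analysis would have to address (for example, by matching or weighting):

# Minimal sketch: modeling propensity to prescribe from simulated registry data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(65, 10, n)                    # baseline age
comorbidity_count = rng.poisson(2, n)          # baseline disease burden
# Simulated prescribing pattern: sicker patients more often receive the new drug.
logit = -6 + 0.05 * age + 0.6 * comorbidity_count
received_new_drug = rng.random(n) < 1 / (1 + np.exp(-logit))

# Estimate each patient's propensity to receive the new product.
X = np.column_stack([age, comorbidity_count])
model = LogisticRegression(max_iter=1000).fit(X, received_new_drug)
propensity = model.predict_proba(X)[:, 1]

# Strong dependence of treatment choice on baseline covariates flags
# confounding by indication for the late-stage CER design to address.
print(model.coef_, propensity[:5])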

The industry must not only collaborate—in the early phases—with payers, policy makers, providers and patients, it must also do so in a coordinated fashion that recognizes the interplay among these stakeholders and the incentives that drive them. In looking at the role of CER within the healthcare system, biopharma must understand the many circular, interlocking relationships at play and proactively design research with the downstream implications for each set of stakeholders in mind. The industry must take a holistic view of the implications of CER. The development of novel therapies and delivery systems is critical for improving global public health; as such, these products must be evaluated in the context of real-world settings and patients who are in need of care, and this evidence must be produced early and soundly in order to facilitate appropriate product use.

About The Author

Dr. Nancy Dreyer is Chief of Scientific Affairs and Senior Vice President at Quintiles Outcome (www.quintiles.com/outcome). She leads a team of researchers who design, conduct, and interpret observational research on comparative effectiveness and safety, as well as quality improvement programs. She leads the GRACE Initiative (see www.graceprinciples.org), and in 2009 joined the Steering Committee for the European Medicines Agency’s PROTECT project under the Innovative Medicines Initiative. Dr. Dreyer has more than 25 years of experience in epidemiologic research. She trained at the University of North Carolina at Chapel Hill, earning a master’s degree and doctorate in epidemiology. Dr. Dreyer is a Fellow and Board Member of the International Society for Pharmacoepidemiology.

References:

[1] Federal Coordinating Council for Comparative Effectiveness Research. Report to the President and the Congress. June 30, 2009. Available at http://www.hhs.gov/recovery/programs/cer/cerannualrpt.pdf

[2] von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007 Oct 16;4(10):e296.

[3] Dreyer NA, Schneeweiss S, McNeil B, Berger ML, Walker A, Ollendorf D, Gliklich RE. Recognizing high-quality observational studies of comparative effectiveness. Am J Manag Care. 2010;16(6):467-471.

[4] Submitted for publication. For more information, visit www.graceprinciples.org.
