Friday, August 22, 2014

qotd: CMMI blows a billion dollars on a flawed study


Center for Medicare and Medicaid Innovation (CMMI)
Submitted 7/10/2014
Project Evaluation Activity in Support of Partnership for Patients: Task
2 Evaluation Progress Report

The Partnership for Patients (PfP) campaign was launched in April 2011
with the ambitious goals of reducing preventable hospital-acquired
conditions (HACs) by 40 percent and 30-day hospital readmissions by 20
percent. To reduce harm at this level of magnitude, the campaign
implemented a strategy to align all health care stakeholders, including
federal and other public and private health care payors, providers, and
patients, to focus on this issue concurrently. By influencing everyone
to move in the same direction at the same time, the program strove to
overcome the inherently limited reach of any single initiative operating
in a complex environment. The three major components of the campaign,
conceptualized as "engines," are the Centers for Medicare & Medicaid
Innovation (CMMI) investment engine, the federal partner alignment
engine, and the outside partner engine. The program is national in
scope, due to its level of implementation. For example, over 70 percent
of general acute care hospitals in the United States (U.S.),
representing over 80 percent of admissions, worked with PfP-funded
Hospital Engagement Networks (HENs) during 2012-2013.

Findings

The PfP campaign focuses on 11 areas of patient harm. To date, the
evaluation has found clear evidence for decreased rates of harms in five
of the eleven areas, meaning the decreases are statistically
significant, and/or meet statistical process control criteria for a
special cause decrease, and/or (in cases where only aggregated data are
available) are large in magnitude. These areas include obstetrical early
elective deliveries (OB-EED), readmissions, adverse drug events (ADE),
ventilator-associated pneumonia (VAP), and central line-associated
bloodstream infection (CLABSI). In the other six areas, to date, the
evaluation has found mixed evidence, meaning some datasets show
decreases, while others show no change, or even worsening, including
venous thromboembolism (VTE), catheter-associated urinary tract
infection (CAUTI), other OB adverse events (OB-Other), pressure ulcers,
surgical site infections (SSI), and falls.
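
For readers unfamiliar with the statistical process control (SPC)
criteria mentioned above, the sketch below illustrates one common run
rule for a "special cause" decrease: eight or more consecutive points
falling below the baseline centerline of a control chart. All rates are
hypothetical and are not drawn from the evaluation's data.

# Illustrative only: hypothetical monthly harm rates per 1,000 discharges.
baseline_rates = [4.1, 3.8, 4.3, 4.0, 3.9, 4.2, 4.0, 4.1]   # 2010 baseline
followup_rates = [3.6, 3.4, 3.5, 3.2, 3.3, 3.1, 3.0, 2.9]   # campaign period

# The centerline of the control chart is the baseline mean.
centerline = sum(baseline_rates) / len(baseline_rates)

# Run rule: 8 or more consecutive follow-up points below the centerline
# is a standard signal of a special-cause downward shift.
longest_run = run = 0
for rate in followup_rates:
    run = run + 1 if rate < centerline else 0
    longest_run = max(longest_run, run)

special_cause_decrease = longest_run >= 8
print(f"centerline={centerline:.2f}, longest run below={longest_run}, "
      f"special-cause decrease: {special_cause_decrease}")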

The cost estimates available to date suggest cumulative savings of
between $3.1 billion and $4 billion as a result of the decreases in harms
since the baseline of 2010. Additionally, AHRQ has estimated 15,500 deaths
averted since 2010, based on mortality rate estimates associated with
targeted harms. Tables 1 and 2 synthesize the evidence available to date
for improvement in the rate of adverse events in each of the 11 areas,
and Table 3 provides cost reduction estimates from the two available
sources of estimates to date. Since hospital payment policies and other
U.S. Department of Health & Human Services (HHS) programs that played an
important role as part of the PfP campaign were in place and making
changes over time, it is not possible at this time for the evaluation to
identify the portion of these harm reductions and savings attributable
to the PfP campaign's direct work with hospitals versus alignment of
forces for harm reduction versus other harm reduction work that would
have continued with or without PfP.

http://innovation.cms.gov/Files/reports/PFPEvalProgRpt.pdf

****

CMS.gov

About the CMS Innovation Center

The Innovation Center was established by section 1115A of the Social
Security Act (as added by section 3021 of the Affordable Care Act).
Congress created the Innovation Center for the purpose of testing
"innovative payment and service delivery models to reduce program
expenditures …while preserving or enhancing the quality of care" for
those individuals who receive Medicare, Medicaid, or Children's Health
Insurance Program (CHIP) benefits.

http://innovation.cms.gov/about/index.html

****

The New England Journal of Medicine
August 21, 2014
Did Hospital Engagement Networks Actually Improve Care?
By Peter Pronovost, M.D., Ph.D., and Ashish K. Jha, M.D., M.P.H.

Everyone with a role in health care wants to improve the quality and
safety of our delivery system. Recently, the Centers for Medicare and
Medicaid Services (CMS) released results of its Partnership for Patients
Program (PPP) and celebrated large improvements in patient outcomes. But
the PPP's weak study design and methods, combined with a lack of
transparency and rigor in evaluation, make it difficult to determine
whether the program improved care. Such deficiencies result in a failure
to learn from improvement efforts and stifle progress toward a safer,
more effective health care system.

CMS launched the PPP in December 2011 as a collaborative comprising 26
"hospital engagement networks" (HENs) representing more than 3700
hospitals, in an effort to reduce the rates of 10 types of harms and
readmissions. The HENs work to identify and disseminate effective
quality-improvement and patient-safety initiatives by developing
learning collaboratives for their member facilities, and they direct
training programs to teach hospitals how to improve patient safety. In a
February 2013 webcast, CMS announced that the rates of early elective
deliveries had dropped 48% among 681 hospitals in 20 HENs and that the
national rate of all-cause readmissions had decreased from 19% to 17.8%,
though it is unclear which HENs were included for each measure and what
time periods were the pre- and post-intervention periods.

These numbers appear impressive, but given the publicly available data
and the approach CMS used, it's nearly impossible to tell whether the
PPP actually led to better care. Three problems with the agency's
evaluation and reporting of results raise concerns about the validity of
its inferences: a weak design, a lack of valid metrics, and a lack of
external peer review for its evaluation. Though the evaluation of many
other CMS programs also lacks this basic level of rigor, given the large
public investment in the PPP, estimated at $1 billion, and the strong
public inferences about its impact, the lack of valid information about
its effects is particularly troubling.

The design of a quality-improvement program influences our ability to
make reasonable inferences about its benefits to patients. Although
individual HENs may have used more rigorous methods, the overall PPP
evaluation had three important weaknesses: it used a pre–post design
with only single points in the pre and post periods, did not have
concurrent controls, and did not specify the pre and post periods a
priori. Such an approach is highly subject to bias.

There are alternatives available, including a randomized or even a
cluster-randomized trial. If such trials were not feasible, CMS could
have used other robust design approaches, such as an interrupted
time-series study with concurrent controls. Rather than having a single
pre time period and a single post time period, this design entails
repeated measurements of the safety indicators before and after the
intervention in both HEN and non-HEN hospitals. Such an approach would
have provided more valid inferences about the effects of the program,
with few additional costs.
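
As a concrete illustration of the design the authors recommend, the
sketch below fits a simple segmented regression with concurrent
controls to fabricated quarterly data: the hen:post interaction term
estimates the program effect over and above the secular trend shared
with non-HEN hospitals. This is a minimal sketch using pandas and
statsmodels, not the evaluation's actual model or data.

import pandas as pd
import statsmodels.formula.api as smf

# Fabricated quarterly harm rates: 8 quarters pre, 8 quarters post.
rows = []
for hen in (0, 1):                      # 1 = HEN hospitals, 0 = controls
    for t in range(16):
        post = int(t >= 8)              # intervention begins at quarter 8
        # Both groups drift down over time; HEN hospitals drop an extra
        # 0.4 per 1,000 after the intervention (the effect to recover).
        rate = 5.0 - 0.05 * t - (0.4 if hen and post else 0.0)
        rows.append({"rate": rate, "time": t, "post": post, "hen": hen})

df = pd.DataFrame(rows)

# The hen:post coefficient is the difference-in-differences estimate of
# the intervention effect, net of the shared pre-existing trend.
model = smf.ols("rate ~ time + post + hen + hen:post", data=df).fit()
print(round(model.params["hen:post"], 2))   # approximately -0.4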

Beyond using a poor design, CMS did not use standardized and validated
performance measures across all participating hospitals — further
hampering inferences about the program's effects. To support engagement,
CMS allowed each HEN to define its own performance measures, with little
focus on data quality control.

CMS also required HENs and participating hospitals to submit a large
number of process measures of unknown validity. It is essential to use
validated measures — ideally those endorsed by the National Quality
Forum — unless there is a compelling reason not to. In instances where
validated measures are unavailable, instead of using poor-quality
metrics, CMS can have an agency such as the Agency for Healthcare
Research and Quality (AHRQ) or the CDC develop measures rapidly.

Finally, CMS made — and presented publicly — inferences about its
program's benefits without having subjected its work to independent
evaluation or peer review. Peer review, though imperfect, is a powerful
quality control.

The PPP involved an investment of nearly $1 billion to improve care —
three times the annual budget of the AHRQ, the lead federal funding
agency for implementation science, which often lacks resources for
promising projects. With such a sizable investment, CMS could have
supported a better evaluation. It could have randomized HENs or
hospitals to receive interventions earlier or later; used standardized,
validated measures across the HENs; built in basic data quality
controls; and independently collected qualitative information alongside
quantitative data to learn not just whether the interventions worked but
also how and why they did, thereby advancing our understanding of the
mechanisms and context of improvement science. These changes would have
allowed the country to learn so much more.

The lack of a careful evaluation is symptomatic of a broader problem:
some members of the quality-improvement community eschew even modestly
rigorous methods, believing that one can simply "know" if an
intervention worked. Though maintaining hope and optimism among
clinicians is important, when untested interventions are implemented
widely, they often fail to improve care. The confidence we can have in
an intervention's efficacy is directly related to the rigor with which
it is designed, implemented, and evaluated. Given the strong desire to
improve care and the conflicts of interest we all face in evaluating our
own work, subjecting all evaluations to external examination is critical.

The field of improvement science is still in its infancy. Given the
magnitude of the quality and cost problems in health care and the amount
of money invested in mitigating these problems, the public, providers,
and policymakers need to have confidence that money used to improve care
is being well spent. It's true that improvement science requires mixed
methods and is difficult, but all good science is difficult. Failing to
attend closely to issues of design, methods, and metrics leaves us with
little confidence in an intervention. For the PPP, which required
thousands of hours of clinicians' time and large sums of money, that
lack of confidence is particularly unfortunate. More important, the
failure to generate valid, reliable information hampers our ability to
improve future interventions, because we are no closer to understanding
how to improve care than we were before the PPP. And that is the biggest
cost of all.

http://www.nejm.org/doi/full/10.1056/NEJMp1405800?query=TOC#t=article

****


Comment by Don McCanne

Another creation of the Affordable Care Act (ACA) is the Center for
Medicare and Medicaid Innovation (CMMI) - an entity established to test
innovations in payment and service delivery models designed to reduce
costs and improve quality. How is it doing?

After spending almost a billion dollars on a study designed to reduce
hospital-acquired conditions - a budget three times the total annual
budget of AHRQ (Agency for Healthcare Research and Quality) - we have
almost nothing to show for that effort and expense. As the CMMI report
states, "Since hospital payment policies and other U.S. Department of
Health & Human Services (HHS) programs that played an important role as
part of the PfP campaign were in place and making changes over time, it
is not possible at this time for the evaluation to identify the portion
of these harm reductions and savings attributable to the PfP campaign's
direct work with hospitals versus alignment of forces for harm reduction
versus other harm reduction work that would have continued with or
without PfP."

In their article on the flaws in this program, Peter Pronovost and
Ashish Jha make an observation that typifies what has been wrong with
the entire reform process centered on ACA. They state, "some members of
the quality-improvement community eschew even modestly rigorous methods,
believing that one can simply 'know' if an intervention worked. Though
maintaining hope and optimism among clinicians is important, when
untested interventions are implemented widely, they often fail to
improve care."

Think of some of the prominent personalities involved in crafting and
implementing ACA and how outspoken they were and continue to be on what
they simply "know" will work - accountable care organizations, bundled
payments, pay for performance, competing exchange plans bringing us
higher quality at lower cost, placing the empowered consumer in charge
through deductibles and other cost sensitivity, and improving payment
policies through the Center for Medicare and Medicaid Innovation.

The tragedy is that much of this was to avoid adopting a program that
every informed person knows really would work - an improved Medicare for
all. It would have been far better to have directed that billion dollars
towards implementing single payer.
