The alphabet soup of new and evolving rules that tie healthcare quality improvement to reimbursement for Medicare and some commercial payers is quite an entry in the healthcare glossary. For starters, we’re as familiar now with these acronyms as we are with our own birth dates: MACRA (the Medicare Access and CHIP Reauthorization Act of 2015), which created the QPP (Quality Payment Program), which in turn birthed MIPS (the Merit-based Incentive Payment System).
These colorful acronyms are deeply rooted in data. As a result, understanding the health data life cycle is crucial for quality reporting under MACRA and MIPS, along with myriad registries, core measures, and other programs. There is a lot at stake. The Hospital Readmissions Reduction Program (HRRP), for example, has changed how hospitals manage their patients: for fiscal year 2017, around half of the hospitals in the United States were dinged with readmission penalties, costing them an estimated $528 million.
The key to earning the new financial incentives (with red-ink consequences increasingly in play) is data that is accurate, reliable, timely, and actionable. (We’ve coined a new acronym for it: ARTA.)
The first two stages of the health data life cycle are find the data and capture the data. Stages 3 and 4 follow: Normalize the data and aggregate the data.
Normalize the data.
Data normalization, grounded in what database theorists call dependency theory, is a common and important database management concept introduced decades ago by IBM computer scientist E.F. Codd, a pioneer of database management systems. Normalization ensures the data is more than a number or a note: it becomes meaningful data that can form the basis for action.
One simple example of normalizing data is reconciling formats of the data: for example, reconciling a form that lists patients’ last names first with a chart that lists the patients’ first names first. Are we abstracting data for “Doe, John O.” or “John O. Doe”? Different EHRs and other systems will have different ways of recording that information. Normalization ensures that information is used in the same way.
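The name-format reconciliation described above can be sketched in a few lines. This is a minimal, illustrative example, not a production name parser; real systems must follow each source system's documented field layout and handle suffixes, multi-word surnames, and so on.

```python
def normalize_name(raw: str) -> tuple[str, str, str]:
    """Normalize a patient name to a canonical (last, first, middle) tuple.

    Handles two common formats:
      "Doe, John O."  -- last-name-first (comma-delimited)
      "John O. Doe"   -- first-name-first
    """
    raw = raw.strip()
    if "," in raw:
        # "Last, First Middle" -- the comma marks the surname
        last, rest = (part.strip() for part in raw.split(",", 1))
        parts = rest.split()
    else:
        # "First Middle Last" -- assume the final token is the surname
        parts = raw.split()
        last, parts = parts[-1], parts[:-1]
    first = parts[0] if parts else ""
    middle = " ".join(parts[1:])
    return last, first, middle

# Both source formats collapse to the same canonical record:
print(normalize_name("Doe, John O."))  # ('Doe', 'John', 'O.')
print(normalize_name("John O. Doe"))   # ('Doe', 'John', 'O.')
```

Once both systems' records map to the same canonical tuple, matching patients across sources becomes a simple equality check.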
The accuracy and reliability that result from normalization are of paramount importance. Normalization makes the information unambiguous.
Pregnant men, smoking babies ...
A misplaced or overlooked checkmark on a patient’s chart or electronic health record can create a cascade of consequences, not just for that patient but also for the physician’s or health system’s reimbursements. That possibility was among the topics of a humorously titled article, “Are Your Reports Showing Pregnant Men and Smoking Babies,” appearing in the online edition of Health IT Outcomes.
What is ‘normal’ data? Normalization can range from straightforward to very difficult:
- Units: Making sure dollar amounts are not in thousands for some data and millions for other data. Is a drug dose recorded in milligrams or milliliters? Obviously, accuracy isn’t important just for normalizing data; it is sometimes the difference between life and death.
- Syntactic structure of the data: For example, last name first or first name first? You must know how each system structures the data in order to normalize it for custom use.
- Semantic structure: How do different sources of data encode structured information? For example, are symptoms all coded using SNOMED? Are they using the same version? Is there a clear mapping between different standards such as SNOMED, ICD-10, and LOINC?
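The unit and semantic-structure layers above can be sketched as explicit lookup tables. Everything below is illustrative: the conversion factors cover only mass units, and the SNOMED-to-ICD-10 pair is a single example entry, not a real crosswalk.

```python
from typing import Optional

# Units: convert mass doses to one canonical unit (milligrams).
UNIT_TO_MG = {"mg": 1.0, "g": 1000.0, "mcg": 0.001}

def dose_in_mg(value: float, unit: str) -> float:
    """Normalize a mass dose to milligrams; reject units we can't map."""
    try:
        return value * UNIT_TO_MG[unit]
    except KeyError:
        # e.g. "mL" is a volume -- converting it needs a drug
        # concentration, not a fixed factor, so fail loudly.
        raise ValueError(f"cannot normalize unit: {unit!r}")

# Semantic structure: map one coding standard to another through an
# explicit crosswalk (this single entry is an illustrative example).
SNOMED_TO_ICD10 = {"44054006": "E11.9"}  # type 2 diabetes mellitus

def to_icd10(snomed_code: str) -> Optional[str]:
    """Return the mapped ICD-10 code, or None when no mapping exists."""
    return SNOMED_TO_ICD10.get(snomed_code)

print(dose_in_mg(0.5, "g"))    # 500.0
print(to_icd10("44054006"))    # E11.9
```

Note the design choice: an unmappable unit raises an error rather than passing through silently, while an unmapped code returns None for downstream review. Silent pass-through is exactly how pregnant men and smoking babies end up in reports.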
Aggregate the data.
This step is crucial for value-based care because it consolidates the data from individual patients into groups or pools of patients. For example, for a pool of 100,000 lives, we can list ages, diagnoses, tests, clinical protocols, and outcomes for each patient. Aggregating the data is necessary before healthcare providers can analyze the overall impact and performance of the whole pool.
If a healthcare organization has quality and cost responsibilities for a pool of patients, it must be able to identify the patients who most affect the pool’s risks. Aggregation and analysis provide that opportunity.
This step is especially crucial for managed care, since managing groups of patients is at its core. Factors for successful data aggregation include: selecting the right groups; matching and error-checking the aggregated data; identifying and managing missing data elements; and using the right tools and systems.
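The move from patient-level records to pool-level measures can be sketched with a simple group-by. The records, field names, and grouping key below are invented for illustration; a real pool would be grouped by risk cohort, attributed provider, or measure population as the program requires.

```python
from collections import defaultdict

# Illustrative patient-level records (entirely made up).
patients = [
    {"id": 1, "diagnosis": "CHF",  "age": 71, "readmitted": True},
    {"id": 2, "diagnosis": "CHF",  "age": 64, "readmitted": False},
    {"id": 3, "diagnosis": "COPD", "age": 58, "readmitted": False},
    {"id": 4, "diagnosis": "COPD", "age": 66, "readmitted": True},
    {"id": 5, "diagnosis": "COPD", "age": 73, "readmitted": True},
]

# Step 1: select the grouping key and pool the records.
pools = defaultdict(list)
for p in patients:
    pools[p["diagnosis"]].append(p)

# Step 2: compute group-level measures from the pooled records.
for dx, group in sorted(pools.items()):
    rate = sum(p["readmitted"] for p in group) / len(group)
    avg_age = sum(p["age"] for p in group) / len(group)
    print(f"{dx}: n={len(group)}, mean age={avg_age:.1f}, "
          f"readmission rate={rate:.0%}")
```

Only after this pooling step can an organization ask questions like "which cohort is driving our readmission penalty?" rather than looking at one chart at a time.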
The Improvement Activities (IA) category of the Merit-based Incentive Payment System (MIPS) has a population health subcategory that consists of 16 population management activities, and population health concepts are woven throughout several other areas of MIPS.
Finally, once the data is located, captured, normalized, and aggregated, the next steps on the way to using the data to improve patient care are to report the data and to understand the data.
Previously in the health data life cycle series ...
Coming up ...
- Who's Checking the Numbers? Time to Report the Data
- Are You Ready? Understand and Act on the Data
- The Health Data Life Cycle: What We've Learned Together