What Factors Can Determine MS-DRG Assignment

What the Heck is a DRG? And Why Should I Care About Case Mix?

So you want to be a coder. And not just that, you want to be a hospital coder because, on average, hospital coders make more money than physician coders. And you don't just want to be a hospital coder, you want to be an inpatient hospital coder because then you get to look at the whole chart and piece together the patient's clinical picture. If this is your goal, know that you won't learn everything you need in school - mainly because there is so much to learn and practical experience is key.

Most of all, if you want to be an inpatient coder, you need to know diagnosis-related groups (DRGs) because in hospitals, it's all about DRGs and case mix - and compliance. If you have no idea what I'm talking about, fear not - here's a primer on DRGs! I wish I could say I cover it all here, but this is just a beginning!

What is a DRG?
The ICD-9-CM coding system contains about 16,000 diagnosis codes and ICD-10-CM contains over 68,000 codes. Imagine trying to determine a payment amount for each individual condition. And that doesn't include accounting for procedures. The most logical solution is to create a system that allows for broader classification of conditions and services for easier comparison and assignment into payment categories. DRGs were created for this purpose. I look at DRGs as a way to "organize the junk drawer" where patients are grouped into different categories based on similar conditions and cost to treat the patient.

History
DRGs were first developed at Yale University in 1975 for the purpose of grouping together patients with similar treatments and conditions for comparative studies. On October 1, 1983, DRGs were adopted by Medicare as a basis of payment for inpatient hospital services in an attempt to control hospital costs. Since then, the original DRG system has been modified and extended by various companies and agencies, and "DRG" has become a rather generic term. These days, various DRG systems are in use - some proprietary and some a matter of public record - all of which group patients in different ways. Two of the main DRG systems currently in use are the Medicare Severity DRGs (MS-DRGs) and 3M's All Patient Refined DRGs (APR-DRGs). Different payers use different DRG systems.

How to Get a DRG
All DRG systems are a little different, but the basic premise is the same. DRGs are based on codes. In effect, DRGs are codes made up of codes. The following elements are taken into consideration when grouping a DRG:
  • ICD-9-CM diagnosis codes
  • ICD-9-CM procedure codes
  • Discharge disposition
  • Patient gender
  • Patient age
  • Coding definitions as defined by the Uniform Hospital Discharge Data Set (UHDDS) - in other words, the sequence of codes on the claim
Back in the 80s, DRGs were grouped manually using decision trees. These days, DRGs are grouped with the touch of a button and DRG groupers are a big part of encoding software. But I would be doing you a disservice if I didn't at least give you an idea of the grouper logic. As I mentioned, there are different DRG systems and probably the most popular is the MS-DRG system, so I will explain how MS-DRG grouper logic works.

MS-DRG Grouper Logic
The first step in assigning an MS-DRG is to classify the case into one of the 25 major diagnostic categories (MDCs). These MDCs are determined by the principal (first) diagnosis and, with a few exceptions, correspond to body systems, such as the female reproductive system. Five MDCs are not based on body systems (injuries, poisonings, and toxic effects of drugs; burns; factors influencing health status (V codes); multiple significant trauma; and human immunodeficiency virus infection). Organ transplant cases are not assigned to MDCs, but are immediately classified based on procedure rather than diagnosis. These are called pre-MDC DRGs.

Once a case has been assigned into an MDC (with the exception of the transplant pre-MDCs), it is determined to be either medical or surgical. Surgical cases require more resource consumption (that's industry speak for "costs more!"), so they must be separated from the medical cases. If there are no procedure codes on the case (e.g., a patient with pneumonia may have no procedure codes), then it's simple - it's a medical case. But if the patient had a procedure, that procedure may or may not be considered surgical. For example, an appendectomy is quite clearly a surgical procedure. But something like suturing a laceration is not. It's all based on resource consumption - the cost of performing the procedure. In general, anything requiring an operating room is surgical.

Quick sidebar here - this is why skin debridement is such a hot topic in the world of coding compliance. Nonexcisional debridement (code 86.28) groups as a medical case. However, excisional debridement (code 86.22) groups as a surgical case and the change in reimbursement is rather drastic.

Okay, so now that we have our MDC and a designation as medical or surgical, we need to look at the other diagnoses on the claim. Right now, Medicare is able to process the first 9 diagnoses on the claim (even though 18 are reportable). These other diagnoses, depending on their severity, may be designated as complications and comorbidities (CCs) or major complications and comorbidities (MCCs). Medicare maintains lists of CCs and MCCs and updates them annually. CCs and MCCs are conditions that have been identified as significantly impacting hospital costs for treating patients with those conditions. For example, it's been determined that congestive heart failure without further specification does not significantly impact costs, so it is not a CC/MCC. However, patients with chronic systolic or diastolic heart failure do have slightly higher costs, so those conditions are CCs. Moreover, patients with acute systolic or diastolic heart failure have even higher costs, so those conditions are designated as MCCs. Are you beginning to see how slight changes in a physician's diagnostic statement impact coding and thus payment?
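
If it helps to see the flow end to end, here's a rough Python sketch of the grouper logic we just walked through. It's purely illustrative - the MDC lookup, surgical-procedure set, and CC/MCC lists below are made-up placeholders, not the real CMS grouper tables:

    # Illustrative sketch only -- NOT the real CMS grouper. The MDC lookup,
    # surgical-procedure set, and CC/MCC lists are made-up placeholders.

    PRE_MDC_PROCEDURES = {"33.50"}            # e.g., lung transplant (ICD-9-CM)
    SURGICAL_PROCEDURES = {"47.09", "86.22"}  # e.g., appendectomy, excisional debridement
    MCC_DIAGNOSES = {"428.21"}                # e.g., acute systolic heart failure
    CC_DIAGNOSES = {"428.22"}                 # e.g., chronic systolic heart failure

    def lookup_mdc(principal_dx):
        # Placeholder: a real grouper maps every diagnosis code to one of 25 MDCs.
        return "MDC 04 (respiratory system)" if principal_dx.startswith("486") else "MDC (other)"

    def assign_drg(principal_dx, secondary_dxs, procedures):
        # Step 0: transplant-type cases skip the MDCs entirely (pre-MDC DRGs).
        if any(p in PRE_MDC_PROCEDURES for p in procedures):
            return {"category": "pre-MDC", "partition": "surgical", "severity": None}
        # Step 1: the principal diagnosis drives the MDC.
        mdc = lookup_mdc(principal_dx)
        # Step 2: an operating-room procedure makes the case surgical; otherwise it's medical.
        partition = "surgical" if any(p in SURGICAL_PROCEDURES for p in procedures) else "medical"
        # Step 3: secondary diagnoses set the severity tier (MCC beats CC).
        if any(dx in MCC_DIAGNOSES for dx in secondary_dxs):
            severity = "with MCC"
        elif any(dx in CC_DIAGNOSES for dx in secondary_dxs):
            severity = "with CC"
        else:
            severity = "without CC/MCC"
        return {"category": mdc, "partition": partition, "severity": severity}

    # A pneumonia patient with acute systolic heart failure and no procedures:
    print(assign_drg("486", ["428.21"], []))
    # {'category': 'MDC 04 (respiratory system)', 'partition': 'medical', 'severity': 'with MCC'}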

DRG Weights
Okay, so we know the MDC, whether the case is medical or surgical, and whether or not there are any CCs or MCCs. How does that translate into reimbursement? Well, if you're using an encoder (and if you code for a hospital, you will), you hit a button and presto! You have a DRG with a relative weight. Now if only you knew what that relative weight meant. The DRG relative weight is the average amount of resources it takes to treat a patient in that DRG. Huh?

Let me demonstrate. The baseline relative weight is 1 and represents average resource consumption for all patients. Anything less than 1 uses less than average resources. Anything above 1 uses more than average resources. So let's compare some respiratory MS-DRGs:
  • MS-DRG for lung transplant has a relative weight of 9.3350
  • MS-DRG for simple pneumonia (no CC/MCC) has a relative weight of 0.7096
  • MS-DRG for chronic obstructive pulmonary disease with an MCC has a weight of 1.1924
You can see how different combinations of codes lead to different MS-DRGs with different relative weights. In order to convert that into monetary terms, we multiply the relative weight by the hospital base rate. Now I'm sure you want to know how to get that hospital base rate. Me too. Well, up to a point. The base rate is specific to each hospital and takes a lot of historical, facility-specific data into account, like what they've been paid in the past, whether they are an urban or rural hospital, and how much the hospital pays out in wages. That's just more math than my poor little head can comprehend! So for the purposes of this exercise, let's pretend this hospital - we'll call it Happyville Hospital - has a base rate of $5,000. If we multiply the relative weights above by $5,000, our reimbursement for those cases, respectively, is $46,675, $3,548, and $5,962.
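
If you want to sanity-check those numbers yourself, here's the same math in a few lines of Python (the $5,000 base rate is, again, just our made-up Happyville figure):

    base_rate = 5000.00  # made-up Happyville Hospital base rate
    relative_weights = {
        "Lung transplant": 9.3350,
        "Simple pneumonia, no CC/MCC": 0.7096,
        "COPD with MCC": 1.1924,
    }
    for drg, weight in relative_weights.items():
        print(f"{drg}: ${weight * base_rate:,.2f}")
    # Lung transplant: $46,675.00
    # Simple pneumonia, no CC/MCC: $3,548.00
    # COPD with MCC: $5,962.00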

Case Mix
You just might be asked in an interview if you understand case mix. It's a good indication of whether or not someone really understands DRGs. And I have to admit, in my sometimes sadistic manner, I like seeing that look of glazed-over confusion on someone's face when I bring up case mix. But case mix is simple. It's the average relative weight for a hospital. So get out a big piece of paper for your hospital and start writing down the relative weights for every single case and then divide to get your average. Okay, so it's computerized now. But that's all case mix is - an average.
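
In code, case mix really is just an average (the weights below are made-up example cases, not real hospital data):

    # Case mix index = mean MS-DRG relative weight across a hospital's cases.
    case_weights = [9.3350, 0.7096, 1.1924, 0.7096, 1.1924]  # made-up example discharges
    case_mix_index = sum(case_weights) / len(case_weights)
    print(f"Case mix index: {case_mix_index:.4f}")  # 2.6278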

In the industry, we officially refer to case mix as the type of patients a hospital treats. Let's say at Happyville, we have a high volume of transplant cases plus a trauma center and a well-renowned cardiac program. These are all highly weighted types of cases, so our overall case mix will be higher than that of, say, Anytown Hospital down the street, which has no trauma center, no transplant program, and only basic cardiac services (they transfer all their serious cardiac cases to Happyville!).

As a coder, you don't need to know what your specific hospital's case mix is at any given time. But knowing what impacts case mix is an indication that you know your stuff. First and foremost, case mix fluctuates. Most hospitals monitor case mix on a monthly basis because changes in case mix are a precursor to changes in reimbursement. Of course your CFO wants case mix to continue to rise, but that could be a red flag. And he certainly doesn't want case mix to fall. If case mix begins to decrease, the first place hospital administration usually looks is coding - after all, case mix is based on DRGs, which are based on codes. But there are lots of things that can impact case mix and many of them have nothing to do with coding, such as:
  • The addition or removal of a heavy admitting physician - especially specialty surgeons
  • Opening or closing a specialty unit
  • Changes in a facility's trauma level designation
  • Movement of cases from the inpatient setting to outpatient, and
  • Anything else that impacts the type of services the hospital provides

Your Life as an Inpatient Coder
As an inpatient coder, your job is to make sure you get all the codes on the claim in the correct order so that the accurate DRG is assigned and the hospital gets paid appropriately. When I put it that way, it sounds so easy! The reality is, with more and more patients being treated as outpatients, those who are admitted as inpatients are sicker than they've ever been. And sicker means harder to code. For instance, a patient comes in with shortness of breath, and the final diagnosis is acute exacerbation of COPD, staphylococcal pneumonia, and respiratory failure. How you code and sequence the case will determine the appropriate DRG and reimbursement. The good news is, you'll have an encoder to help you model the DRGs and see what pays what. The bad news is, you have to paw through the medical record to determine the true underlying cause of that shortness of breath.

So are you ready for the challenge? Are you ready to apply DRGs?

Improving and Measuring Inpatient Documentation of Medical Care within the MS-DRG System: Education, Monitoring, and Normalized Case Mix Index

Summer 2014

by Benjamin P. Rosenbaum, MD; Robert R. Lorenz, MD, MBA; Ralph B. Luther, MBA; Lisa Knowles-Ward, RHIT, CCS; Dianne L. Kelly, RN; and Robert J. Weil, MD

Abstract

Documentation of the care delivered to hospitalized patients is a ubiquitous and important aspect of medical care. The majority of references to documentation and coding are based on the Centers for Medicare and Medicaid Services (CMS) Medicare Severity Diagnosis Related Group (MS-DRG) inpatient prospective payment system (IPPS). We educated the members of a clinical care team in a single department (neurosurgery) at our hospital. We measured subsequent documentation improvements in a simple, meaningful, and reproducible fashion. We created a new metric to measure documentation, termed the “normalized case mix index,” that allows comparison of hospitalizations across multiple unrelated MS-DRG groups. Compared to one year earlier, the traditional case mix index, normalized case mix index, severity of illness, and risk of mortality increased one year after the educational intervention. We encourage other organizations to implement and systematically monitor documentation improvement efforts when attempting to determine the accuracy and quality of documentation achieved.

Keywords: technical documentation; case mix index; severity of illness; risk of mortality

Introduction

Documentation is an important aspect of medical care. In addition to clinical communication, documentation is coded to provide data that support quality metrics, acuity of care, billing, and accurate representation of medical conditions. Many clinicians are not well versed in the system by which acuity of patient care and inpatient technical billing (that is, nonprofessional services) are determined. After healthcare organizations codify documentation, payers most often reimburse for services on the basis of the Centers for Medicare and Medicaid Services (CMS) Medicare Severity Diagnosis Related Group (MS-DRG) inpatient prospective payment system (IPPS). Our project focused on the MS-DRG system.

The MS-DRG system classifies a hospitalization into a base MS-DRG derived from the patient’s principal diagnosis and/or principal procedure. Coding professionals identify diagnoses and procedures after reviewing clinicians’ documentation of patient care, typically after a patient is discharged. Currently, the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) ontology is used in the United States to map documentation to diagnosis and procedure codes reported to quality organizations and payers. Each MS-DRG has a number of parameters: medical or surgical type, relative weight, geometric mean length of stay (LOS), and arithmetic mean LOS. Most often (with some exceptions), MS-DRGs belong to a related group consisting of the base MS-DRG, the base MS-DRG plus complication or comorbidity (CC), or the base MS-DRG plus major complication or comorbidity (MCC). A patient’s hospitalization is classified into the base MS-DRG and changed to CC or MCC if a qualifying secondary diagnosis is present. In addition, the MS-DRG relative weight and LOS for CC and MCC assignments are correspondingly higher than the base assignment. Technical reimbursement is determined by multiplying the MS-DRG relative weight by a conversion factor unique to each hospital.

The CMS MS-DRG system is the most widely used technical inpatient billing and reimbursement standard in the United States. Such systems have evolved over time, with annual updates (the 2013 MS-DRG was version 30). An analogous system also used is the All Patient Refined Diagnosis Related Group (APR-DRG) system, created by 3M (3M Health Information Systems, Salt Lake City, Utah). Associated with 3M APR-DRGs are measures of severity of illness (SOI) and risk of mortality (ROM), both classified into nominal, not ordinal, subclasses one through four.

Because many clinicians are not aware of the foundations and permutations of the MS-DRG system, gaps may exist in reporting quality metrics, acuity of care, and even reimbursement for the medically indicated care that was delivered. Focusing solely on revenue capture or improving case mix index (CMI) has been discussed, but caution is recommended when considering such a singular goal.1–5 Hicks and Gentleman described a clinical documentation management program utilizing nurses trained as clinical documentation consultants that resulted in improved clinical documentation in the medical record.6 Similarly, Cleveland Clinic created a successful clinical documentation improvement (CDI) department in 2002, utilizing nurses, physicians, and health information management professionals who clarify uncertainties and assist clinicians to improve documentation accuracy while patients are still hospitalized.

In addition to CDI efforts, educating clinicians is an important step for an organization to enhance and ensure useful and appropriate documentation of the medically indicated care that is delivered. Slaughter and Willner demonstrated that analyzing documentation patterns improved documentation and created dialogue between coding and clinical teams.7 Others showed that bringing clinicians and coding professionals together resulted in a reduction of complication rates.8 We present a project focused on educating a department of clinicians (neurosurgery) at our organization in the MS-DRG system through a joint effort with physician champions, CDI staff, and coding professionals. We describe the measurement of subsequent documentation improvements in a simple, meaningful, and reproducible fashion.

Methods

Identifying Opportunity

As part of a quality-improvement process designed to enhance clinical documentation, we used CareFX (Harris Healthcare Solutions, Scottsdale, Arizona) software to identify departments within our organization with the highest opportunity to improve documentation compared to national benchmarks based on self-reported, anonymized data shared with the University HealthSystem Consortium (UHC).9 We identified MS-DRGs that showed a large discrepancy between current documentation and UHC national benchmarks.

At our institution, the neurosurgical service demonstrated a potential gap between the large number of critically ill patients treated and the national benchmarks for case mix seen at peer institutions in the UHC cohort. We identified 14 potential groups of MS-DRGs that encompassed the majority of patients treated on the neurosurgical service (see Table 1) and reviewed the distribution of these cases as a whole and individually within each group of MS-DRGs. Within each group of MS-DRGs, we identified common diagnoses that qualified for classification as CC or MCC. In conjunction with the CDI and coding teams, we also identified diagnoses that affect the APR-DRG SOI and ROM. Collectively, we identified the conditions and diagnoses that were most commonly encountered in neurosurgical patients and were important to recognize and treat, when necessary, so as to enhance patient care.

CareFX served primarily to identify the service lines (such as neurosurgery) where the greatest documentation improvement opportunity existed to help prioritize efforts. All service lines could benefit from a documentation improvement project, however. At other institutions, the service lines may be targeted based on local knowledge (for example, CDI experience), departmental interest, and resource availability. Physician champions (BPR and RJW), the CDI staff, and coding teams determined the diagnoses targeted for improvement by analyzing the diagnoses most commonly encountered in each group of MS-DRGs at our institution historically.

Educating Clinicians

Next, we conducted educational sessions, led by physician champions (BPR and RJW) within the neurosurgery department to inform clinicians about the MS-DRG system. The goal was to provide tools for clinicians to focus their attention on relevant, common, and important conditions that affect the care and outcomes of neurosurgical patients—information that is important to share with all members of the medical team. Each individual received a pocket card, shown by others to improve knowledge,10 to serve as an ongoing documentation improvement reference and reminder at the end of the session. No changes were made within the organization’s electronic health record (EHR), which is a fully functional EHR (Epic MyPractice, Epic Systems Corporation, Verona, Wisconsin), or to EHR functionality. All changes in documentation were a result of active engagement by clinicians. Clinicians included attending physicians, fellows, resident physicians, physician assistants (PAs), and nurse practitioners (NPs). As at most institutions, our institution approaches patient care in a team-based fashion, and multiple individuals interact with patients and document care in the EHR.

Normalized Case Mix Index

Traditionally, organizations analyze MS-DRG data by comparing the mean MS-DRG weight, termed the CMI, over time. Unfortunately, however, CMI is a less than ideal metric for determining whether documentation efforts are producing improvement. Because MS-DRG weights are relative across medical and surgical MS-DRG assignments, a higher mix of medical (nonsurgical) patients can negatively influence the CMI, even with improved documentation within each group. As a result, we propose and discuss a new metric, the normalized CMI, for comparison across unrelated MS-DRGs.

The normalized MS-DRG weight (or normalized CMI) corresponds to the typical doublet or triplet of related MS-DRGs. Normalizing the weight and comparing the mean (that is, the normalized CMI) allows for comparison of dissimilar MS-DRGs on a related basis. CMI may be otherwise difficult to compare for dissimilar MS-DRGs, particularly when determining if documentation changed at the onset of a targeted improvement project.

The normalized MS-DRG weight is determined by taking the difference between the actual MS-DRG weight and the minimum MS-DRG weight and dividing by the maximum minus the minimum MS-DRG weight in the related group as shown in Equation 1. The minimum and maximum values are found within the related MS-DRG doublet or triplet.
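
Written out, the calculation described above (Equation 1, reconstructed here from that description) is:

    normalized weight = (actual weight - minimum weight) / (maximum weight - minimum weight)

where the minimum and maximum are taken within the related MS-DRG doublet or triplet, and the result is reported on a 0 to 100 scale (so the base MS-DRG scores 0 and the base plus MCC MS-DRG scores 100).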

Table 2 illustrates the normalized weight for two MS-DRG groups. Note that the normalized weight will always be 100 for the base plus MCC MS-DRG and 0 for the base MS-DRG. After determining individual MS-DRG normalized weight, analogous to calculating the traditional CMI, the normalized CMI is then computed as the mean normalized MS-DRG weight across multiple hospitalizations of interest. The resultant normalized CMI helps identify, regardless of patient mix (medical or surgical), whether the documentation itself improves.
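
As a minimal illustration, the calculation can be scripted as follows; the relative-weight triplet and the list of discharges below are hypothetical values, not actual MS-DRG weights:

    from statistics import mean

    # Hypothetical related triplet of relative weights: base, base + CC, base + MCC.
    RELATED_WEIGHTS = {"base": 1.0, "with CC": 1.4, "with MCC": 2.0}

    def normalized_weight(actual, related=RELATED_WEIGHTS):
        # (actual - minimum) / (maximum - minimum), scaled to 0-100 within the related group.
        lo, hi = min(related.values()), max(related.values())
        return 100.0 * (actual - lo) / (hi - lo)

    # Normalized CMI = mean normalized weight across the hospitalizations of interest.
    discharges = [1.0, 1.4, 2.0, 2.0, 1.0]  # actual weights for five hypothetical stays
    print(f"Normalized CMI: {mean(normalized_weight(w) for w in discharges):.1f}")  # 48.0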

Measuring Progress

Using organizational billing data, we measured the effects of educating the clinicians within the neurosurgery department. The data contain patient demographics, procedures, diagnoses, and acuity of care details (LOS, MS-DRG, MS-DRG weight, SOI, ROM). Such data are common to all healthcare organizations. We created a web-based inpatient hospitalization documentation analysis tool to help analyze historical and recent trends in documentation. The tool fills a significant gap by enabling the analysis of whether documentation improvement efforts are successful from a variety of measures (such as quality, acuity, and reimbursement).

The tool is password protected and was developed (by BPR) using open-source software (PHP and MySQL). Twice monthly (or however frequently desired), discharge-level financial data (billed data) are downloaded from a central financial application (Allscripts EPSi, Chicago, Illinois) and loaded into the tool. Annually, updates to the MS-DRG definitions are incorporated into the tool. Discharge-level data are analyzed for custom-defined groups of physicians (the clinicians targeted for documentation improvement) after a patient is discharged. The tool provides a retrospective review of recent hospitalizations after discharge to understand how well documentation was performed and coded. Time from discharge to inclusion in the analysis is as low as three days. As a result, we provided timely feedback directly to the clinicians to improve future clinical care delivery. The software was also implemented throughout the organization beyond the neurosurgery department to validate its functionality. Statistical reporting was validated using JMP Pro 10.0.2 (SAS Institute, Inc., Cary, North Carolina).

We met frequently with the CDI and coding departments within the organization to clarify gaps in our knowledge of the MS-DRG coding process and possible discrepancies in the diagnoses documented and those coded in the final bill. As a result, we jointly identified several process issues.

The data obtained and presented in this article represent the situation one year after the educational initiative compared to one year before it. All data were obtained retrospectively with institutional review board (IRB) approval. Because of the sensitive nature of the data, only relative changes are presented as determined by statistical significance (increase or decrease) or lack thereof.

Statistical Methods

Statistical analyses were conducted using JMP Pro 10.0.2. A p-value of .05 was used to determine statistical significance. Analyses conducted included Student’s t-test (for mean values) and chi-square testing (for categorical values). Variables independently analyzed included CMI, normalized CMI, patient age, LOS, number of ICD-9-CM diagnosis codes, MS-DRG assignment, APR-DRG ROM and SOI, and payer type. Assuming variable independence, in addition to unadjusted independent Student’s t-test and chi-square testing, we employed a conservative adjustment for multiple comparisons using the Holm-Bonferroni sequential correction procedure, stopping if adjusted p-values became greater than .05.11, 12
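
For reference, the following is a minimal sketch of the Holm-Bonferroni sequential procedure cited above (Holm 1979); the p-values in the example are arbitrary and do not correspond to the study's results:

    def holm_bonferroni(p_values, alpha=0.05):
        # Test the smallest p-value against alpha/m, the next against alpha/(m-1), and so on,
        # stopping at the first non-significant comparison (the sequential rule).
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        reject = [False] * m
        for rank, idx in enumerate(order):
            if p_values[idx] <= alpha / (m - rank):
                reject[idx] = True
            else:
                break
        return reject

    print(holm_bonferroni([0.001, 0.012, 0.03, 0.20]))  # [True, True, False, False]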

Results

The neurosurgery department was the primary discharging service for 3,323 inpatients in the year before the educational intervention and 3,334 patients in the year after the educational intervention. Table 3 illustrates the relative changes of various relevant documentation metrics before and after the educational intervention. After one year, the following mean values increased as indicated by unadjusted p-values: CMI, normalized CMI, MS-DRG, expected LOS (arithmetic and geometric), and number of ICD-9-CM diagnosis codes, whereas the following mean values remained unchanged: patient age and actual LOS. In addition, after one year, the following percentages increased as indicated by unadjusted p-values: CC or MCC, MCC, APR-DRG ROM (3 or 4), APR-DRG SOI (3 and 3 or 4). The following percentages remained unchanged: medical MS-DRG, APR-DRG ROM of 4, and Medicare or Medicaid payer type. No values decreased.

The number of diagnosis codes evaluated was based on institutional internal data. The maximum number of associated codes for the hospitalizations analyzed was 52 (median 8, mode 5, mean 9.5, and standard deviation 6.5).

Discussion

The calculated metrics (see Table 3) illustrate that the documentation improvement project and evaluation tool were successful. First, we demonstrated that important measures of patient acuity (CMI, normalized CMI, SOI, and ROM) improved, indicating that documentation after the educational intervention more accurately reflected the patient’s medical condition. In such a retrospective analysis, it is possible that the “before” and “after” patient cohorts differ in their fundamental makeup, resulting in bias. Such bias could lead to erroneously ascribing improvement in acuity to the documentation improvement effort. In our case, we found no change in actual LOS, patient age, medical MS-DRGs, neurosurgery MS-DRGs (see Table 1 for definitions), or payer. In addition, the large patient volume (3,323 before; 3,334 after) helped to eliminate such a bias. As a result, we interpret that the patient cohorts are largely identical and that the improvements seen were due to the documentation improvement effort alone.

Another confirmation, we believe, of documentation improvement was that SOI assignments of 4 and 3 or 4 both increased (unadjusted p-values .034 and .0089, respectively), whereas only ROM of 3 or 4 increased (unadjusted p-value < .0001), as seen in Table 3. ROM of 4 (the highest nominal assignment) remained unchanged after the educational intervention. Given the aforementioned reasons for our belief that the fundamental makeup of each patient cohort (before and after) did not change, we would expect that the overall mortality incidence and antecedent risk would also remain unchanged. As a result, the data corroborate the intuition that such a documentation improvement effort would be and was more effective at accurately improving severity markers (that is, acuity of patient condition/care) as opposed to mortality markers.

Aside from evaluating numerical metrics, we learned important lessons that are applicable to other organizations. During the quality improvement project, we reviewed individual patient charts using the tool to identify ongoing areas for improvement. After reviewing individual patient charts and meeting with the CDI and coding departments, we learned that important, meaningful patient data that may be found in common locations in the EHR (for example, body mass index [BMI]) are often overlooked, even though the data represent factors that significantly influence care. For example, a BMI greater than 40 kg/m2 can make a surgical procedure more technically demanding or alter the intensity of nursing care that must be delivered. As a result, we educated the clinicians, CDI, and coding professionals on where to find important patient data elements that affect documentation.

The newly proposed normalized MS-DRG weight (normalized CMI) is a valuable measure to identify whether targeted documentation improvement efforts are effective. In our case, both the CMI and normalized CMI improved (unadjusted p-values .0074 and < .0001, respectively), with the degree of statistical significance reflecting the certainty of the improvement. When the project was first underway (in the first several weeks and months), no or minimal statistically significant improvement in CMI was seen; however, the normalized CMI did show statistically significant improvement. In addition, we noted that during certain periods (two-week intervals) when the CMI decreased compared to historical values, the normalized CMI remained high, which indicated continued improved documentation despite the change in patient mix. Other departments or physician cohorts have a more varied mix of medical and surgical MS-DRG patient populations. Such populations’ relative weight assignment would be more closely distributed, making the CMI a less reliable metric of improvement. The normalized CMI can be used to confirm documentation improvement. Normalized CMI, particularly at the onset of the project, validated our efforts to educate clinicians who, in turn, improved patient documentation.

In identifying neurosurgery as an area of opportunity, we knew that a small number of diagnoses were frequently overlooked or obscure to clinicians. The electronic tool we created allowed us to specifically analyze selected MS-DRGs according to normalized CMI and the myriad metrics (see Table 3) to demonstrate that documentation improved. Thus, we propose that quality education focusing on relevant, common misses is more important than quantity of education. In addition, we found it most helpful to educate and interact with the residents, NPs, and PAs who, at our institution, do the majority of detailed inpatient documentation.

Furthermore, our interaction with the departments responsible for coding patient documentation was invaluable. Both the project team and the coding teams learned important information. In these interactions, we identified areas in which to clarify important clinical diagnoses and scenarios for the coding team. In addition, the clinicians learned about the intricacies of technical coding. The CDI team learned more about the clinical workflow and improved ways of communicating with clinicians to clarify ambiguous or missing elements within the documentation. In addition, we jointly created a formal institutional definition of disease states qualifying for a relatively ambiguous ICD-9-CM diagnosis code.

Beyond education of the clinical documenters, we found that the process of measuring the changes in documentation was an important step to validate the process improvement. Educating clinicians is important, but so too is understanding if such efforts resulted in any changes. Finally, we did not observe any negative effects on quality metrics during the same time interval, which was a theoretical concern of improving the number and type of documented diagnoses.

Conclusions

Clinical documentation is the cornerstone of recording a patient’s current condition, relevant diagnoses, and communication between clinicians. As healthcare data analysis, quality metrics, and reimbursement continue to advance, knowledge of important documentation concepts must also advance. Patient complications may be judged against a patient’s assigned MS-DRG, SOI, or ROM, and incomplete or ambiguous documentation or coding efforts may render such judgments difficult to decipher. Understanding and improving clinical documentation to reflect a patient’s medical condition accurately and precisely has the potential to affect all such metrics. Proper and complete documentation also has important implications for the utility and reproducibility of data in large multi-institutional data sets such as the Medicare Provider Analysis and Review (MEDPAR) File, American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP), Nationwide Inpatient Sample (NIS), and UHC data that are used both for clinical research and to determine local, state, and national healthcare policy.13–16

The clinicians (BPR, RRL, RJW) benefited significantly by working directly with the coding and documentation improvement professionals (LKW, DLK), who provided valuable insights into how to enhance documentation in ways that improved communication in the medical record. Although the coding system is complex, the basic understanding that documentation must be accurate and thorough is simple to convey to clinicians. Education on the overall importance provides incentive to improve efforts given that there is no necessary, direct patient-by-patient penalty for poor documentation. Feedback about ongoing progress and improvement also provides incentive to continue.

Further efforts at our organization and others may focus on implementing some of the principles of MS-DRG assignment within the EHR to help clinicians effectively document relevant, common diagnoses that have an impact on overall metrics. For example, Grogan et al. demonstrated improvements in documentation with a progress note template in 2004.17 It is important that such efforts not focus solely on improving financial reimbursement. Most important is the need to ensure accurate clinical documentation that represents the patient’s level of acuity and clinical condition effectively.

 

Benjamin P. Rosenbaum, MD, is a neurosurgery resident at Cleveland Clinic in Cleveland, OH.

Robert R. Lorenz, MD, MBA, FACS, is an otolaryngologist and medical director of payment reform, risk, and contracting at Cleveland Clinic in Cleveland, OH.

Ralph B. Luther, MBA, is a process improvement specialist at Cleveland Clinic in Cleveland, OH.

Lisa Knowles-Ward, RHIT, CCS, is director of coding and reimbursement at Cleveland Clinic in Cleveland, OH.

Dianne L. Kelly, RN, is director of clinical documentation improvement at Cleveland Clinic in Cleveland, OH.

Robert J. Weil, MD, MBA, FACS, is a neurosurgeon who previously worked at Cleveland Clinic in Cleveland, OH. He now works at Geisinger Health System in Danville, PA, as a neurosurgeon, chief medical executive for the northeast region, and associate chief scientific officer.

Notes

1. Barnes, S. L., M. Waterman, D. Macintyre, J. Coughenour, and J. Kessel. “Impact of Standardized Trauma Documentation to the Hospital’s Bottom Line.” Surgery 148, no. 4 (2010): 793–97, discussion 797–98.

2. Mendez, C. M., D. W. Harrington, P. Christenson, and B. Spellberg. “Impact of Hospital Variables on Case Mix Index as a Marker of Disease Severity.” Population Health Management 17, no. 1 (2014): 28–34.

3. Mookherjee, S., A. R. Vidyarthi, S. R. Ranji, J. Maselli, R. M. Wachter, and R. B. Baron. “Potential Unintended Consequences Due to Medicare’s ‘No Pay for Errors’ Rule? A Randomized Controlled Trial of an Educational Intervention with Internal Medicine Residents.” Journal of General Internal Medicine 25, no. 10 (2010): 1097–1101.

4. Richter, E., A. Shelton, and Y. Yu. “Best Practices for Improving Revenue Capture through Documentation.” Healthcare Financial Management 61, no. 6 (2007): 44–47.

5. Steinwald, B., and L. A. Dummit. “Hospital Case-Mix Change: Sicker Patients or DRG Creep?” Health Affairs 8, no. 2 (1989): 35–47.

6. Hicks, T. A., and C. A. Gentleman. “Improving Physician Documentation through a Clinical Documentation Management Program.” Nursing Administration Quarterly 27, no. 4 (2003): 285–89.

7. Slaughter, J., and S. Willner. “A Successful Methodology for Improving the Quality of Clinical Data.” Journal of AHIMA 67, no. 10 (1996): 46–48, 50–51.

8. “Involve Physicians and Coders in Measuring and Improving Clinical Outcomes.” Hospital Peer Review 25, no. 2 (2000): 17–21.

9. UHC. “University HealthSystem Consortium.” Available at http://www.uhc.edu.

10. Mikhael, J., L. Baker, and J. Downar. “Using a Pocket Card to Improve End-of-Life Care on Internal Medicine Clinical Teaching Units: A Cluster-Randomized Controlled Trial.” Journal of General Internal Medicine 23, no. 8 (2008): 1222–27.

11. Abdi, H. Holm’s Sequential Bonferroni Procedure. Thousand Oaks, CA: Sage, 2010.

12. Holm, S. “A Simple Sequentially Rejective Multiple Test Procedure.” Scandinavian Journal of Statistics 6 (1979): 65–70.

13.  Healthcare Cost and Utilization Project (HCUP). “NIS Database Documentation.” December 2013. Available at http://www.hcup-us.ahrq.gov/db/nation/nis/nisdbdocumentation.jsp.

14. Centers for Medicare and Medicaid Services. “Medicare Provider Analysis and Review (MEDPAR) File.” December 2013. Available at http://www.cms.gov/Research-Statistics-Data-and-Systems/Files-for-Order/IdentifiableDataFiles/MedicareProviderAnalysisandReviewFile.html.

15.  American College of Surgeons. “ACS NSQIP: Surgical Quality Improvement.” Available at http://site.acsnsqip.org/.

16. UHC. “University HealthSystem Consortium.”

17. Grogan, E. L., T. Speroff, S. A. Deppen, C. L. Roumie, T. A. Elasy, R. S. Dittus, S. T. Rosenbloom, and M. D. Holzman. “Improving Documentation of Patient Acuity Level Using a Progress Note Template.” Journal of the American College of Surgeons 199, no. 3 (2004): 468–75.

 

Benjamin P. Rosenbaum, MD; Robert R. Lorenz, MD, MBA; Ralph B. Luther, MBA; Lisa Knowles-Ward, RHIT, CCS; Dianne L. Kelly, RN; and Robert J. Weil, MD. “Improving and Measuring Inpatient Documentation of Medical Care within the MS-DRG System: Education, Monitoring, and Normalized Case Mix Index.” Perspectives in Health Information Management (Summer 2014): 1-11.
