Data acquisition and analysis are not solely IT/IS functions; rather, the ED leader should become intimately involved in reviewing this critical information. Specific ED data sources and their benefits have been highlighted to emphasize the richness of the key performance metrics that may be produced. It is recommended that ED leaders first obtain and analyze the metrics required by CMS and other external agencies before moving to production of other KPIs.10
To guide data analytics efforts, the ED team should:
- Apply departmental and institutional “management goals” as guides for KPI identification
- Determine the dashboards, reports, and metrics that are currently and easily available from the computer system's reporting tools
ED leaders may mistakenly discount the value of retrospective analytics and their ability to drive behavior change. Historical information can be of great importance when developing department goals and translating those goals into KPIs, dashboards, or reports. Certainly, real-time analytics are the most immediate way for ED staff to recognize and redirect poor performance and outcomes. However, retrospective analytics offer insight into the longitudinal state of KPIs, allowing identification of trends and of the longer-term interventions necessary to support management goals.
When translating a goal into a KPI, the ED leader should consider the following:
- What are the obvious “pain points” in the ED? What processes do the team members describe as requiring improvement?
- Are the “pain points” measurable?
- In what computer system (EHR, billing, and so on) does this measurable data exist and what is the best method for obtaining it?
- Can vendor standard reports provide needed data on these KPIs?
- Is the data on these standard reports “clean”?
When developing departmental KPIs, a best practice is to take an inventory of standard reports available in each computer system. This will help the ED leaders quickly understand what is currently being measured. The more valuable reports provide actionable evidence to give feedback to staff and to recommend process improvements.
While taking standard report inventory to support KPI production and monitoring, it is important to develop an understanding of the underlying detailed data and the imperfections of the data used to generate the aggregated statistics (mean, sum, median values, etc).
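The point about imperfect underlying data can be made concrete with a small, hedged example: a single erroneous time stamp shifts an aggregated mean far more than the median, which is one reason the detailed data behind a standard report deserves scrutiny. The values below are invented for illustration.

```python
from statistics import mean, median

# Hypothetical door-to-room times in minutes; 540 represents a data
# entry error (eg, a room time stamp logged hours late).
times = [12, 15, 18, 22, 25, 30, 540]

print(f"mean:   {mean(times):.1f} min")  # pulled sharply upward by the bad value
print(f"median: {median(times)} min")    # barely affected by the bad value
```

This is why a report's choice of aggregate (mean vs median) and its handling of outliers should be documented alongside the metric itself.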
It is important to assess the “cleanliness” of the data reported on standard reports. Too often the information provided may not actually measure the intended data. A common example is found when comparing billing, EHR, and clerical data on simple volume metrics, such as overall volume, admissions, patients leaving AMA, patients leaving prior to medical screening examination, and so on. It is surprising how different these simple statistics can be, and yet many rely on this information and make critical judgments about its significance.
ED leaders should scrutinize the reported information carefully and work closely with IT staff and IT vendors to clarify the logic processes underlying the metrics in standard reports. It is helpful to specifically note data exclusions and filters and to make this information available to those viewing and interpreting KPIs.
Patient volume offers an example of underlying logic assumptions that may produce “unclean” results. Questions that should be raised to clarify this data include:
- Have “voided patients” been removed from the total count? (Voided patients refer to data entry errors of patient records, ie, patients registered by mistake.)
- Have patient “duplicate entries” been removed? (Duplicate patients are those with the same chart number listed twice.)
- Have patient “direct admits” been removed? (These patients may not have been provided ED services.)
- Are patients who triage/register and leave prior to medical screening examination included in total count statistics?
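The volume-cleaning questions above can be expressed as explicit filters. This is a minimal sketch, not any vendor's actual logic; the field names (`status`, `chart_number`, `arrival_type`, `seen_by_provider`) are assumptions and will differ in any real EHR or billing extract.

```python
# Apply the four volume-cleaning questions as explicit, auditable filters.
def clean_volume(visits, include_lpmse=True):
    seen_charts = set()
    kept = []
    for v in visits:
        if v["status"] == "voided":              # registered by mistake
            continue
        if v["chart_number"] in seen_charts:     # duplicate entry
            continue
        if v["arrival_type"] == "direct_admit":  # no ED services provided
            continue
        if not include_lpmse and not v["seen_by_provider"]:
            continue                             # left prior to MSE
        seen_charts.add(v["chart_number"])
        kept.append(v)
    return kept

visits = [
    {"chart_number": 1, "status": "active", "arrival_type": "ed", "seen_by_provider": True},
    {"chart_number": 1, "status": "active", "arrival_type": "ed", "seen_by_provider": True},
    {"chart_number": 2, "status": "voided", "arrival_type": "ed", "seen_by_provider": False},
    {"chart_number": 3, "status": "active", "arrival_type": "direct_admit", "seen_by_provider": True},
    {"chart_number": 4, "status": "active", "arrival_type": "ed", "seen_by_provider": False},
]
print(len(clean_volume(visits)))                       # counts charts 1 and 4
print(len(clean_volume(visits, include_lpmse=False)))  # counts chart 1 only
```

Making each exclusion a named, visible step is the practical value here: the answer to each question becomes a line of logic that IT staff, vendors, and ED leaders can inspect together.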
As another example, when reviewing “door-to-provider” times, it is important to understand how this metric's cleanliness is influenced by data entry methodologies. Questions that may be asked to clarify this data include:
- How are arrival times being obtained?
- How is a provider defined (physician, MLP, and/or resident)?
- What interaction (triage nurse, greeter, kiosk, sign-in sheet) with what computer system creates the time stamp being used?
- What happens when the time increment is calculated as a negative number?
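A minimal sketch of the door-to-provider calculation illustrates the last question above. How the two time stamps are captured (kiosk, greeter, EHR provider assignment) is exactly what the other questions probe; this code only shows why negative increments should be flagged for review rather than averaged in, and all names here are illustrative assumptions.

```python
from datetime import datetime

def door_to_provider_minutes(arrival, provider_assigned):
    minutes = (provider_assigned - arrival).total_seconds() / 60
    if minutes < 0:
        # The provider time stamp precedes arrival, which is almost
        # always a registration or documentation error; flag it rather
        # than letting it silently distort the average.
        return None
    return minutes

fmt = "%Y-%m-%d %H:%M"
arrival = datetime.strptime("2024-03-01 10:05", fmt)
print(door_to_provider_minutes(arrival, datetime.strptime("2024-03-01 10:35", fmt)))  # 30.0
print(door_to_provider_minutes(arrival, datetime.strptime("2024-03-01 09:50", fmt)))  # None
```

Whether flagged values are excluded, capped, or sent back to staff for correction is a policy decision, but it must be a documented one.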
The translation of management goals into KPIs, and of KPIs into supporting data, may reflect a variable degree of inaccuracy due to human data entry error (eg, a physician or nurse forgetting to log an accurate room time). While these inaccuracies may at first be discouraging, they are in fact routine and afford an opportunity to fine-tune the system so that it provides accurate and trusted data. High-performance ED leaders educate staff on the importance of accurate data, as it allows the team to assess its performance and continually improve.
The ED leader should not be surprised to find that the initial standard report inventories developed by the IT staff often do not fully satisfy the immediate KPI requirements.
- For example, a large ED expansion may have occurred since the hospital purchased its EHR. The new management goals could require separate KPIs for the fast track and several other ED subunits.
- Another example of a changing service requiring additional measurements could occur if the hospital expands its ED psychiatric services. This new service would require differentiation of service data between mental health and non–mental health patients.
With changing needs and services, the standard reports provided during implementation of the EHR are likely to require modifications, add-ons, patches, or even new software programs. The ED leader should not become discouraged by the dynamic nature of information gathering and associated analytics. Options (discussed later) are available to address “new” analytic requirements not supported by the original vendor software.
Finally, as the old adage “garbage in, garbage out” suggests, the integrity of the entered data determines the usefulness of the results. Therefore, data quality and compliant use of the source computer systems should become obsessions of ED managers. However, the definition of quality data varies depending on the metric being analyzed. The following section reviews the definition and documentation of data quality and the creation of a definition list for the KPIs used in dashboards and other important reports.
The previous section discussed the importance of data quality when producing KPIs in support of management goals. This section presents the concept that data definitions are crucial to building departmental and administrative acceptance of information reports and their use. Without acceptance of and consensus on data definitions, KPIs will be challenged, discounted, and possibly ignored by those whose behavior the ED leaders wish to change.
A best practice is to define each KPI's primary objective, the system and/or report from which the KPI is produced, and all related data logic inclusions and exclusions. It is during this definition exercise that problems with source system data entry are identified. Data management experts refer to such data definitions as “metadata.”11
Metadata creation is a data management best practice used to:
- Remove all questions regarding why and what information is collected.
- Determine how data should be collected and stored, its attributes, its desired behavior, and its relationship to other data.
- Categorize data elements as either:
  - Dimensional information (items that can be sliced and diced into various components, eg, department, nurse, physician)
  - Metric information (items that can be used in mathematical equations, eg, visits, length of stay, charges)
The example in Table 62-2 demonstrates that a single metric can be defined from multiple angles, including its format, frequency, and targeted values. Having each KPI defined in this manner eliminates confusion regarding its use and how its quality is affected by poor source system compliance. Over time, it is common for “Lean” staff to require the production of metadata before CQI metrics are presented at management meetings.
Table 62-2 Sample of Metadata Documentation for the Metric “Patient Arrival to Room”

- Management goal: Decrease door-to-room time (arrival, preregistration, and triage)
- Source system to isolate data: EDIS
- Data currently available: By day, week, and month
- Metric definition: A time in minutes between the patient's arrival time and placement into a treatment care space
- Additional ways metric is to be reported: By provider (physician or allied health professional), ED area, nurse, time of day, day of week, acuity level, ICD coding level
- Assumed impact on other key performance metrics: Decreasing this time will positively impact patients leaving prior to medical screening examination (LPMSE) as well as patient satisfaction results
- Patient exclusions from metrics: Exclude all patients who arrive by ambulance. Due to potential human data entry error when using the EDIS, eliminate values <0 or >600 minutes; however, report these invalid time increments at the management level (and to nurses and physicians) to improve compliance
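The exclusions documented in Table 62-2 translate directly into filter logic. The sketch below assumes hypothetical record fields (`arrival_mode`, `door_to_room_min`), not actual EDIS field names, and simply separates reportable values from the invalid increments that the metadata says should be surfaced to management rather than averaged in.

```python
# Apply the Table 62-2 exclusions to "patient arrival to room" values.
def split_door_to_room(records):
    included, invalid = [], []
    for r in records:
        if r["arrival_mode"] == "ambulance":  # excluded per the metadata
            continue
        minutes = r["door_to_room_min"]
        if minutes < 0 or minutes > 600:      # likely data entry error
            invalid.append(r)                 # report, do not average in
        else:
            included.append(minutes)
    return included, invalid

records = [
    {"arrival_mode": "walk_in", "door_to_room_min": 25},
    {"arrival_mode": "ambulance", "door_to_room_min": 5},
    {"arrival_mode": "walk_in", "door_to_room_min": -12},
    {"arrival_mode": "walk_in", "door_to_room_min": 40},
]
included, invalid = split_door_to_room(records)
print(included)      # valid minutes, ambulance arrival excluded
print(len(invalid))  # invalid increments to report back to staff
```

Keeping the invalid records, rather than discarding them, is what allows the compliance feedback loop the metadata calls for.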