Measuring implementation feasibility of clinical decision support alerts for clinical practice recommendations

Abstract

Objective: The study sought to describe key features of clinical concepts and data required to implement clinical practice recommendations as clinical decision support (CDS) tools in electronic health record systems and to identify recommendation features that predict feasibility of implementation.

Materials and Methods: Using semistructured interviews, CDS implementers and clinician subject matter experts from 7 academic medical centers rated the feasibility of implementing 10 American College of Emergency Physicians Choosing Wisely recommendations as electronic health record–embedded CDS and estimated the need for additional data collection. Ratings were combined with objective features of the guidelines to develop a predictive model for technical implementation feasibility.

Results: A linear mixed model showed that the need for new data collection was predictive of lower implementation feasibility. The number of clinical concepts in each recommendation, need for historical data, and ambiguity of clinical concepts were not predictive of implementation feasibility.

Conclusions: The availability of data and the need for additional data collection are essential to assess the feasibility of CDS implementation. Authors of practice recommendations and guidelines can enable organizations to more rapidly assess data availability and feasibility of implementation by including operational definitions for required data.

Keywords: clinical decision support, clinical guidelines, feasibility assessment, implementation

INTRODUCTION

Clinical practice recommendations and guidelines enable clinical specialists and professional medical societies to disseminate information and best practices for high-value patient care. Most of the 100+ professional societies in the United States create and disseminate practice recommendations with the hope that they will be widely adopted. Adoption can be increased by presenting patient-specific, actionable recommendations from guidelines to providers within the appropriate clinical context using clinical decision support (CDS) interventions (eg, alerts, reminders, InfoButtons, order sets). CDS integrated into electronic health record (EHR) systems has been shown to change provider behavior and improve patient outcomes.1–4 However, the proliferation of certain kinds of CDS, such as interruptive alerts, has become overwhelming for providers. The growing phenomena of alert fatigue and frustration with EHR systems prompted the expansion of the "Triple Aim" to include a fourth aim focused on provider well-being, including satisfaction with EHR systems and their CDS features.5

Strategies for increasing providers' acceptance of and adherence to CDS include making CDS recommendations more patient-specific (eg, reducing false positives), eliminating the need to enter new data, and following the "Five Rights of CDS" principle (ie, provide the right information, to the right people, in the right formats, through the right channels, at the right points in workflow).6,7 All 3 strategies require a cognizance of what local data are available and whether the data are in a format (structure and content) that can be used by the CDS tool.
To understand and assess the extent of alignment between the specific data (input) requirements for a CDS tool and the local EHR data, the CDS logic needs to be (1) adequately explicit, so it can be represented in a computable logical format and matched to specific patient and clinical contexts (ie, the 5 Rights of CDS),6 and (2) designed to interoperate with existing EHR functions and clinical data in a structured and standardized format. However, the lack of EHR data and functional standards has made it challenging for CDS authors to make their CDS logic adequately explicit and interoperable for every variation of data schema, formatting, representation, and completeness of data. Consequently, in the absence of explicit and interoperable logic and data requirements, healthcare organizations expend a great deal of time and effort to define operational and computable definitions for clinical concepts in the guidelines.8 This work requires both clinical and technical expertise and is often done de novo at every organization for every new or proposed CDS project. Further, before implementing the CDS, this work is also done to determine the feasibility and resources required to integrate the CDS into EHRs.

A measure of CDS feasibility from the perspective of local data availability and readiness will enable organizations to estimate the technical effort required to implement a CDS intervention that will function as intended, allowing them to prioritize and direct limited development resources to building CDS itself rather than assessing feasibility. To date, we have found no published literature that characterizes the "feasibility" of CDS from the perspective of data availability and readiness, nor is there a systematic and generalizable method to assess feasibility of CDS implementation. In addition, there is currently no standard way to report how "implementable" a guideline is in a particular EHR system. One aspect of implementation feasibility is the availability of necessary data without additional data collection by providers. The availability of necessary data also implies "data readiness" (ie, that the data are of adequate quality, available when needed, and in a format that can be readily used by the CDS). Implementers will benefit from methods that allow them to review potential CDS interventions and estimate the feasibility of implementing the CDS in their organization based on the availability and readiness of data as collected in their local systems. Such a tool requires an understanding of the key features that influence feasibility.

Choosing Wisely (CW) recommendations are widely accepted by providers and are good candidates for widespread implementation as CDS. The CW initiative has been adopted by 77 professional societies that agree to identify common practices that are not evidence-based and to provide recommendations designed to reduce those practices.9 Healthcare organizations wishing to implement CW recommendations as CDS must assess the feasibility of integrating CDS into their local systems, determine whether the resources required are justified against competing activities, and define an implementation strategy and timeline. Strategies are needed for implementers to quickly assess the feasibility of integrating clinical recommendations as CDS alerts and for authors to make their recommendations more implementable, and thereby accessible, for all healthcare organizations.
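To make the notion of "adequately explicit" alert logic concrete, the following is a purely illustrative sketch for one recommendation (ACEP CW #9, avoid antibiotics for uncomplicated sinusitis); it is not logic from the study or from the CW program, and the patient structure, concept names, and value sets are hypothetical placeholders for whatever a local EHR actually exposes:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified encounter snapshot; a real EHR would supply coded
# diagnoses, pending orders, and visit context (the Five Rights of CDS).
@dataclass
class EncounterSnapshot:
    location: str                                           # eg, "ED"
    diagnoses: set[str] = field(default_factory=set)        # coded diagnosis concepts
    pending_orders: set[str] = field(default_factory=set)   # order classes being placed
    complicating_findings: set[str] = field(default_factory=set)

# Hypothetical operational definitions (value sets) that an implementer would
# have to build locally when the recommendation does not supply them.
SINUSITIS_CODES = {"acute_sinusitis"}
COMPLICATING_FINDINGS = {"periorbital_cellulitis", "immunocompromised", "fever_gt_39c"}

def fire_acep9_alert(enc: EncounterSnapshot) -> bool:
    """Return True if an interruptive alert should fire: an antibiotic is being
    ordered in the ED for sinusitis with no documented complicating findings."""
    return (
        enc.location == "ED"
        and "antibiotic" in enc.pending_orders
        and bool(enc.diagnoses & SINUSITIS_CODES)
        and not (enc.complicating_findings & COMPLICATING_FINDINGS)
    )

if __name__ == "__main__":
    enc = EncounterSnapshot(
        location="ED",
        diagnoses={"acute_sinusitis"},
        pending_orders={"antibiotic"},
    )
    print(fire_acep9_alert(enc))  # True: uncomplicated sinusitis plus antibiotic order
```

Every identifier above that does not appear in the recommendation text (the value sets, the field names, the thresholds) represents exactly the local knowledge-engineering and data-mapping work whose feasibility this study set out to measure.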
To address this gap, the objectives of our research were to (1) describe key features of clinical concepts and data required to implement CW recommendations as CDS; (2) assess the feasibility, data availability, and requirements for additional data collection; and (3) identify features useful for predicting feasibility of implementing automated CDS for CW recommendations in EHR systems.

MATERIALS AND METHODS

Guideline selection

We selected 10 CW recommendations. Among the 10 American College of Emergency Physicians (ACEP) CW recommendations, we selected 9 that were evidence-based recommendations of procedures to question or avoid for certain types of patients in the emergency department (ED).10 One recommendation (ACEP CW #3, refer to palliative care where appropriate) represents a value and policy statement rather than an action, and was replaced with a recommendation developed by the American College of Radiology (imaging approach for suspected appendicitis) relevant to ED practice (Table 1).

Table 1. CW recommendations (2017) selected for analysis

ACEP #1: Avoid CT scans of the head in emergency department patients with minor head injury who are at low risk based on validated decision rules.
ACEP #2: Avoid placing indwelling urinary catheters in the emergency department for either urine output monitoring in stable patients who can void, or for patient or staff convenience.
ACEP #4: Avoid antibiotics and wound cultures in emergency department patients with uncomplicated skin and soft tissue abscesses after successful incision and drainage and with adequate medical follow-up.
ACEP #5: Avoid instituting intravenous fluids before doing a trial of oral rehydration therapy in uncomplicated emergency department cases of mild to moderate dehydration in children.
ACEP #6: Avoid CT of the head in asymptomatic adult patients in the emergency department with syncope, insignificant trauma, and a normal neurological evaluation.
ACEP #7: Avoid CT pulmonary angiography in emergency department patients with a low pretest probability of pulmonary embolism and either a negative Pulmonary Embolism Rule-Out Criteria or a negative D-dimer.
ACEP #8: Avoid lumbar spine imaging in the emergency department for adults with nontraumatic back pain unless the patient has severe or progressive neurologic deficits or is suspected of having a serious underlying condition (such as vertebral infection, cauda equina syndrome, or cancer with bony metastasis).
ACEP #9: Avoid prescribing antibiotics in the emergency department for uncomplicated sinusitis.
ACEP #10: Avoid ordering CT of the abdomen and pelvis in young otherwise healthy emergency department patients (<50 years of age) with known histories of kidney stones, or ureterolithiasis, presenting with symptoms consistent with uncomplicated renal colic.
ACR #1: Don't do CT for the evaluation of suspected appendicitis in children until after ultrasound has been considered as an option.

ACEP: American College of Emergency Physicians; ACR: American College of Radiology; CT: computed tomography; CW: Choosing Wisely. Source for ACEP recommendations: http://www.choosingwisely.org/societies/american-college-of-emergency-physicians/. Source for ACR recommendation: http://www.choosingwisely.org/clinician-lists/american-college-radiology-ct-to-evaluate-appendicitis-in-children/.
Preparation of CW recommendations (translation to semistructured logic format)

The 10 recommendations were transformed into a semistructured11 logic format using a portion of the Shiffman et al12 methodology for (automated) transitioning of clinical guidelines into CDS, with clarification by Tso et al.13 A master's-trained, board-certified nurse informaticist (B.D.) reviewed each recommendation and applied the following steps: atomize (extract and refine discrete concepts from narrative recommendations), deabstract (adjust the level of generality of a decision variable or action to enable operationalization), and disambiguate (establish a single semantic interpretation for a recommendation statement).
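As an illustrative sketch only (the study's actual semistructured format is shown in Figure 1 and the Supplementary Appendix), the output of these steps for one recommendation might be captured in a simple structure like the following; the concepts, operational definitions, and resulting counts are hypothetical placeholders and deliberately do not reproduce the study's figures:

```python
# Hypothetical result of atomizing, deabstracting, and disambiguating ACR CW #1
# (consider ultrasound before CT for suspected appendicitis in children).
# Ambiguity uses the study's 3-point scale described under Methods:
# 0 = no definition, 1 = definable with effort, 2 = clearly defined and operational.
acr1_logic = {
    "recommendation": "ACR #1",
    "action_to_avoid": "CT abdomen/pelvis order",
    "inclusion_concepts": [
        {"concept": "age < 18 years", "ambiguity": 2, "historical": False},
        {"concept": "suspected appendicitis", "ambiguity": 1, "historical": False},
    ],
    "exclusion_concepts": [
        {"concept": "abdominal ultrasound already performed this encounter",
         "ambiguity": 2, "historical": False},
    ],
}

# Simple feasibility-relevant features of the kind later tabulated in Table 2
# (values here are illustrative, not the study's actual counts).
concepts = acr1_logic["inclusion_concepts"] + acr1_logic["exclusion_concepts"]
n_concepts = len(concepts)
pct_ambiguous = 100 * sum(c["ambiguity"] < 2 for c in concepts) / n_concepts
pct_historical = 100 * sum(c["historical"] for c in concepts) / n_concepts
print(n_concepts, round(pct_ambiguous), round(pct_historical))
```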
Subsequent atomization and deabstraction steps were iterative, referencing the literature supporting the recommendations as needed to clarify any vague concepts.12 For the disambiguation step, we reviewed each recommendation to identify detailed criteria or operational definitions for clinical concepts. As we came across clinical concepts that needed disambiguation, we reviewed all primary supporting references specifically mentioned in the recommendation. If we could find more details or an operational guideline from these references, we provided this information to reviewers in our structured interviews. We did not review references cited by the references. However, if a primary reference was a guideline or recommendation that had been updated since the CW recommendation was published, then we used the most recent version of the referenced guideline. In cases in which we found multiple operational definitions for a concept (arising from multiple references), we chose the criteria (ie, prioritized the reference) from a national guideline endorsed by ACEP or another trusted organization, such as the U.S. Centers for Disease Control and Prevention.

We tried to stay as close to the recommendation as possible and did not define concepts beyond the definitions provided in the recommendation and cited references. For example, we preserved the term suspected appendicitis (ACR CW #1), rather than list out the features of suspected appendicitis (eg, pain in lower abdomen, fever). We did attempt to operationalize concepts using reasonable exclusions derived from the CW recommendation. For example, for "mild, uncomplicated abscess" (ACEP CW #4), we asserted an exclusion for a severe abscess in our structured CDS logic.

A spreadsheet of the modified logic was reviewed by 2 CDS experts (G.D.F. and C.S.) to ascertain consistency in method and level of abstraction. Further, a clinical domain expert (T.T.) reviewed the logic, operational definitions, and supporting references. The final logic for each CW recommendation was organized into a consistent semistructured format for interviews with CDS implementers (Figure 1).

Figure 1. Example of Choosing Wisely recommendation logic in semistructured format organized for expert review (see Supplementary Appendix for all 10 guidelines). ACEP: American College of Emergency Physicians; CT: computed tomography.

Study site selection

We selected a convenience sample of 7 academic medical centers, including those with which we were affiliated or had professional contacts. Five of the centers used the Epic EHR system (Epic Systems, Verona, WI) and 2 centers used the Cerner EHR system (Cerner, Kansas City, MO).

Study participants

We conducted a semistructured interview with dyads consisting of a system analyst (CDS implementer) and a clinician from each of the 7 centers between May and August 2018. The CDS implementers were required to have considerable experience with CDS implementation and familiarity with their institution's EHR system. Clinicians were all physicians who worked primarily in emergency medicine or in an urgent care setting and had experience implementing CDS as clinical subject matter experts.
Data collection

Procedures for semistructured interviews

Each semistructured interview was 1-1.5 hours in duration and was conducted using Web meeting software (WebEx; Cisco, San Jose, CA). Each interview had a moderator (R.R.) who used a script and a standard set of slides presenting the semistructured guideline logic and interview questions (see Supplementary Appendix). We presented one guideline on the screen at a time, in both narrative and structured logic formats. Each CDS implementer was asked to think aloud about their reasoning for rating the implementation feasibility and data availability (described in the following section) and was encouraged to ask the clinical expert questions regarding documentation practices, data quality, and clinical concepts. The moderator (R.R.) solicited ratings for each CW recommendation using Likert-type scale questions and facilitated discussion using open-ended questions. The interviews were transcribed and notes collected as described in Douthit and Richesson.14 The clinicians and CDS implementers were each compensated for participating in the interview.

Measures

Two Likert-type scale questions (Figure 2) were directed to the CDS implementer to rate the feasibility and data availability of each of the 10 recommendations. The CDS implementer entered his or her ratings into an online (REDCap [Research Electronic Data Capture])15 questionnaire during the interview.

Figure 2. Questions asked during semistructured interviews. BPA: best practice alert; EHR: electronic health record.

Characterizing features of concepts in the recommendations

To quantify features of the sampled CW recommendations, we counted the number of clinical concepts in each guideline, assessed whether each clinical concept required historical data, and assessed the ambiguity (ie, clarity of definition) of each concept. We used 2 independent reviewers (R.R. and B.D.) and a 3-point Likert-type scale (0 = no definition, cannot be operationalized; 1 = not operationalized but could be with effort; 2 = clearly defined and already operational). Discordant concepts were discussed and given a consensus score.

Data analysis

We used linear mixed models, which allow for the analysis of hierarchically organized data,16 to examine relationships between the reported feasibility ratings, the need for additional data, and other characteristics (eg, number of clinical concepts, need for historical data, use of ambiguous concepts) of the recommendations. This approach was used because feasibility ratings for each CW recommendation were assessed by multiple (n = 7) raters at their respective sites. For our analyses, we nested the feasibility ratings within CW recommendations, which allowed us to examine which recommendation characteristics were associated with feasibility while also considering variability in feasibility between sites. To assess the proportion of variability in feasibility ratings due to differences between CW recommendations, and the correlation between feasibility ratings within each CW recommendation, an intraclass correlation was computed using a random effects analysis of variance model, which includes a random intercept only. Two models were developed for our main analysis examining relationships between feasibility ratings, need for additional data, and other recommendation characteristics.
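As an illustrative sketch only (hypothetical file and column names, and a generic mixed-model library rather than the study's actual analysis code), the random-intercept structure, the intraclass correlation, and the two models described next could be fit roughly as follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per (site, CW recommendation) rating.
df = pd.read_csv("feasibility_ratings.csv")  # columns assumed: cw_id, site, feasibility,
                                             # additional_data, n_concepts,
                                             # prop_ambiguous, prop_historical

# Random-intercept-only model: feasibility ratings nested within CW recommendations.
null_model = smf.mixedlm("feasibility ~ 1", df, groups=df["cw_id"]).fit()
between_var = null_model.cov_re.iloc[0, 0]        # variance between recommendations
residual_var = null_model.scale                   # within-recommendation variance
icc = between_var / (between_var + residual_var)  # reported as .35 in Results

# Model 1: need for additional data as predictor, with a fixed effect for site.
m1 = smf.mixedlm("feasibility ~ additional_data + C(site)",
                 df, groups=df["cw_id"]).fit()

# Model 2: add the other recommendation characteristics.
m2 = smf.mixedlm(
    "feasibility ~ additional_data + C(site) + n_concepts + prop_ambiguous + prop_historical",
    df, groups=df["cw_id"]).fit()

print(round(icc, 2))
print(m1.summary())
print(m2.summary())
```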
Our first model tested the need for additional data (by physicians at point of care) as a predictor of the feasibility rating. In a second model, other characteristics of the CW recommendation were added, including number of clinical concepts, proportion of concepts that used historical data, and proportion of concepts that were unambiguous. In both models, the random intercept was retained, along with a fixed effect for site number, to test for differences in feasibility ratings between sites. The study was approved by the Institutional Review Board of Duke University Health System (Pro00076602).

RESULTS

Features of sampled recommendations

The 10 CW recommendations reference a total of 86 concepts (median 8 [range, 5-13] concepts per recommendation) (Table 2). Several concepts (eg, age, order for antibiotic) were used by more than 1 recommendation; therefore, a total of 73 unique concepts are required for implementing CDS based on the 10 CW recommendations. All of the recommendations have a high proportion of concepts that were determined to be ambiguous (ranging from 42% to 100%). Some concepts were considered ambiguous because they lacked operational definitions (eg, "fever," "immunocompromised," "severe/progressive neurologic deficits"). Other concepts were ambiguous because the concept is subjective or difficult to define consistently across providers (eg, "otherwise healthy," "dangerous mechanism of injury," "required immobilization for trauma or surgery"). In contrast, only 4 recommendations include concepts that related to historical data (eg, "recent spinal injection," "persistent illness (>= 10 days)," "history of kidney stones"), and the proportion of concepts with this feature represents only 13%-50% of the concepts for the individual recommendation.

Table 2. Description of the concepts included in the 10 CW recommendations

CW number | Topic | Concepts used by the logic | Ambiguous concepts (%) | Historical concepts (%)
ACEP #1 | Head CT for minor head injury | 13 | 77 | 0
ACEP #2 | Indwelling urinary catheters | 8 | 50 | 0
ACEP #4 | Antibiotics and cultures for uncomplicated tissue abscesses | 10 | 90 | 0
ACEP #5 | IV fluids for mild to moderate dehydration in children | 12 | 42 | 0
ACEP #6 | Head CT with syncope and insignificant trauma | 6 | 67 | 0
ACEP #7 | CT pulmonary angiography with low probability of PE | 5 | 100 | 0
ACEP #8 | Lumbar spine imaging for nontraumatic back pain | 11 | 55 | 27
ACEP #9 | Antibiotics for uncomplicated sinusitis | 6 | 67 | 50
ACEP #10 | CT of the abdomen and pelvis with known histories | 7 | 71 | 29
ACR #1 | CT for suspected appendicitis in children | 8 | 88 | 13

ACEP: American College of Emergency Physicians; ACR: American College of Radiology; CT: computed tomography; CW: Choosing Wisely; IV: intravenous; PE: pulmonary embolism.
Interviews with CDS implementers and clinical experts

We conducted 7 semistructured interviews with dyads of CDS implementers and clinicians from 7 different sites. CDS implementers had substantial informatics and EHR experience (see Supplementary Table 1). Clinicians were all physicians who worked primarily in emergency medicine (n = 6) or in a primary care and urgent care setting (n = 1) and had substantial experience supporting CDS implementation at their institution.

Feasibility, data availability, and additional data collection requirements

The scores and ranges for implementation feasibility and additional data collection are presented in Table 3. Across the 10 CW recommendations, feasibility scores ranged from 2 to 4 of 5 (mean of median scores for each guideline = 3.3), and the median need for additional data collection ranged from 2 to 3 (mean of median scores for each recommendation = 2.7).

Table 3. Reported scores (median and range) for need for additional data collection and feasibility from the sampled sites (N = 7). [For each of the 10 CW recommendations, the published table displays the two ratings graphically: yellow rectangles indicate the range (minimum and maximum) and red diamonds indicate the median.] Additional data collection scale: 1 = no data collection to 5 = prohibitive data collection. Feasibility scale: 1 = much easier to 5 = much more difficult. ACEP: American College of Emergency Physicians; ACR: American College of Radiology; CT: computed tomography; CW: Choosing Wisely; PE: pulmonary embolism.
Factors influencing feasibility of ACEP CW recommendations

A preliminary random effects analysis of variance found an intraclass correlation of .35, which indicates that 35% of the variability in feasibility ratings was due to differences between CW recommendations. Results of the linear mixed models examining relationships between feasibility ratings, need for additional data, and other CW recommendation characteristics are presented in Table 4. Results for our initial model indicate that lower scores on need for additional data are related to lower scores on feasibility (B = 0.45, t = 4.87, P < .001); that is, less need for additional data was related to greater feasibility. Feasibility ratings did not differ by site (F6,53 = 2.04, P = .08). In the second model, which included other CW recommendation characteristics, the need for additional data remained significantly related to feasibility (B = 0.43, t = 4.64, P < .001). Feasibility ratings remained unrelated to site (F6,53 = 2.02, P = .08). Feasibility ratings were also unrelated to the number of concepts in the recommendation (B = 0.12, t = 1.51, P = .14), the proportion of concepts that were ambiguous (B = 0.84, t = 0.77, P = .44), or the proportion that were historical (B = 0.28, t = 0.25, P = .80).

Table 4. CDS feasibility ratings on need for additional data and CDS characteristics

Predictor | Model 1 (B, test statistic, P value) | Model 2 (B, test statistic, P value)
Need for additional data | 0.45, t = 4.87, <.001 | 0.43, t = 4.64, <.001
Site (overall) | F6,53 = 2.04, .08 | F6,53 = 2.02, .08
Site 1 | (Reference) | (Reference)
Site 2 | 0.21, t = 0.69, .49 | 0.20, t = 0.66, .51
Site 3 | 0.37, t = 1.12, .27 | 0.34, t = 1.05, .30
Site 4 | 0.18, t = 0.56, .58 | 0.16, t = 0.49, .62
Site 5 | –0.33, t = –1.01, .31 | –0.36, t = –1.08, .29
Site 6 | –0.40, t = –1.28, .20 | –0.41, t = –1.33, .19
Site 7 | –0.30, t = –1.00, .32 | –0.30, t = –1.00, .32
Number of concepts | — | 0.12, t = 1.51, .14
Proportion of ambiguous concepts | — | 0.84, t = 0.77, .44
Proportion of historical concepts | — | 0.28, t = 0.25, .80

CDS: clinical decision support.
DISCUSSION

To our knowledge, this is the first study to quantify the association between feasibility of CDS implementation and features of clinical guidelines. Our linear mixed models show that the need for new data collection was predictive of lower implementation feasibility, while the number of clinical concepts in each recommendation, need for historical data, and ambiguity of clinical concepts were not predictive of implementation feasibility. Our findings suggest that the need for additional data collection is an essential factor in the technical feasibility of proposed CDS tools.

The CDS implementers that we interviewed reported the feasibility of implementing the CW recommendations as generally low, entailing some level of difficulty for all 7 sampled sites. Further, all the recommendations we sampled required at least some additional data entry by providers. While additional data collection was required to implement the recommendations and data collection impacted feasibility, our data suggest that interviewees did not, in general, perceive the additional data collection to be prohibitive. In our first model, lower need for additional data predicts greater implementation feasibility. Although our model is simple and intuitive, it does show that availability of existing data, rather than complexity of data requirements, is the strongest predictor of CDS implementation feasibility. Our results provide a hypothesis that, ideally, would be assessed empirically in future studies. Nearly two-thirds of the variation in feasibility remains unexplained by our model.
We are currently analyzing the comments from our interviews to identify other factors that impact the feasibility of implementing CDS tools and will report this in a future article.

Our study has limitations that may impact generalizability. Our sample was limited to 10 CW recommendations that are focused on the ED setting and preventing errors of commission (ie, they are all about what not to do). Our feasibility assessment approach should be repeated for other settings and for other kinds of guidelines, such as those focused on preventing errors of omission (ie, reminders to perform certain procedures). In addition, our study was limited to organizations using only 2 vendor systems focused on the tertiary care market (ie, Epic and Cerner); however, these 2 systems are the 2 most prevalent EHR systems and represent a large market share. Because we conducted our investigation using 7 academic medical centers and prevailing EHR systems, our results are likely to generalize to similar centers using the same products. However, our results may not generalize to nonacademic medical centers using other EHR products and most likely do not generalize to less resourced settings, such as independent community practices and safety net clinics. Finally, we did not assess the reliability and validity of our Likert-type scale questions, and do not know the extent to which the clinical experts' and CDS implementers' responses accurately reflect the actual data or system readiness of their organizations.

Despite these limitations, our approach has 2 important strengths. First, although the lack of sufficiently structured and detailed data is a well-known barrier for CDS implementation,17–19 this problem is not well quantified. We believe we are the first to quantitatively and systematically assess the relationship between data availability and feasibility of CDS implementation. Second, we preserved the nature of CDS requests that implementers first see by presenting the logic in a state as close as possible to the original recommendations. Although many of the concepts referenced in the sampled recommendations were ambiguous, we did not provide operational definitions (beyond the CW recommendations or supporting references) when preparing the logic for CDS experts to review and rate. This allowed us to use our structured interviews to investigate and quantify the perceived "knowledge engineering" effort required to conceptualize and operationally define ambiguous concepts, which have been a known issue for decades20 and are a significant challenge for CDS planning and implementation.11

We found the feasibility for a sample of CW recommendations to be generally low, suggesting that organizations might have to commit substantial resources to their implementation. Our findings are consistent with a previous assessment of the 2008 ACEP clinical policies, which found that those recommendations were too vague, required additional physician input or knowledge for translation, and when translated would impede clinical workflow because of excessive data entry.21 Authors of CW and other recommendations can ease the burden of implementing recommendations into CDS by providing operational definitions and guidance for potential implementers. In some cases, it may be necessary to ask users to collect additional data that is critical to the logic of the CDS in order to ensure that the intervention functions as intended.
However, given the rising frustration around increasing data entry requirements, the cost of any additional data capture for CDS should be heavily considered.22 Organizations can also consider using surrogate data or natural language processing approaches to provide the needed data at lower burden to providers.22

These are all prominent and active issues in clinical informatics, and our results are not unexpected. What this work does contribute, however, is a quantification of the relationship between data availability and CDS implementation in a high-priority domain. Currently, assessing feasibility for implementing new CDS is a time- and resource-intensive process that is unique to each organization. Our work demonstrates that we can characterize and quantify the features of clinical practice recommendations and use those characteristics to predict the feasibility of implementing clinical practice recommendations as CDS tools. Clearly defined CDS data requirements will help implementers assess CDS implementation feasibility and effort, and the use of data representation standards will enable the reuse of tools and possible automation of the feasibility assessment for CDS. Widespread adoption of the U.S. Core Data for Interoperability and other common data elements would enable health systems and EHR vendors to understand the availability of clinical data that matches CDS requirements, leading to faster feasibility assessment and implementation.23 To support this vision, clinical specialty societies can identify and promote standard data elements that will support CDS, and subsequent quality measurement, for emerging recommendations.

CONCLUSIONS

As is, the CW recommendations we examined require significant work for feasibility assessment and implementation. A critical determinant of guideline implementation feasibility is the availability of existing data that match the requirements of the CDS, averting the need for additional data entry. Guideline authors can reduce the burden of assessing a recommendation's readiness for CDS by including operational definitions for guideline logic, ideally mapped to reference data standards. The adoption of standard clinical data elements in EHR systems and guideline logic can support the automated assessment of a guideline's readiness for CDS within the EHR system as well as the system's readiness for CDS.

FUNDING

This work was supported by National Library of Medicine grant no. R15 LM012335-01A1 (to RLR, CJS, BJD, TT, DJH, KK, GDF). BD was supported by the Robert Wood Johnson Foundation as a Future of Nursing Scholar. The views presented here are solely the responsibility of the authors and do not necessarily represent the official views of the National Library of Medicine, the National Institutes of Health, or the Robert Wood Johnson Foundation.

AUTHOR CONTRIBUTIONS

BD prepared the guidelines for analysis with the assistance and domain expertise of GDF, CS, and TT. The interviews were conducted by RR, GDF, and CS. DH conducted analysis of the data. RR drafted the manuscript and all authors contributed to the writing. All authors contributed to study concept and design, interpretation of data, and critical revision of the manuscript.

CONFLICT OF INTEREST STATEMENT

KK reports honoraria, consulting, or sponsored research related to clinical decision support or standards-based interoperability with McKesson InterQual; Hitachi; Premier; Klesis Healthcare; Vanderbilt University; the University of Washington; the University of California, San Francisco; and the U.S.
Office of the National Coordinator for Health IT (via ESAC, JBS International, A+ Government Solutions, Hausam Consulting, and Security Risk Solutions). These relationships have no direct relevance to the manuscript but are reported in the interest of full disclosure. CJS reports consulting or sponsored research related to clinical decision support with the Council of State and Territorial Epidemiologists, Hitachi, and HLN Consulting; these relationships have no direct relevance to the manuscript but are reported in the interest of full disclosure. The other authors have no competing interests related to the content or publication of this manuscript.

REFERENCES

1. Murphy EV. Clinical decision support: effectiveness in improving quality processes and clinical outcomes and factors that may influence success. Yale J Biol Med 2014;87(2):187–97.
2. Teich JM, Glaser JP, Beckley RF, et al. The Brigham integrated computing system (BICS): advanced clinical systems in an academic hospital environment. Int J Med Inform 1999;54(3):197–208.
3. Swenson CJ, Appel A, Sheehan M, et al. Using information technology to improve adult immunization delivery in an integrated urban health system. Jt Comm J Qual Patient Saf 2012;38(1):15–23.
4. Heekin AM, Kontor J, Sax HC, Keller MS, Wellington A, Weingarten S. Choosing Wisely clinical decision support adherence and associated inpatient outcomes. Am J Manag Care 2018;24(8):361–6.
5. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014;12(6):573–6.
6. Osheroff JA, Teich JM, Middleton B, Steen EB, Wright A, Detmer DE. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007;14(2):141–5.
7. Osheroff JA. Improving Medication Use and Outcomes with Clinical Decision Support: A Step by Step Guide. Boca Raton, FL: Taylor and Francis; 2015.
8. Richardson JE, Middleton B, Osheroff JA, Callaham M, Marcial L, Blumenfeld BH. The PCOR CDS-LN Environmental Scan: Spurring Action by Identifying Barriers and Facilitators to the Dissemination of PCOR through PCOR-Based Clinical Decision Support. Research Triangle Park, NC: Patient-Centered Outcomes Research Clinical Decision Support Learning Network; 2016.
9. American Board of Internal Medicine. Choosing Wisely. 2015. http://www.choosingwisely.org/. Accessed December 5, 2019.
10. American College of Emergency Physicians. Choosing Wisely: five things physicians and patients should question. 2013. http://www.choosingwisely.org/societies/american-college-of-emergency-physicians/. Accessed December 5, 2019.
11. Boxwala AA, Rocha BH, Maviglia S, et al. A multi-layered framework for disseminating knowledge for computer-based decision support. J Am Med Inform Assoc 2011;18(Suppl 1):i132–9.
12. Shiffman RN, Michel G, Essaihi A, Thornquist E. Bridging the guideline implementation gap: a systematic, document-centered approach to guideline implementation. J Am Med Inform Assoc 2004;11(5):418–26.
13. Tso GJ, Tu SW, Oshiro C, et al. Automating guidelines for clinical decision support: knowledge engineering and implementation. AMIA Annu Symp Proc 2016;2016:1189–98.
14. Douthit BJ, Richesson RL. Emergency department clinician perspectives on the data availability to implement clinical decision support tools for five clinical practice guidelines. AMIA Jt Summits Transl Sci Proc 2018;2017:340–8.
15. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009;42(2):377–81.
16. Nezlek JB. An introduction to multilevel modeling for social and personality psychology. Soc Personal Psychol Compass 2008;2(2):842–60.
17. Ash JS, Sittig DF, Guappone KP, et al. Recommended practices for computerized clinical decision support and knowledge management in community settings: a qualitative study. BMC Med Inform Decis Mak 2012;12(1):6.
18. Greenes RA, Bates DW, Kawamoto K, Middleton B, Osheroff J, Shahar Y. Clinical decision support models and frameworks: seeking to address research issues underlying implementation successes and failures. J Biomed Inform 2018;78:134–43.
19. Freimuth RR, Formea CM, Hoffman JM, Matey E, Peterson JF, Boyce RD. Implementing genomic clinical decision support for drug-based precision medicine. CPT Pharmacometrics Syst Pharmacol 2017;6(3):153–5.
20. Patel VL, Allen VG, Arocha JF, Shortliffe EH. Representing clinical guidelines in GLIF: individual and collaborative expertise. J Am Med Inform Assoc 1998;5(5):467–83.
21. Melnick ER, Nielson JA, Finnell JT, et al. Delphi consensus on the feasibility of translating the ACEP clinical policies into computerized clinical decision support. Ann Emerg Med 2010;56(4):317–20.
22. Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med 2015;162(4):301–3.
23. Office of the National Coordinator for Health Information Technology. U.S. Core Data for Interoperability (USCDI); 2019 ISA Reference Edition. 2018. https://www.healthit.gov/isa/us-core-data-interoperability-uscdi. Accessed December 5, 2019.

© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

Measuring implementation feasibility of clinical decision support alerts for clinical practice recommendations

Loading next page...
 
/lp/oxford-university-press/measuring-implementation-feasibility-of-clinical-decision-support-50nngFPXmf

References (25)

Publisher
Oxford University Press
Copyright
© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com
ISSN
1067-5027
eISSN
1527-974X
DOI
10.1093/jamia/ocz225
Publisher site
See Article on Publisher Site

To address this gap, the objectives of our research were to (1) describe key features of clinical concepts and data required to implement CW recommendations as CDS; (2) assess the feasibility, data availability, and requirements for additional data collection; and (3) identify features useful for predicting feasibility of implementing automated CDS for CW recommendations in EHR systems.

MATERIALS AND METHODS

Guideline selection
We selected 10 CW recommendations. Among the 10 American College of Emergency Physicians (ACEP) CW recommendations, we selected 9 that were evidence-based recommendations of procedures to question or avoid for certain types of patients in the emergency department (ED).10 One recommendation (ACEP CW #3, refer to palliative care where appropriate) represents a value and policy statement rather than an action, and was replaced with a recommendation developed by the American College of Radiology (imaging approach for suspected appendicitis) relevant to ED practice (Table 1).

Table 1. CW recommendations (2017) selected for analysis
ACEP #1 (a): Avoid CT scans of the head in emergency department patients with minor head injury who are at low risk based on validated decision rules.
ACEP #2 (a): Avoid placing indwelling urinary catheters in the emergency department for either urine output monitoring in stable patients who can void, or for patient or staff convenience.
ACEP #4 (a): Avoid antibiotics and wound cultures in emergency department patients with uncomplicated skin and soft tissue abscesses after successful incision and drainage and with adequate medical follow-up.
ACEP #5 (a): Avoid instituting intravenous fluids before doing a trial of oral rehydration therapy in uncomplicated emergency department cases of mild to moderate dehydration in children.
ACEP #6 (a): Avoid CT of the head in asymptomatic adult patients in the emergency department with syncope, insignificant trauma, and a normal neurological evaluation.
ACEP #7 (a): Avoid CT pulmonary angiography in emergency department patients with a low pretest probability of pulmonary embolism and either a negative Pulmonary Embolism Rule-Out Criteria or a negative D-dimer.
ACEP #8 (a): Avoid lumbar spine imaging in the emergency department for adults with nontraumatic back pain unless the patient has severe or progressive neurologic deficits or is suspected of having a serious underlying condition (such as vertebral infection, cauda equina syndrome, or cancer with bony metastasis).
ACEP #9 (a): Avoid prescribing antibiotics in the emergency department for uncomplicated sinusitis.
ACEP #10 (a): Avoid ordering CT of the abdomen and pelvis in young otherwise healthy emergency department patients (<50 years of age) with known histories of kidney stones, or ureterolithiasis, presenting with symptoms consistent with uncomplicated renal colic.
ACR #1 (b): Don't do CT for the evaluation of suspected appendicitis in children until after ultrasound has been considered as an option.
ACEP: American College of Emergency Physicians; ACR: American College of Radiology; CT: computed tomography; CW: Choosing Wisely.
(a) Source: http://www.choosingwisely.org/societies/american-college-of-emergency-physicians/.
(b) Source: http://www.choosingwisely.org/clinician-lists/american-college-radiology-ct-to-evaluate-appendicitis-in-children/.

Preparation of CW recommendations (translation to semistructured logic format)
The 10 recommendations were transformed into a semistructured11 logic format using a portion of the Shiffman et al12 methodology for (automated) transitioning of clinical guidelines into CDS, with clarification by Tso et al.13 A master's-trained, board-certified nurse informaticist (B.D.) reviewed each recommendation and applied the following steps: atomize (extract and refine discrete concepts from narrative recommendations), deabstract (adjust the level of generality of a decision variable or action to enable operationalization), and disambiguate (establish a single semantic interpretation for each recommendation statement).
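To make the output of these steps concrete, the sketch below shows one way an atomized and deabstracted recommendation could be captured as structured logic. It is our own illustration in Python, using ACEP CW #10 and invented field names; it is not the study's actual logic format (shown in Figure 1) and not an executable CDS artifact.

```python
# Illustrative only: a possible semistructured rendering of ACEP CW #10
# ("avoid CT abdomen/pelvis for uncomplicated renal colic in young,
# otherwise healthy ED patients"). Field names are assumptions.
acep_cw_10 = {
    "trigger": "order placed for CT of the abdomen and pelvis",
    "context": "emergency department encounter",
    "inclusion_criteria": [
        {"concept": "age < 50 years", "ambiguity": "clearly defined"},
        {"concept": "known history of kidney stones or ureterolithiasis",
         "ambiguity": "needs operational definition", "historical": True},
        {"concept": "symptoms consistent with uncomplicated renal colic",
         "ambiguity": "needs operational definition"},
        {"concept": "otherwise healthy", "ambiguity": "subjective"},
    ],
    "exclusion_criteria": [
        {"concept": "evidence of complicated renal colic",
         "ambiguity": "needs operational definition"},
    ],
    "recommended_action": "suggest deferring CT and reviewing the recommendation",
}

def count_concepts(rule: dict) -> int:
    """Count the atomized clinical concepts referenced by a rule."""
    return len(rule["inclusion_criteria"]) + len(rule["exclusion_criteria"])

print(count_concepts(acep_cw_10))  # 5 concepts in this illustrative sketch
```

A structure of this kind makes the concept-level features discussed later (number of concepts, ambiguity, reliance on historical data) straightforward to tally.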
Subsequent atomization and deabstraction steps were iterative, referencing the literature supporting the recommendations as needed to clarify any vague concepts.12 For the disambiguation step, we reviewed each recommendation to identify detailed criteria or operational definitions for clinical concepts. As we came across clinical concepts that needed disambiguation, we reviewed all primary supporting references specifically mentioned in the recommendation. If we could find more details or an operational guideline from these references, we provided this information to reviewers in our structured interviews. We did not review references cited by the references. However, if a primary reference was a guideline or recommendation that had been updated since the CW recommendation was published, then we used the most recent version of the referenced guideline. In cases in which we found multiple operational definitions for a concept (arising from multiple references), we chose the criteria (ie, prioritized the reference) from a national guideline endorsed by ACEP or another trusted organization, such as the U.S. Centers for Disease Control and Prevention. We tried to stay as close to the recommendation as possible and did not define concepts beyond the definitions provided in the recommendation and cited references. For example, we preserved the term suspected appendicitis (ACR CW #1), rather than list out the features of suspected appendicitis (eg, pain in lower abdomen, fever). We did attempt to operationalize concepts using reasonable exclusions derived from the CW recommendation. For example, for "mild, uncomplicated abscess" (ACEP CW #4), we asserted an exclusion for a severe abscess in our structured CDS logic. A spreadsheet of the modified logic was reviewed by 2 CDS experts (G.D.F. and C.S.) to ascertain consistency in method and level of abstraction. Further, a clinical domain expert (T.T.) reviewed the logic, operational definitions, and supporting references. The final logic for each CW recommendation was organized into a consistent semistructured format for interviews with CDS implementers (Figure 1).

Figure 1. Example of Choosing Wisely recommendation logic in semistructured format organized for expert review (see Supplementary Appendix for all 10 guidelines). ACEP: American College of Emergency Physicians; CT: computed tomography.

Study site selection
We selected a convenience sample of 7 academic medical centers, including those with which we were affiliated or had professional contacts. Five of the centers used the Epic EHR system (Epic Systems, Verona, WI) and 2 centers used the Cerner EHR system (Cerner, Kansas City, MO).

Study participants
We conducted a semistructured interview with dyads consisting of a system analyst (CDS implementer) and a clinician from each of the 7 centers between May and August 2018. The CDS implementers were required to have considerable experience with CDS implementation and familiarity with their institution's EHR system. Clinicians were all physicians who worked primarily in emergency medicine or in an urgent care setting and had experience implementing CDS as clinical subject matter experts.
Data collection

Procedures for semistructured interviews
Each semistructured interview was 1-1.5 hours in duration and was conducted using Web meeting software (WebEx; Cisco, San Jose, CA). Each interview had a moderator (R.R.) who used a script and a standard set of slides presenting the semistructured guideline logic and interview questions (see Supplementary Appendix). We presented one guideline on the screen at a time, in both narrative and structured logic formats. Each CDS implementer was asked to think aloud about their reasoning for rating the implementation feasibility and data availability (described in the following section) and was encouraged to ask the clinical expert questions regarding documentation practices, data quality, and clinical concepts. The moderator (R.R.) solicited ratings for each CW recommendation using Likert-type scale questions and facilitated discussion using open-ended questions. The interviews were transcribed and notes collected as described in Douthit and Richesson.14 The clinicians and CDS implementers were each compensated for participating in the interview.

Measures
Two Likert-type scale questions (Figure 2) were directed to the CDS implementer to rate the feasibility and data availability of each of the 10 recommendations. The CDS implementer entered his or her ratings into an online questionnaire (REDCap [Research Electronic Data Capture])15 during the interview.

Figure 2. Questions asked during semistructured interviews. BPA: best practice alert; EHR: electronic health record.

Characterizing features of concepts in the recommendations
To quantify features of the sampled CW recommendations, we counted the number of clinical concepts in each guideline, assessed whether each clinical concept required historical data, and assessed the ambiguity (ie, clarity of definition) of each concept. We used 2 independent reviewers (R.R. and B.D.) and a 3-point Likert-type scale (0 = no definition, cannot be operationalized; 1 = not operationalized but could be with effort; 2 = clearly defined and already operational). Discordant concepts were discussed and given a consensus score.

Data analysis
We used linear mixed models, which allow for the analysis of hierarchically organized data,16 to examine relationships between the reported feasibility ratings, the need for additional data, and other characteristics of the recommendations (eg, number of clinical concepts, need for historical data, use of ambiguous concepts). This approach was used because feasibility ratings for each CW recommendation were assessed by multiple (n = 7) raters at their respective sites. For our analyses, we nested the feasibility ratings within CW recommendations, which allowed us to examine which recommendation characteristics were associated with feasibility while also considering variability in feasibility between sites. To assess the proportion of variability in feasibility ratings due to differences between CW recommendations, and the correlation between feasibility ratings within each CW recommendation, an intraclass correlation was computed using a random-effects analysis of variance model, which includes a random intercept only. Two models were developed for our main analysis examining relationships between feasibility ratings, need for additional data, and other recommendation characteristics.
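As a concrete illustration of this analysis design, the sketch below fits a random-intercept model and derives an intraclass correlation with Python's statsmodels. The data frame, variable names (feasibility, additional_data, site, recommendation), and toy values are assumptions for demonstration only; this is not the authors' analysis code, and the real analysis used ratings from 7 sites for 10 recommendations.

```python
# Minimal sketch of a nested (random-intercept) analysis of feasibility ratings.
# All data and variable names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical rater-level data: one feasibility and one additional-data rating
# per (site, recommendation) pair, on the 1-5 scales used in the interviews.
df = pd.DataFrame({
    "recommendation": ["ACEP1", "ACEP1", "ACEP2", "ACEP2", "ACR1", "ACR1"] * 2,
    "site":           [1, 2, 1, 2, 1, 2] * 2,
    "feasibility":    [3, 4, 2, 3, 4, 5, 3, 3, 2, 2, 4, 4],
    "additional_data":[2, 3, 2, 2, 3, 4, 2, 3, 1, 2, 3, 3],
})

# Random-intercept-only ("null") model: used to estimate the intraclass
# correlation, ie the share of feasibility variance attributable to
# differences between recommendations. Convergence warnings are expected
# on toy data this small.
null = smf.mixedlm("feasibility ~ 1", data=df, groups=df["recommendation"]).fit()
between = float(null.cov_re.iloc[0, 0])  # variance between recommendations
within = float(null.scale)               # residual (within-recommendation) variance
icc = between / (between + within)

# Model 1: need for additional data as a predictor of feasibility, with a
# fixed effect for site and a random intercept per recommendation.
m1 = smf.mixedlm("feasibility ~ additional_data + C(site)",
                 data=df, groups=df["recommendation"]).fit()

print(round(icc, 2))
print(m1.summary())
```

A second model of the same form can add the recommendation-level features (number of concepts, proportion of historical concepts, proportion of unambiguous concepts) as additional fixed effects, mirroring the two models described next.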
Our first model tested the need for additional data (by physicians at the point of care) as a predictor of the feasibility rating. In a second model, other characteristics of the CW recommendations were added, including the number of clinical concepts, the proportion of concepts that used historical data, and the proportion of concepts that were unambiguous. In both models, the random intercept was retained, along with a fixed effect for site, to test for differences in feasibility ratings between sites. The study was approved by the Institutional Review Board of Duke University Health System (Pro00076602).

RESULTS

Features of sampled recommendations
The 10 CW recommendations reference a total of 86 concepts (median 8 [range, 5-13] concepts per recommendation) (Table 2). Several concepts (eg, age, order for antibiotic) were used by more than 1 recommendation; therefore, a total of 73 unique concepts are required for implementing CDS based on the 10 CW recommendations. All of the recommendations have a high proportion of concepts that were determined to be ambiguous (ranging from 42% to 100%). Some concepts were considered ambiguous because they lacked operational definitions (eg, "fever," "immunocompromised," "severe/progressive neurologic deficits"). Other concepts were ambiguous because the concept is subjective or difficult to define consistently across providers (eg, "otherwise healthy," "dangerous mechanism of injury," "required immobilization for trauma or surgery"). In contrast, only 4 recommendations include concepts that relate to historical data (eg, "recent spinal injection," "persistent illness (>= 10 days)," "history of kidney stones"), and such concepts represent only 13%-50% of the concepts in those recommendations.

Table 2. Description of the concepts included in the 10 CW recommendations (number of concepts used by the logic; ambiguous concepts, %; historical concepts, %)
ACEP #1 (head CT for minor head injury): 13 concepts; 77% ambiguous; 0% historical
ACEP #2 (indwelling urinary catheters): 8 concepts; 50% ambiguous; 0% historical
ACEP #4 (antibiotics and cultures for uncomplicated tissue abscesses): 10 concepts; 90% ambiguous; 0% historical
ACEP #5 (IV fluids for mild to moderate dehydration in children): 12 concepts; 42% ambiguous; 0% historical
ACEP #6 (head CT with syncope and insignificant trauma): 6 concepts; 67% ambiguous; 0% historical
ACEP #7 (CT pulmonary angiography with low probability of PE): 5 concepts; 100% ambiguous; 0% historical
ACEP #8 (lumbar spine imaging for nontraumatic back pain): 11 concepts; 55% ambiguous; 27% historical
ACEP #9 (antibiotics for uncomplicated sinusitis): 6 concepts; 67% ambiguous; 50% historical
ACEP #10 (CT of the abdomen and pelvis with known histories): 7 concepts; 71% ambiguous; 29% historical
ACR #1 (CT for suspected appendicitis in children): 8 concepts; 88% ambiguous; 13% historical
ACEP: American College of Emergency Physicians; ACR: American College of Radiology; CT: computed tomography; CW: Choosing Wisely; IV: intravenous; PE: pulmonary embolism.

Interviews with CDS implementers and clinical experts
We conducted 7 semistructured interviews with dyads of CDS implementers and clinicians from 7 different sites. CDS implementers had substantial informatics and EHR experience (see Supplementary Table 1). Clinicians were all physicians who worked primarily in emergency medicine (n = 6) or in a primary care and urgent care setting (n = 1) and had substantial experience supporting CDS implementation at their institution.

Feasibility, data availability, and additional data collection requirements
The scores and ranges for implementation feasibility and additional data collection are presented in Table 3. Across the 10 CW recommendations, median feasibility scores ranged from 2 to 4 of 5 (mean of the median scores for each guideline = 3.3), and median scores for the need for additional data collection ranged from 2 to 3 (mean of the median scores for each recommendation = 2.7).
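Because Table 3 (below) reports these per-recommendation summaries graphically as medians and ranges, a brief sketch of how such summaries could be computed from rater-level scores may help the reader; the data frame and column names are hypothetical, matching the toy format used in the earlier sketch.

```python
# Illustrative only: per-recommendation medians and ranges of the 1-5 ratings
# collected from the site dyads. Column names are assumptions.
import pandas as pd

ratings = pd.DataFrame({
    "recommendation":  ["ACEP1"] * 3 + ["ACEP2"] * 3,
    "feasibility":     [3, 4, 3, 2, 3, 2],   # 1 = much easier ... 5 = much more difficult
    "additional_data": [2, 3, 2, 2, 2, 1],   # 1 = no data collection ... 5 = prohibitive
})

summary = (
    ratings
    .groupby("recommendation")[["feasibility", "additional_data"]]
    .agg(["median", "min", "max"])
)
print(summary)
```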
Table 3. Reported scores (median and range) for need for additional data collection and feasibility from the sampled sites (N = 7). For each of the 10 recommendations (ACEP #1, #2, #4-#10 and ACR #1), the table displays 2 variables: additional data collection (scored 1 = no data collection to 5 = prohibitive data collection) and feasibility (scored 1 = much easier to 5 = much more difficult). Values are presented graphically in the original table: yellow rectangles indicate the range (minimum and maximum) and red diamonds indicate the median. ACEP: American College of Emergency Physicians; ACR: American College of Radiology; CT: computed tomography; CW: Choosing Wisely; PE: pulmonary embolism.

Factors influencing feasibility of ACEP CW recommendations
A preliminary random-effects analysis of variance yielded an intraclass correlation of .35, indicating that 35% of the variability in feasibility ratings was due to differences between CW recommendations. Results of the linear mixed models examining relationships between feasibility ratings, need for additional data, and other CW recommendation characteristics are presented in Table 4. Results for our initial model indicate that lower scores on need for additional data are related to lower scores on feasibility (B = 0.45, t = 4.87, P < .001); that is, less need for additional data was related to greater feasibility. Feasibility ratings did not differ by site (F(6, 53) = 2.04, P = .08). In the second model, which included other CW recommendation characteristics, the need for additional data remained significantly related to feasibility (B = 0.43, t = 4.64, P < .001). Feasibility ratings remained unrelated to site (F(6, 53) = 2.02, P = .08). Feasibility ratings were also unrelated to the number of concepts in the recommendation (B = 0.12, t = 1.51, P = .14), the proportion of concepts that were ambiguous (B = 0.84, t = 0.77, P = .44), or the proportion that were historical (B = 0.28, t = 0.25, P = .80).
Table 4. CDS feasibility ratings on need for additional data and CDS characteristics
Need for additional data: Model 1, B = 0.45, t = 4.87, P < .001; Model 2, B = 0.43, t = 4.64, P < .001
Site (overall test): Model 1, F(6, 53) = 2.04, P = .08; Model 2, F(6, 53) = 2.02, P = .08
Site 1: reference
Site 2: Model 1, B = 0.21, t = 0.69, P = .49; Model 2, B = 0.20, t = 0.66, P = .51
Site 3: Model 1, B = 0.37, t = 1.12, P = .27; Model 2, B = 0.34, t = 1.05, P = .30
Site 4: Model 1, B = 0.18, t = 0.56, P = .58; Model 2, B = 0.16, t = 0.49, P = .62
Site 5: Model 1, B = -0.33, t = -1.01, P = .31; Model 2, B = -0.36, t = -1.08, P = .29
Site 6: Model 1, B = -0.40, t = -1.28, P = .20; Model 2, B = -0.41, t = -1.33, P = .19
Site 7: Model 1, B = -0.30, t = -1.00, P = .32; Model 2, B = -0.30, t = -1.00, P = .32
Number of concepts (Model 2 only): B = 0.12, t = 1.51, P = .14
Proportion of ambiguous concepts (Model 2 only): B = 0.84, t = 0.77, P = .44
Proportion of historical concepts (Model 2 only): B = 0.28, t = 0.25, P = .80
CDS: clinical decision support.

DISCUSSION
To our knowledge, this is the first study to quantify the association between the feasibility of CDS implementation and features of clinical guidelines. Our linear mixed models show that the need for new data collection was predictive of lower implementation feasibility, whereas the number of clinical concepts in each recommendation, the need for historical data, and the ambiguity of clinical concepts were not predictive of implementation feasibility. Our findings suggest that the need for additional data collection is an essential factor in the technical feasibility of proposed CDS tools. The CDS implementers we interviewed reported the feasibility of implementing the CW recommendations as generally low, entailing some level of difficulty at all 7 sampled sites. Further, all the recommendations we sampled required at least some additional data entry by providers. While additional data collection was required to implement the recommendations and data collection impacted feasibility, our data suggest that interviewees did not, in general, perceive the additional data collection to be prohibitive. In our first model, lower need for additional data predicted greater implementation feasibility. Although our model is simple and intuitive, it does show that the availability of existing data, rather than the complexity of data requirements, is the strongest predictor of CDS implementation feasibility. Our results provide a hypothesis that, ideally, would be assessed empirically in future studies. Nearly two-thirds of the variation in feasibility remains unexplained by our model.
We are currently analyzing the comments from our interviews to identify other factors that impact the feasibility of implementing CDS tools and will report this in a future article. Our study has limitations that may impact generalizability. Our sample was limited to 10 CW recommendations that are focused on the ED setting and preventing errors of commission (ie, they are all about what not to do). Our feasibility assessment approach should be repeated for other settings and for other kinds of guidelines, such as those focused on preventing errors of omission (ie, reminders to perform certain procedures). In addition, our study was limited to organizations using only 2 vendor systems focused on the tertiary care market (ie, Epic and Cerner); however, these 2 systems are the 2 most prevalent EHR systems and represent a large market share. Because we conducted our investigation using 7 academic medical centers and prevailing EHR systems, our results are likely to generalize to similar centers using the same products. However, our results may not generalize to nonacademic medical centers using other EHR products and most likely do not generalize to less resourced settings, such as independent community practices and safety net clinics. Finally, we did not assess the reliability and validity of our Likert-type scale questions, and do not know the extent to which the clinical experts’ and CDS implementers’ responses accurately reflect the actual data or system readiness of their organizations. Despite these limitations, our approach has 2 important strengths. First, although the lack of sufficiently structured and detailed data is a well-known barrier for CDS implementation,17–19 this problem is not well quantified. We believe we are the first to quantitatively and systematically assess the relationship between data availability and feasibility of CDS implementation. Second, we preserved the nature of CDS requests that implementers first see by presenting the logic in a state as close as possible to the original recommendations. Although many of the concepts referenced in the sampled recommendations were ambiguous, we did not provide operational definitions (beyond the CW recommendations or supporting references) when preparing the logic for CDS experts to review and rate. This allowed us to use our structured interviews to investigate and quantify the perceived “knowledge engineering” effort required to conceptualize and operationally define ambiguous concepts, which have been a known issue for decades20 and are a significant challenge for CDS planning and implementation.11 We found the feasibility for a sample of CW recommendations to be generally low, suggesting that organizations might have to commit substantial resources to their implementation. Our findings are consistent with a previous assessment of the 2008 ACEP clinical policies, which found that those recommendations were too vague, required additional physician input or knowledge for translation, and when translated would impede clinical workflow because of excessive data entry.21 Authors of CW and other recommendations can ease the burden of implementing recommendations into CDS by providing operational definitions and guidance for potential implementers. In some cases, it may be necessary to ask users to collect additional data that is critical to the logic of the CDS in order to ensure that the intervention functions as intended. 
However, given the rising frustration around increasing data entry requirements, the cost of any additional data capture for CDS should be weighed carefully.22 Organizations can also consider using surrogate data or natural language processing approaches to provide the needed data at a lower burden to providers.22 These are all prominent and active issues in clinical informatics, and our results are not unexpected. What this work does contribute, however, is a quantification of the relationship between data availability and CDS implementation in a high-priority domain. Currently, assessing feasibility for implementing new CDS is a time- and resource-intensive process that is unique to each organization. Our work demonstrates that we can characterize and quantify the features of clinical practice recommendations and use those characteristics to predict the feasibility of implementing clinical practice recommendations as CDS tools. Clearly defined CDS data requirements will help implementers assess CDS implementation feasibility and effort, and the use of data representation standards will enable the reuse of tools and possible automation of the feasibility assessment for CDS. Widespread adoption of the U.S. Core Data for Interoperability and other common data elements would enable health systems and EHR vendors to understand the availability of clinical data that matches CDS requirements, leading to faster feasibility assessment and implementation.23 To support this vision, clinical specialty societies can identify and promote standard data elements that will support CDS, and subsequent quality measurement, for emerging recommendations.

CONCLUSIONS
As is, the CW recommendations we examined require significant work for feasibility assessment and implementation. A critical determinant of guideline implementation feasibility is the availability of existing data that match the requirements of the CDS, averting the need for additional data entry. Guideline authors can reduce the burden of assessing a recommendation's readiness for CDS by including operational definitions for guideline logic, ideally mapped to reference data standards. The adoption of standard clinical data elements in EHR systems and guideline logic can support the automated assessment of a guideline's readiness for CDS within the EHR system, as well as the system's readiness for the CDS.

FUNDING
This work was supported by National Library of Medicine grant no. R15 LM012335-01A1 (to RLR, CJS, BJD, TT, DJH, KK, GDF). BD was supported by the Robert Wood Johnson Foundation as a Future of Nursing Scholar. The views presented here are solely the responsibility of the authors and do not necessarily represent the official views of the National Library of Medicine, the National Institutes of Health, or the Robert Wood Johnson Foundation.

AUTHOR CONTRIBUTIONS
BD prepared the guidelines for analysis with the assistance and domain expertise of GDF, CS, and TT. The interviews were conducted by RR, GDF, and CS. DH conducted the analysis of the data. RR drafted the manuscript, and all authors contributed to the writing. All authors contributed to study concept and design, interpretation of data, and critical revision of the manuscript.

CONFLICT OF INTEREST STATEMENT
KK reports honoraria, consulting, or sponsored research related to clinical decision support or standards-based interoperability with McKesson InterQual; Hitachi; Premier; Klesis Healthcare; Vanderbilt University; the University of Washington; the University of California, San Francisco; and the U.S.
Office of the National Coordinator for Health IT (via ESAC, JBS International, A+ Government Solutions, Hausam Consulting, and Security Risk Solutions). These relationships have no direct relevance to the manuscript but are reported in the interest of full disclosure. CJS reports consulting or sponsored research related to clinical decision support with the Council of State and Territorial Epidemiologists, Hitachi, and HLN Consulting; these relationships have no direct relevance to the manuscript but are reported in the interest of full disclosure. The other authors have no competing interests related to the content or publication of this manuscript.

REFERENCES
1 Murphy EV. Clinical decision support: effectiveness in improving quality processes and clinical outcomes and factors that may influence success. Yale J Biol Med 2014;87(2):187-97.
2 Teich JM, Glaser JP, Beckley RF, et al. The Brigham integrated computing system (BICS): advanced clinical systems in an academic hospital environment. Int J Med Inform 1999;54(3):197-208.
3 Swenson CJ, Appel A, Sheehan M, et al. Using information technology to improve adult immunization delivery in an integrated urban health system. Jt Comm J Qual Patient Saf 2012;38(1):15-23.
4 Heekin AM, Kontor J, Sax HC, Keller MS, Wellington A, Weingarten S. Choosing Wisely clinical decision support adherence and associated inpatient outcomes. Am J Manag Care 2018;24(8):361-6.
5 Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med 2014;12(6):573-6.
6 Osheroff JA, Teich JM, Middleton B, Steen EB, Wright A, Detmer DE. A roadmap for national action on clinical decision support. J Am Med Inform Assoc 2007;14(2):141-5.
7 Osheroff JA. Improving Medication Use and Outcomes with Clinical Decision Support: A Step by Step Guide. Boca Raton, FL: Taylor and Francis; 2015.
8 Richardson JE, Middleton B, Osheroff JA, Callaham M, Marcial L, Blumenfeld BH. The PCOR CDS-LN Environmental Scan: Spurring Action by Identifying Barriers and Facilitators to the Dissemination of PCOR through PCOR-Based Clinical Decision Support. Research Triangle Park, NC: Patient-Centered Outcomes Research Clinical Decision Support Learning Network; 2016.
9 American Board of Internal Medicine. Choosing Wisely. 2015. http://www.choosingwisely.org/. Accessed December 5, 2019.
10 American College of Emergency Physicians. Choosing Wisely: five things physicians and patients should question. 2013. http://www.choosingwisely.org/societies/american-college-of-emergency-physicians/. Accessed December 5, 2019.
11 Boxwala AA, Rocha BH, Maviglia S, et al. A multi-layered framework for disseminating knowledge for computer-based decision support. J Am Med Inform Assoc 2011;18(Suppl 1):i132-9.
12 Shiffman RN, Michel G, Essaihi A, Thornquist E. Bridging the guideline implementation gap: a systematic, document-centered approach to guideline implementation. J Am Med Inform Assoc 2004;11(5):418-26.
13 Tso GJ, Tu SW, Oshiro C, et al. Automating guidelines for clinical decision support: knowledge engineering and implementation. AMIA Annu Symp Proc 2016;2016:1189-98.
14 Douthit BJ, Richesson RL. Emergency department clinician perspectives on the data availability to implement clinical decision support tools for five clinical practice guidelines. AMIA Jt Summits Transl Sci Proc 2018;2017:340-8.
15 Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap): a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009;42(2):377-81.
16 Nezlek JB. An introduction to multilevel modeling for social and personality psychology. Soc Personal Psychol Compass 2008;2(2):842-60.
17 Ash JS, Sittig DF, Guappone KP, et al. Recommended practices for computerized clinical decision support and knowledge management in community settings: a qualitative study. BMC Med Inform Decis Mak 2012;12(1):6.
18 Greenes RA, Bates DW, Kawamoto K, Middleton B, Osheroff J, Shahar Y. Clinical decision support models and frameworks: seeking to address research issues underlying implementation successes and failures. J Biomed Inform 2018;78:134-43.
19 Freimuth RR, Formea CM, Hoffman JM, Matey E, Peterson JF, Boyce RD. Implementing genomic clinical decision support for drug-based precision medicine. CPT Pharmacometrics Syst Pharmacol 2017;6(3):153-5.
20 Patel VL, Allen VG, Arocha JF, Shortliffe EH. Representing clinical guidelines in GLIF: individual and collaborative expertise. J Am Med Inform Assoc 1998;5(5):467-83.
21 Melnick ER, Nielson JA, Finnell JT, et al. Delphi consensus on the feasibility of translating the ACEP clinical policies into computerized clinical decision support. Ann Emerg Med 2010;56(4):317-20.
22 Kuhn T, Basch P, Barr M, Yackel T; Medical Informatics Committee of the American College of Physicians. Clinical documentation in the 21st century: executive summary of a policy position paper from the American College of Physicians. Ann Intern Med 2015;162(4):301-3.
23 Office of the National Coordinator for Health Information Technology. U.S. Core Data for Interoperability (USCDI); 2019 ISA Reference Edition. 2018. https://www.healthit.gov/isa/us-core-data-interoperability-uscdi. Accessed December 5, 2019.

© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).

Journal: Journal of the American Medical Informatics Association (Oxford University Press)
Published: April 1, 2020
