Mapping of HIE CT terms to LOINC®: analysis of content-dependent coverage and coverage improvement through new term creation

Abstract

Objective: We describe and evaluate the mapping of computed tomography (CT) terms from 40 hospitals participating in a health information exchange (HIE) to a standard terminology.

Methods: Proprietary CT exam terms and corresponding exam frequency data were obtained from 40 participant HIE sites that transmitted radiology data to the HIE from January 2013 through October 2015. These terms were mapped to the Logical Observation Identifiers Names and Codes (LOINC®) terminology using the Regenstrief LOINC mapping assistant (RELMA) beginning in January 2016. Terms without an initial LOINC match were submitted to LOINC as new term requests on an ongoing basis. After new LOINC terms were created, proprietary terms without an initial match were reviewed and mapped to these new LOINC terms where appropriate. Content type and token coverage were calculated for the LOINC version at the time of initial mapping (v2.54) and for the most recently released version at the time of our analysis (v2.63). Descriptive analysis was performed to assess for significant differences in content-dependent coverage between the 2 versions.

Results: LOINC's content type and token coverages of HIE CT exam terms for version 2.54 were 83% and 95%, respectively. Two hundred fifteen new LOINC CT terms were created in the interval between the releases of versions 2.54 and 2.63, and content type and token coverages increased to 93% and 99%, respectively (P < .001).

Conclusion: LOINC's content type coverage of proprietary CT terms across 40 HIE sites was 83% but improved significantly to 93% following new term creation.

Keywords: data integration and standardization, clinical decision support systems, computed tomography, health information exchange, radiation dosage, alerting systems

INTRODUCTION

Background and significance

Over the last decade, information technology has made possible the secure sharing of clinical data through health information exchanges (HIEs).1 This enables clinicians to obtain information about patients seeking care across multiple organizations, not only aiding clinical decision making but also avoiding repeat examinations that might be unnecessary. Previous work found that approximately 3% of patients in a regional health network in New York City had computed tomography (CT) exams performed at more than one location. Further, more than 50% of these patients had the exact same CT at more than one site.2 These crossover CT scans are potentially avoidable and motivate a prior CT alerting system that would notify ordering physicians of previous CTs from other sites that do not otherwise share clinical data. Such a system does not currently exist, in part because most sites performing CTs use proprietary, site-specific radiology procedure terms that impede interoperability.
Mapping these local terms to standardized terms is a foundational first step toward a prior CT alert system.3,4 Until 2015, there were 2 predominant terminology standards for radiology orderables: (1) the RadLex™ Playbook (http://playbook.radlex.org), based on the Radiological Society of North America (RSNA) RadLex lexicon,5 and (2) the Logical Observation Identifiers Names and Codes (LOINC®) standard (https://loinc.org), developed by the Regenstrief Institute.6 In December 2015, these 2 radiology terminology standards were harmonized into a single LOINC/RSNA Radiology Playbook standard.7 The unified terminology has become the preferred standard for radiology procedures. We previously evaluated the performance of LOINC and RadLex separately to characterize CT content coverage from 3 sites in an HIE.8 Together, these standard terminologies accounted for nearly 99% of the exams performed.

Objective

The objectives of this study are: (1) to assess content type and content token (content-dependent) coverage of the LOINC/RSNA Radiology Playbook and mapping reliability for CTs across the entire Healthix HIE (a large HIE serving the New York metropolitan area), (2) to describe the process for requesting new LOINC terms, and (3) to analyze the specific contribution of newly created LOINC terms to CT exam term coverage.

METHODS

Data sources and retrieval

Comprehensive lists of CT exam codes and descriptive names were obtained as follows for each of the 40 Healthix sites that transmitted radiology data from January 1, 2013, through October 31, 2015.9 Each site's database of radiology exams performed over the study period was queried for unique radiology exam names and codes. To identify CTs, we extracted exam names that matched filters with strings likely to indicate CT (eg, CT, CAT, Computed Tomography). We then applied filters to exclude exams that contained the string "CT" but were non-CT exams (eg, Nuclear Medicine Octreotide Study). We further categorized these exams into diagnostic, procedural, and administrative/post-processing CTs with additional filters. Procedural CTs were identified through string searches such as "guidance" and "biopsy." Administrative and post-processing CTs were identified with strings such as "3D," "consult," "outside," and "multiplanar." CT exams not matching these filters were placed in the diagnostic category by default. The resulting list was manually reviewed by 1 author (AOB), a board-certified radiologist and informatician, to ensure correct categorization and that no residual non-CT exams remained.
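To make the filtering step concrete, the following R sketch illustrates keyword-based identification and categorization of CT exam names. The exam names and keyword lists shown are illustrative assumptions, not the authors' actual filters, which were more extensive and were followed by manual review.

```r
# Illustrative only: keyword filters approximating the approach described above.
exam_names <- c("CT HEAD WO CONTRAST",
                "CAT SCAN ABDOMEN/PELVIS W IV CONTRAST",
                "NUCLEAR MEDICINE OCTREOTIDE STUDY",   # contains "CT" but is not a CT exam
                "CT GUIDED LIVER BIOPSY",
                "CT 3D RECONSTRUCTION")                # invented local terms

likely_ct    <- grepl("CT|CAT|COMPUTED TOMOGRAPHY", exam_names, ignore.case = TRUE)
known_non_ct <- grepl("OCTREOTIDE", exam_names, ignore.case = TRUE)  # exclusion strings (abbreviated here)
ct_exams     <- exam_names[likely_ct & !known_non_ct]

# Default category is diagnostic; procedural and administrative/post-processing
# categories are assigned when their keywords appear in the exam name.
category <- ifelse(grepl("GUID|BIOPSY|DRAIN|ASPIRAT", ct_exams, ignore.case = TRUE), "procedural",
            ifelse(grepl("3D|CONSULT|OUTSIDE|MULTIPLANAR", ct_exams, ignore.case = TRUE),
                   "administrative/post-processing",
                   "diagnostic"))
data.frame(exam_name = ct_exams, category = category)
```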
Standard terminologies

The LOINC/RSNA Radiology Playbook, initially released in December 2015 as part of LOINC version 2.54, was developed through collaboration between the Regenstrief Institute and the RSNA, with support from the National Institute of Biomedical Imaging and Bioengineering (NIBIB).7,10 This no-cost product combines and unifies useful aspects of LOINC Radiology and the RadLex Playbook, providing a single standard for representing radiology procedures. In the new model, radiology procedures are identified with LOINC codes and given a structured name with attributes that are linked both to LOINC parts and to concepts in RadLex clinical terms. It also maps previously used RadLex Playbook Identifiers (RPIDs), the unique codes used in the RadLex Playbook to identify specific imaging studies, to equivalent LOINC codes. For example, for the imaging study "CT Head w/o contrast IV," the LOINC/RSNA Playbook displays both the LOINC code (30799-1) and the RPID (RPID22). The initial release, published with LOINC version 2.54 (December 2015), was limited to CT procedures. The LOINC/RSNA Radiology Playbook was then incrementally updated with the twice-yearly LOINC releases that occur in June and December; overall, there were 5 major LOINC releases during the study period. These releases expanded the unified content coverage to include MRI, x-ray, ultrasound, nuclear medicine, mammography, and other imaging modalities.11 The most recent release, version 2.63 (December 2017), represents the culmination of the harmonization process. With the unification complete, the standard is now jointly maintained on an ongoing basis by both organizations.

Mapping

Since LOINC codes serve as the primary identifiers in the LOINC/RSNA Radiology Playbook, we elected to map Healthix site CT codes to LOINC codes. From January 2016 to July 2017, we mapped proprietary CT exam terms to LOINC on a site-by-site basis. One author (AOB) mapped all terms, and a randomly selected subset of 2000 local Healthix CT terms was also mapped by another author (FLT) to evaluate inter-rater reliability. All mapping was performed on a term-by-term basis using the RELMA program.

Categorization of terms

CT exams at each site were categorized as (1) diagnostic, (2) procedural, or (3) post-processing/administrative by the filtering method described earlier. In addition, we categorized each term into 2 categories reflecting its ease of mapping to LOINC using RELMA: straightforward and inconclusive. Straightforward terms are those that could be mapped to LOINC without any further investigation. For the purposes of this analysis, an inconclusive term is defined as one that cannot be mapped to a LOINC term using RELMA based on its descriptive exam name alone. Inconclusive terms are a heterogeneous group that includes terms with straightforward exam descriptions but without a clear LOINC match, as well as terms with ambiguous exam descriptions that require further investigation. The ambiguous terms can be further subdivided into those with and those without a LOINC match following further investigation. Prior work has shown that inconclusive local terms require significantly greater resources and effort to map.12 Therefore, we tracked the prevalence of straightforward and inconclusive terms, including the ambiguous subcategory of inconclusive terms. To identify distinguishing characteristics of straightforward and ambiguous inconclusive terms, we also analyzed semantic and syntactic differences in a random sample of 100 terms from each category. We chose to contrast straightforward terms with the ambiguous subcategory of inconclusive codes, as opposed to the entire category of inconclusive codes (which would also include terms with straightforward exam name attributes but without a LOINC match), because we believed this contrast would better illuminate best practices for local naming conventions.

Requesting new LOINC terms

For exams without a matching LOINC term, we created temporary "placeholder" terms in our database and submitted periodic requests to the Regenstrief Institute for the creation of new LOINC terms. We followed the standard request procedure described on the LOINC website.13 Because we were using the RELMA program for mapping, we used its built-in features for managing and uploading our LOINC requests. Typically, we sent requests containing a batch of approximately 50 new terms. Representatives from Regenstrief typically contacted us with any questions 4 to 6 weeks after submission. After 2 to 3 months, they would send a summary response spreadsheet that included the newly created LOINC terms as well as those for which LOINC decided not to create a new term, along with the rationale. Per LOINC's policy, newly created terms that have passed the quality assurance process are published on the LOINC website in advance of the next release.14 As new LOINC terms were created, we replaced the temporary placeholder terms in our database, and the mappings at all sites were updated.
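The placeholder mechanism can be pictured as a small local-to-standard mapping table. The sketch below is an assumed, simplified illustration with hypothetical local codes and placeholder identifiers, not the authors' actual database schema; the LOINC codes shown (30799-1, 83310-3) are ones cited elsewhere in this article.

```r
# Assumed, simplified mapping table: local exams either carry a LOINC code or a
# temporary placeholder while a new-term request is pending.
term_map <- data.frame(
  local_code    = c("SITE-CT-001", "SITE-CT-002", "SITE-CT-003"),   # hypothetical site codes
  local_name    = c("CT HEAD WO CONTRAST",
                    "CT THORACIC AND LUMBAR SPINE WO CONTRAST",
                    "CT OUTSIDE READ"),
  standard_code = c("30799-1", "PLACEHOLDER-0001", "PLACEHOLDER-0002"),
  stringsAsFactors = FALSE
)

# When a new release supplies a LOINC code for a pending request, swap it in
# wherever the corresponding placeholder is still used.
new_codes <- c("PLACEHOLDER-0001" = "83310-3")
pending <- term_map$standard_code %in% names(new_codes)
term_map$standard_code[pending] <- new_codes[term_map$standard_code[pending]]
term_map
```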
Coverage analysis

We used an established approach for evaluating the content of standard terminologies based on the following content-dependent metrics:15

Concept type coverage: the percentage of unique exam names (types) from a site that map to the standardized terminology. For example, if 85 of 100 exam names at an institution map to LOINC terms, the concept type coverage would be 85%.

Concept token coverage: the percentage of actual exams performed (tokens) that map to the standardized terminology. Similarly, if 95 000 of an institution's 100 000 annual exams map to LOINC terms, the concept token coverage would be 95%.

The distinction between these metrics is important. For example, although an institution may have low concept type coverage (eg, only 40% to 50% of its exam terms map to LOINC), it may still have high concept token coverage (> 90% of exams performed at that institution map to LOINC). Such a case indicates that a few terms account for most exams performed. High concept token coverage is particularly important for a future HIE-based prior CT alerting system.

Concept type and token coverages were calculated for each site and for all 40 Healthix sites combined. For each site and for all sites combined, we also performed content analysis based on ease of mapping (all terms vs inconclusive terms) and reason for exam (diagnostic, procedural, and post-processing terms). To assess LOINC's incremental content-dependent coverage, we analyzed concept type and token coverage prior to the creation of new LOINC terms based on our requests (LOINC version 2.54, December 2015) and after (LOINC version 2.63, December 2017).

To characterize LOINC's coverage in the context of our intended use case, we calculated the number of unique LOINC CT terms and temporary placeholder terms (for instances in which there was no appropriate LOINC term) that together account for the top 80%, 90%, 95%, and 99% of the total CT exams performed across the entire Healthix HIE. We then determined the number of those terms for which there was an appropriate LOINC term. We performed this analysis for LOINC versions before and after inclusion of the new LOINC terms (versions 2.54 vs 2.63) to assess incremental coverage. The rationale for this analysis was that even if not all local terms could be mapped to LOINC, coverage could still be sufficient for practical applications if the mapped terms covered a high percentage of the actual exams performed. In our future repeat CT alerting system, we anticipate that frequently performed exams would be among those most likely to be repeated across institutions.
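As a worked illustration of the two metrics defined above, the following R sketch computes type and token coverage from a per-site table of unique exam terms, their exam counts, and a flag indicating whether a LOINC match was found. The numbers are invented for illustration and are not the study data.

```r
# Invented per-site data: one row per unique local exam term.
terms <- data.frame(
  exam_name = c("CT HEAD WO CONTRAST",
                "CT ABDOMEN/PELVIS W IV CONTRAST",
                "CT CRANIAL/TEMPORAL W/O CONTRAST",
                "CT AMBULATORY"),
  n_exams   = c(12000, 9500, 40, 15),      # tokens: how often each term was performed
  has_loinc = c(TRUE, TRUE, FALSE, FALSE)  # whether the term mapped to a LOINC term
)

type_coverage  <- mean(terms$has_loinc)                                     # share of unique terms mapped
token_coverage <- sum(terms$n_exams[terms$has_loinc]) / sum(terms$n_exams)  # share of exams performed that are mapped
round(c(type = type_coverage, token = token_coverage), 3)
```

With these toy numbers, only half of the unique terms map (type coverage 50%), yet more than 99% of the exams performed are covered (token coverage), mirroring the distinction discussed above.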
All statistical analyses were performed using the R statistical computing package (https://www.r-project.org; Vienna, Austria). Median content type and token coverages across all sites were compared for LOINC versions 2.54 vs 2.63 using the Wilcoxon signed-rank test. Composite content type and token coverages for all sites combined, as well as LOINC's coverage of the terms that accounted for the top 80%, 90%, 95%, and 99% of exams performed, were compared for LOINC versions 2.54 vs 2.63 using the McNemar test. The Mount Sinai Institutional Review Board reviewed this study protocol and determined it to be "not human research" and exempt from formal IRB review.
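A minimal sketch of the paired comparisons named above, using base R's wilcox.test and mcnemar.test on invented data (the study's per-site and per-term values are not reproduced here):

```r
# Hypothetical per-site type coverage under the two LOINC versions (paired by site).
cov_v254 <- c(0.88, 0.79, 0.92, 0.85, 0.90, 0.76)
cov_v263 <- c(0.96, 0.90, 0.98, 0.94, 0.97, 0.88)
wilcox.test(cov_v254, cov_v263, paired = TRUE)   # Wilcoxon signed-rank test

# Hypothetical per-term mapping status (mapped vs not) under each version,
# compared as paired binary outcomes with McNemar's test.
mapped_v254 <- c(TRUE, TRUE, FALSE, FALSE, TRUE, FALSE, TRUE, FALSE)
mapped_v263 <- c(TRUE, TRUE, TRUE,  FALSE, TRUE, TRUE,  TRUE, TRUE)
mcnemar.test(table(mapped_v254, mapped_v263))
```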
RESULTS

Coverage analysis

There were 10 539 CT exam terms from 40 sites across 23 health systems. Of these, 9116 (86%) were diagnostic, 1261 (12%) were procedural, and 162 (< 2%) were post-processing/administrative. Using LOINC version 2.54, 23% (2462/10 539) of all exam terms were categorized as inconclusive. Inconclusive terms comprised 18% (1689/9116), 53% (664/1261), and 67% (109/162) of diagnostic, procedural, and administrative/post-processing code types, respectively.

A total of 215 new LOINC CT terms were created between versions 2.54 and 2.63, representing a 27% increase. Of these, 208 terms were created in response to our submissions. As new LOINC terms were created, the number of inconclusive terms decreased. Our sample included 963 exam terms with straightforward exam name attributes but for which there was no matching LOINC term at the time of mapping with version 2.54; these terms were initially classified as inconclusive. Eight hundred fifty-eight (89%) of these terms could be mapped to LOINC following new term creation and were therefore re-categorized as straightforward when mapping to LOINC version 2.63. An example of such a term is "CT Thoracic and Lumbar Spine WO contrast," which did not have a single clear matching LOINC term in version 2.54 but had a matching LOINC term (83310-3) at the time we re-assessed content coverage for version 2.63. The percentages of inconclusive terms, based on version 2.63 mapping, were 15% (1604/10 539), 9% (833/9116), 52% (662/1261), and 67% (109/162) of total, diagnostic, procedural, and administrative/post-processing terms, respectively.

Figure 1 depicts coverage of content type (1A and 1B) and concept token (1C and 1D) for all code types across all sites using LOINC versions 2.54 (2015) vs 2.63 (2017). Histogram 1A demonstrates improved content type coverage at many HIE sites between versions, with median rates increasing from 88.6% [IQR 78.1–92.2%] to 96.1% [IQR 90.7–97.9%] (Figure 1B) (P < .001). The median concept token coverage in 2015 was already high at 97.9% [IQR 94.7–99.2%], but this rate also improved significantly, to 99.6% [IQR 99.0–99.9%] (Figure 1D) (P < .001).

Figure 1. Content type and token analysis for all terms. Histograms of type (A) and token (C) coverage depict an increase between LOINC versions 2.54 and 2.63 (note the difference in coverage percentage scale in C). Box-and-whisker plots (B and D) between 2015 and 2017 demonstrate a statistically significant increase in median type coverage from 88.56% to 96.05% and in median token coverage from 97.92% to 99.60% (P < .001).

Figure 2 depicts the content type and token coverage for inconclusive terms across all Healthix sites using LOINC versions 2.54 vs 2.63. The median concept type coverage for inconclusive terms in LOINC version 2.54 was 26.0% [IQR 13.5–40.0%]. New LOINC term creation significantly increased the median concept type coverage for inconclusive terms in LOINC version 2.63 to 57.1% [IQR 45.8–66.7%] (P < .001). Terms that remained inconclusive when mapping to LOINC version 2.63 included those with straightforward attributes but for which we were awaiting new term creation from LOINC (or a response as to why a new LOINC term would not be created), as well as terms with ambiguous descriptive names, some of which had matching LOINC terms and some of which did not. The improvement in concept type coverage of inconclusive terms from version 2.54 to 2.63 is related to improved coverage of ambiguous terms following new LOINC term creation. For example, the ambiguous term "CT CRANIAL/TEMPORAL W/O CONTRAST," which required report review to ascertain that it represented a CT of the head and temporal bones without contrast, did not have a matching LOINC term in version 2.54 but had a matching LOINC term (83302-0) by the time of re-assessment with version 2.63. Additionally, the reduction in the overall number of inconclusive codes when mapping to version 2.63 vs 2.54, due to fewer terms with straightforward attributes but without a LOINC match, helped improve inconclusive concept type coverage.

Figure 2. Content type and token analysis for inconclusive terms. Histograms of type (A) and token (C) coverage depict an increase in inconclusive terms mapped. Box-and-whisker plots of type and token coverage (B and D) between versions 2.54 and 2.63 demonstrate a statistically significant increase (P < .001).

A composite summary of content type and content token coverage of LOINC versions 2.54 (2015) and 2.63 (2017) for all code types within the Healthix HIE is presented in Table 1.
Table 1. LOINC content type and token coverages, versions 2.54 vs 2.63. Rows are grouped by ease of mapping (total vs inconclusive); columns give coverage by exam purpose.

                                 Total(a)   Diagnostic(a)   Procedural   Admin/post-processing(a)
Concept type coverage
  v2.54  Total                   0.83       0.86            0.64         0.34
  v2.54  Inconclusive            0.26       0.25            0.32         0.02
  v2.63  Total                   0.93       0.98            0.65         0.43
  v2.63  Inconclusive            0.56       0.81            0.33         0.15
Concept token coverage
  v2.54  Total                   0.95       0.94            0.72         0.67
  v2.54  Inconclusive            0.26       0.27            0.37         0.03
  v2.63  Total                   0.99       0.97            0.72         0.69
  v2.63  Inconclusive            0.73       0.88            0.37         0.08

(a) In these categories, there was a significant improvement in content type and content token coverage, for all code types and for inconclusive terms, from LOINC version 2.54 to 2.63 (P < .001, McNemar test).

For all code types combined, as well as for diagnostic and administrative/post-processing terms, there was a significant improvement in both content type and token coverage between versions 2.54 and 2.63. This improvement was significant both for all code types based on ease of mapping and for the subset of inconclusive terms (P < .001, McNemar test). There was no significant change in content type or token coverage from version 2.54 to 2.63 for procedural terms, including their inconclusive subsets.

Analysis of LOINC versions 2.54 and 2.63 coverage of the unique exam terms that account for the top 80%, 90%, 95%, and 99% of exams performed across all sites combined is displayed in Table 2.

Table 2. LOINC versions 2.54 and 2.63 coverage of the CT terms that account for the top 80%, 90%, 95%, and 99% of exams performed across Healthix HIE sites combined

Top "X" % by frequency of total CT exams performed across the HIE       80%    90%    95%    99%
LOINC and placeholder terms that together account for the top X%         15     36     71    200
LOINC terms (version 2.54) among those terms                             14     33     62    151(a)
LOINC terms (version 2.63) among those terms                             15     36     68    179(a)

(a) Significant improvement in LOINC term coverage between versions 2.54 and 2.63 for the terms accounting for the top 99% of exams performed (P < .001, McNemar test).

Table 2 demonstrates that the top 80% of all CT exams performed throughout the HIE is represented by 15 standard exam terms (LOINC terms or, in instances in which there is no LOINC match, temporary placeholder terms). Similarly, the top 90%, 95%, and 99% of all CT exams performed are accounted for by 36, 71, and 200 terms, respectively. LOINC version 2.54 had terms for 14 of the 15 exams that accounted for the top 80% of exams performed. The lone exam for which there was no matching LOINC term was "PET+CT Skull base to mid-thigh W 18F-FDG IV." By LOINC version 2.63, a new term had been created for this exam (81554-8), enabling all 15 terms accounting for the top 80% of exams performed to be mapped to LOINC. Similar improvements in LOINC coverage for terms accounting for the top 90%, 95%, and 99% of exams performed occurred between versions 2.54 and 2.63, and the improvement for the terms accounting for the top 99% of exams performed was significant (P < .001, McNemar chi-squared test). Similar to Vreeman et al.'s16 analysis of laboratory test volume in an HIE, we found a highly skewed distribution of exam frequency. A relatively small number of frequently performed exam types accounted for the vast majority of exam volume across all sites (Figure 3).
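The top-X% analysis behind Table 2 can be reproduced with a short cumulative-frequency computation. The sketch below uses invented exam counts together with a mix of LOINC codes cited in this article and hypothetical placeholder codes.

```r
# Invented combined HIE-wide table: one row per standard term (LOINC or placeholder),
# with the number of exams performed under that term.
combined <- data.frame(
  standard_code = c("30799-1", "44115-4", "PLACEHOLDER-0007", "36064-4", "PLACEHOLDER-0011"),
  n_exams       = c(52000, 31000, 8000, 2500, 400),
  is_loinc      = c(TRUE, TRUE, FALSE, TRUE, FALSE)
)

combined <- combined[order(-combined$n_exams), ]              # most frequently performed first
cum_frac <- cumsum(combined$n_exams) / sum(combined$n_exams)  # cumulative share of exam volume

# Smallest set of terms reaching each volume threshold, and how many of them are LOINC terms.
terms_for_top <- function(p) {
  k <- which(cum_frac >= p)[1]
  c(threshold = p, total_terms = k, loinc_terms = sum(combined$is_loinc[seq_len(k)]))
}
t(sapply(c(0.80, 0.90, 0.95, 0.99), terms_for_top))
```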
Figure 3. Cumulative distribution curve showing the most frequently performed exams across sites over the 3-year study period, in descending order of frequency. (Due to space constraints, the horizontal axis is truncated to the top 30 most frequently performed exam types.)

Ambiguous term analysis

Straightforward exam names typically contained unambiguous specification of imaging modality, anatomic region, presence of contrast, and timing and route of contrast administration (eg, CT neck WO&W contrast IV). For procedural terms, a specific action (eg, biopsy) and anatomic object (eg, liver) were also associated with straightforward naming. In contrast, ambiguous exam names often lack these distinctions. Ambiguous exam names from 100 randomly selected terms were examined and characterized by grouping them into categories, which are listed with their relative frequencies in Table 3. The categories are not mutually exclusive, so the counts total more than 100.

Table 3. Relative frequencies of ambiguous exam categories among 100 randomly chosen ambiguous exam terms (categories are not mutually exclusive)

Exam name lacking enough detail to map without further investigation (n = 31). Example: CT Abdomen/Pelvis for Abdominal Aortic Aneurysm; it is uncertain from the exam name whether this maps to CT Abdomen/Pelvis with IV contrast or to CT Abdomen/Pelvis without and with IV contrast.

Anatomic focus in the exam descriptive name is non-specific or ambiguous (n = 29). Example: CT orbit/sella/ear.

Exam name contains a term with no meaning outside a specific facility (n = 39). Example: Code T Head CT.

Exam name attributes outside the LOINC model scope, including consultative and administrative exam names (n = 12). Examples: CT outside read (consultative study), CT ambulatory (administrative study).

Exam name groups multiple imaging-guided procedures under one term (n = 5). Example: CT guided needle placement, biopsy, or aspiration.

Exam name includes a highly specific reason for exam (n = 4). Example: CT Angiography of Chest/Abdomen/Pelvis for transcatheter aortic valve replacement (TAVR).

Exam name contains a non-standard or extremely specific imaging location and is performed with low frequency (< 10 exams over the 3-year period), such that mapping to a less granular standard term can be argued (n = 4). Examples: CT Left Abdomen/Pelvis mapped to CT Abdomen/Pelvis (44115-4); CT Left atrial appendage with IV contrast mapped to CT Heart W IV contrast (79089-9).

Exam name contains a very specific maneuver or, for a CT-guided procedure, a very specific approach, and is performed with low frequency, such that mapping to a less granular term can be argued (n = 2). Examples: CT Left shoulder with anteversion mapped to CT Left shoulder (36064-4); CT guided pelvic abscess drainage with transrectal approach mapped to CT Guidance for drainage of abscess of Pelvis (42286-5).

Exam name contains multiple regions imaged and a single anatomic focus, where the anatomic focus is limited to only one of the regions imaged (n = 2). Example: CT Abdomen/Pelvis Liver W IV contrast.

Inter-rater agreement analysis

The overall inter-rater agreement in mapping a sample of 2000 local terms to LOINC, measured by Cohen's kappa, was 0.89, indicating strong agreement in the selection of matching LOINC terms. For categorization of terms into straightforward vs inconclusive categories, the kappa inter-rater agreement was 0.51, indicating moderate agreement.
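Cohen's kappa for the mapping comparison can be computed directly from the two raters' assignments. The sketch below uses a handful of invented assignments (with LOINC codes mentioned in this article) rather than the 2000-term study sample.

```r
# Invented paired assignments: the LOINC code (or "no match") each rater chose per local term.
rater1 <- c("30799-1", "44115-4", "83310-3", "no match", "30615-9", "44115-4")
rater2 <- c("30799-1", "44115-4", "83302-0", "no match", "30615-9", "44115-4")

levels_all <- union(rater1, rater2)                                 # common category set for both raters
p1 <- table(factor(rater1, levels = levels_all)) / length(rater1)   # rater 1 marginal proportions
p2 <- table(factor(rater2, levels = levels_all)) / length(rater2)   # rater 2 marginal proportions

p_observed <- mean(rater1 == rater2)   # observed agreement
p_expected <- sum(p1 * p2)             # agreement expected by chance
kappa <- (p_observed - p_expected) / (1 - p_expected)
kappa
```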
DISCUSSION

Overall, we found that LOINC had good coverage of the CT exams performed in our HIE, with improvement as new terms were created during the study period. LOINC version 2.54 had high composite content type and content token coverages of 83% and 95%, respectively, which increased significantly to 93% and 99%, respectively, in version 2.63 after the addition of 215 new LOINC CT terms. For inconclusive terms, the improvements were more dramatic: type coverage rose from 26% to 56%, and token coverage from 26% to 73%. Our findings are consistent with earlier studies that demonstrated LOINC's high coverage of radiology procedures at institutions participating in an HIE.17,18

At the time of this analysis, decisions from LOINC regarding 197 newly requested terms were pending. We expect the concept type and token coverages to improve further following creation of additional LOINC terms. As new imaging protocols are created to address clinical questions, institutional exam order lists continue to grow. Because of this ongoing evolution, and because some local terms (eg, administrative ones) are not within LOINC's scope, LOINC's content type and token coverage for a large HIE will likely never reach 100%. Yet we have shown that LOINC's good content coverage can continue to be expanded because of its open development model and its straightforward process for end-user new term requests through RELMA. Our results demonstrate significantly improved content-dependent coverage through this method. As the steward of LOINC, Regenstrief values end-user feedback and was highly receptive to new term requests. The process involves careful scrutiny of each request to ensure fit with LOINC's model and scope, avoid duplication, and assess applicability to the broader healthcare community. Through this model, and with an active user base, LOINC is positioned for ongoing development capable of maintaining steady content-dependent coverage.

In this study, LOINC's content type and token coverages for procedural CTs did not significantly improve from version 2.54 to 2.63. The lack of improvement was due in part to our method of prioritizing the mapping of local terms. We first mapped diagnostic terms at each HIE site because they are more likely to be repeated; thus, our initial submissions for new LOINC terms primarily included diagnostic terms, and requests for new procedural terms were submitted in later batches. As these requests are processed by Regenstrief, additional terms will be created and included in subsequent LOINC releases, at which point we expect content-dependent coverage for procedural terms to improve. It should also be noted that content-dependent coverage was lower for administrative and post-processing terms than for diagnostic and procedural terms, primarily because some administrative terms are not in LOINC's scope.

On occasion, Regenstrief declines to create a new LOINC term from user requests. In our study, Regenstrief in some cases suggested mapping to an existing LOINC term rather than creating a new one, often because LOINC chooses not to model terms to the same granular specifications that some local terms employ. For example, for the proposed term "CT male pelvis WO contrast," Regenstrief suggested mapping to the existing LOINC term "CT pelvis WO contrast" (30615-9) to avoid separate terms for the male and female pelvis. In these instances, we have thus far elected to accept Regenstrief's mapping recommendations. Regenstrief also declined to create a new term, without suggesting an existing one, when a requested term was too non-specific or outside LOINC's scope. For example, Regenstrief declined to create a term for "CT exam of Head, Face, or Neck region" because such broad anatomic coverage was too non-specific and could denote an exam of any one of three regions. Understandably, LOINC's modeling policy preserves anatomic specificity and also offers an "unspecified body region" variant, but it does not model terms with either/or regions. In these instances, we elected to convert the temporary placeholder term into a permanent HIE term, so that these types of exams can still be captured in our prior CT alerting system.

FUTURE CONSIDERATIONS

We hope to expand our alerting system beyond CT to include all imaging modalities (eg, MRI and x-ray), which will necessitate mapping local terms for these modalities to LOINC. Once that mapping is complete, we plan to conduct a similar analysis of LOINC's concept coverage for other imaging modality terms, including an analysis of coverage improvement following new term creation.
Following initial implementation of the CT mappings, we expect local sources to continually evolve their terms by adding terms for new imaging studies and modifying codes for existing terms due to health care system consolidation and implementation of new electronic medical record (EMR) or radiology information systems (RIS). Vreeman et al.19 reported that 2 years after mapping laboratory and radiology terms to LOINC in the Indiana Network for Patient Care, half as many new local terms were added as in the initial implementation. Successful implementation of our mapping will therefore require a system, similar to that described by Vreeman et al.,19 to alert the HIE to new local terms, in which an exception browser captures un-mapped terms and places them into a queue for review.
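A minimal sketch, under assumed data structures rather than the cited exception browser, of how newly arriving local terms without a mapping could be flagged and queued for review:

```r
# Assumed inputs: local exam names already mapped, and names arriving in a new feed (both illustrative).
mapped_names <- c("CT HEAD WO CONTRAST", "CT ABDOMEN/PELVIS W IV CONTRAST")
incoming     <- c("CT HEAD WO CONTRAST", "CT CARDIAC CALCIUM SCORING WO CONTRAST")

unmapped <- setdiff(incoming, mapped_names)   # terms the exchange has not yet mapped
review_queue <- data.frame(local_name = unmapped,
                           received   = Sys.Date(),
                           status     = "pending review",
                           stringsAsFactors = FALSE)
review_queue
```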
The present study included only a limited expert review to characterize ambiguous terms into subcategories, based on a random sample of 100 terms. While beyond the scope of this study, a more comprehensive analysis may identify additional characteristics and features that could help inform strategies to map these terms more efficiently.

LIMITATIONS

We defined inconclusive terms as those that cannot be mapped to LOINC based solely on the exam name. By this definition, terms with relatively straightforward exam names but without a current LOINC match were included along with terms that have ambiguous descriptive names. We considered creating a separate category for these relatively straightforward terms but decided to include them because we judged the combined definition to be less subjective and less subject to variance.

Mapping local terms to standard terminologies is a complex process that involves expert judgment. In this study we defined ambiguous terms as the subset of inconclusive terms whose exam descriptions required further investigation before mapping. Applying this definition depends somewhat on the mapper's prior experience. For example, when presented with "CT Chest IELCAP wo contrast," those familiar with the International Early Lung Cancer Action Program (I-ELCAP) would recognize this exam as a low-dose chest CT without contrast for lung cancer screening, likely map it to the LOINC term 79086-5 (CT Chest for screening WO contrast), and categorize the exam as straightforward.20 Someone unfamiliar with the I-ELCAP study or this acronym might categorize the exam as ambiguous, requiring research into I-ELCAP prior to mapping. Another mapper might categorize the exam as ambiguous because LOINC does not model "low dose" in its terms. (LOINC does not separate low-dose exams, as what is considered low dose today may be considered standard dose in the future, and the definition of "low dose" may differ with variability in radiation dosing techniques across sites.) Yet another person might argue that "screening" implies "low dose." Fortunately, our inter-rater agreement analysis demonstrated moderate agreement (κ = 0.51) for categorization of terms into straightforward and inconclusive categories, suggesting that mappers with a similar background are likely to agree on term categorization.

CONCLUSION

LOINC provided excellent content-dependent coverage of CT exam terms across an HIE. In particular, LOINC's high initial content token coverage (95%) and high coverage of frequently performed exams demonstrate that it can readily support our use case of an HIE-wide prior CT alerting system. Moreover, through LOINC's relatively straightforward process for submitting new term proposals, its content-dependent coverage can be improved over time. Through this process, LOINC can evolve in step with the radiology discipline and maintain high content-dependent coverage by developing new terms.

FUNDING

This project was supported by grant number 1R01LM012196-01 from the National Library of Medicine. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the U.S. Department of Health and Human Services or any of its agencies.

CONTRIBUTORS

All authors [Paul Peng (PP), Anton Oscar Beitia (AOB), Daniel J. Vreeman (DJV), George T. Loo (GTL), Bradley N. Delman (BND), Frederick L. Thum (FLT), Tina Lowry (TL), and Jason S. Shapiro (JSS)] participated in conceptual design and the experimental approach. JSS was primarily responsible for the original conceptual design of the study. Data acquisition, data de-identification, and primary data cleaning were performed by TL. Final data cleaning was performed by AOB, GTL, and TL. AOB performed the mapping of proprietary HIE CT codes to LOINC, as described in the Methods section. FLT mapped a random sample of 2000 proprietary HIE CT codes to LOINC for the kappa inter-rater agreement analysis. Descriptive statistical analysis was performed by PP, AOB, and GTL. Study results were iteratively reviewed by all authors. PP and AOB were co-primary drafters of the manuscript (50% each). All authors reviewed, revised, and approved the final draft of the manuscript. Additional expertise on LOINC and the LOINC/RSNA terminologies was provided by DJV.

Conflict of interest statement. DJV: Activities related to the present article: President of Blue Sky Premise, LLC; grants from the National Institute of Biomedical Imaging and Bioengineering and the National Library of Medicine. Activities not related to the present article: grants from the U.S. Food and Drug Administration, the National Center for Advancing Translational Sciences, bioMérieux, the Centers for Medicare & Medicaid Services, and the National Institute of Diabetes and Digestive and Kidney Diseases for development, maintenance, and distribution of LOINC. Other activities: disclosed no relevant relationships. The other co-authors have no competing interests to disclose.

REFERENCES

1. Shapiro JS, Mostashari F, Hripcsak G, et al. Using health information exchange to improve public health. Am J Public Health 2011;101(4):616-23.
2. Slovis BH, Lowry T, Delman BN, et al. Patient crossover and potentially avoidable repeat computed tomography exams across a health information exchange. J Am Med Inform Assoc 2017;24(1):30-8.
3. Wang KC, Patel JB, Vyas B, et al. Use of radiology procedure codes in health care: the need for standardization and structure. Radiographics 2017;37(4):1099-110.
4. Wang KC. Standard lexicons, coding systems and ontologies for interoperability and semantic computation in imaging. J Digit Imaging 2018; doi:10.1007/s10278-018-0069-8.
5. Langlotz CP. RadLex: a new method for indexing online educational materials. Radiographics 2006;26(6):1595-7.
6. Vreeman DJ, McDonald CJ, Huff SM. LOINC®: a universal catalogue of individual clinical observations and uniform representation of enumerated collections. Int J Funct Inform Personal Med 2010;3(4):273-91.
7. Loinc.org. Regenstrief and the RSNA are working together to unify radiology procedures in LOINC and RadLex. 2016. https://loinc.org/collaboration/rsna; archived at http://www.webcitation.org/6sdhWWhYq. Accessed March 23, 2018.
8. Beitia AO, Kuperman G, Delman BN, Shapiro JS. Assessing the performance of LOINC and RadLex for coverage of CT scans across three sites in a health information exchange. AMIA Annu Symp Proc 2013;2013:94-102.
9. Beitia AO, Lowry TL, Vreeman DJ, et al. Constructing diagnostic CT exam lists for sites across an HIE. Poster presented at: AMIA Annual Symposium; November 12-16, 2016; Chicago, IL.
10. Vreeman DJ, Abhyankar S, Wang KC, et al. The LOINC RSNA radiology playbook: a unified terminology for radiology procedures. J Am Med Inform Assoc 2018;25(7):885-93.
11. McDonald C, Huff S, Deckard J, et al. Logical Observation Identifiers Names and Codes (LOINC®) Users' Guide. Indianapolis, IN: Regenstrief Institute; 2017. Annex: RadLex-LOINC radiology playbook user guide, pages 1-18. https://loinc.org/download/loinc-users-guide/. Accessed April 7, 2018.
12. Beitia AO, Lowry TL, Vreeman DJ, et al. Mapping of HIE CT codes to LOINC: analysis of inconclusive codes and quantification of mapping times. Poster presented at: AMIA Informatics Summit; March 12-15, 2018; San Francisco, CA.
13. Loinc.org. Submitting new term requests. 2018. https://loinc.org/submissions/new-terms/. Accessed May 23, 2018.
14. Loinc.org. LOINC codes in development for next release. 2018. https://loinc.org/prerelease/. Accessed May 23, 2018.
15. Cornet R, de Keizer NF, Abu-Hanna A. A framework for characterizing terminological systems. Methods Inf Med 2006;45(3):253-66.
16. Vreeman DJ, Finnell JT, Overhage JM. A rationale for parsimonious laboratory term mapping by frequency. AMIA Annu Symp Proc 2007;2007:771-5.
17. Vreeman DJ, McDonald CJ. Automated mapping of local radiology terms to LOINC. AMIA Annu Symp Proc 2005:769-73.
18. Vreeman DJ, McDonald CJ. A comparison of intelligent mapper and document similarity scores for mapping local radiology terms to LOINC. AMIA Annu Symp Proc 2006:809-13.
19. Vreeman DJ, Stark M, Tomashefski GL, Phillips DR, Dexter PR. Embracing change in a health information exchange. AMIA Annu Symp Proc 2008:768-72.
20. International Early Lung Cancer Action Program (I-ELCAP). 2013. http://www.ielcap.org/. Accessed April 7, 2018.

© The Author(s) 2018. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).

Mapping of HIE CT terms to LOINC®: analysis of content-dependent coverage and coverage improvement through new term creation

Loading next page...
 
/lp/oxford-university-press/mapping-of-hie-ct-terms-to-loinc-analysis-of-content-dependent-dMXSJNZhgQ

References (13)

Publisher
Oxford University Press
Copyright
© The Author(s) 2018. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com
ISSN
1067-5027
eISSN
1527-974X
DOI
10.1093/jamia/ocy135
Publisher site
See Article on Publisher Site

Abstract

Abstract Objective We describe and evaluate the mapping of computerized tomography (CT) terms from 40 hospitals participating in a health information exchange (HIE) to a standard terminology. Methods Proprietary CT exam terms and corresponding exam frequency data were obtained from 40 participant HIE sites that transmitted radiology data to the HIE from January 2013 through October 2015. These terms were mapped to the Logical Observations Identifiers Names and Codes (LOINC®) terminology using the Regenstrief LOINC mapping assistant (RELMA) beginning in January 2016. Terms without initial LOINC match were submitted to LOINC as new term requests on an ongoing basis. After new LOINC terms were created, proprietary terms without an initial match were reviewed and mapped to these new LOINC terms where appropriate. Content type and token coverage were calculated for the LOINC version at the time of initial mapping (v2.54) and for the most recently released version at the time of our analysis (v2.63). Descriptive analysis was performed to assess for significant differences in content-dependent coverage between the 2 versions. Results LOINC’s content type and token coverages of HIE CT exam terms for version 2.54 were 83% and 95%, respectively. Two-hundred-fifteen new LOINC CT terms were created in the interval between the releases of version 2.54 and 2.63, and content type and token coverages, respectively, increased to 93% and 99% (P < .001). Conclusion LOINC’s content type coverage of proprietary CT terms across 40 HIE sites was 83% but improved significantly to 93% following new term creation. data integration and standardization, clinical decision support systems, computed tomography, health information exchange, radiation dosage, alerting systems INTRODUCTION Background and significance Over the last decade, information technology has made possible secure sharing of clinical data through health information exchanges (HIEs).1 This enables clinicians to obtain information about patients seeking care across multiple organizations, not only aiding clinical decision making but also avoiding repeat examinations that might be unnecessary. Previous work found that approximately 3% of patients in a regional health network in New York City had computed tomography (CT) exams performed at more than one location. Further, more than 50% of these patients had the exact same CT at more than one site.2 These crossover CT scans are potentially avoidable, and motivate a prior CT alerting system that would notify ordering physicians of previous CTs from other sites that do not otherwise share clinical data. Such a system does not currently exist, in part because most sites performing CTs use proprietary, site-specific radiology procedure terms that impede interoperability. Mapping these local terms to standardized terms is a foundational first step to a prior CT alert system.3,4 Until 2015, there were 2 predominate terminology standards for radiology orderables: (1) the RadLex™ Playbook (http://playbook.radlex.org) based on the Radiological Society of North America RadLex lexicon5 and (2) the Logical Observation Identifiers Names and Codes (LOINC®) standard (https://loinc.org), which was developed by the Regenstrief Institute.6 In December 2015, these 2 radiology terminology standards were harmonized into a single LOINC/RSNA Radiology Playbook standard.7 The unified terminology has become the preferred standard for radiology procedures. 
We previously evaluated the performance of LOINC and RadLex separately to characterize CT content coverage from 3 sites in an HIE.8 These standard terminologies together accounted for nearly 99% of the exams performed. Objective The objectives of this current study are: (1) to assess content type and content token (content-dependent) coverage of the LOINC/RSNA Radiology Playbook and mapping reliability for CTs across the entire Healthix HIE (a large HIE serving the New York metropolitan area), (2) to describe the process for requesting new LOINC terms, and (3) to analyze the specific contribution of newly created LOINC terms to CT exam term coverage. METHODS Data sources and retrieval Through the following method, comprehensive lists of CT exam codes and descriptive names were obtained for each of the 40 Healthix sites that transmitted radiology data from January 1, 2013, through October 31, 2015.9 Each site’s database of radiology exams performed over the study period was queried for unique radiology exam names and codes. To identify CTs, we extracted exam names that matched filters with strings likely to indicate CT (eg, CT, CAT, Computed Tomography). We then applied filters to exclude exams that contained the string “CT” but were non-CT exams (eg, Nuclear Medicine Octreotide Study). We further categorized these exams into diagnostic, procedural, and administrative/post-processing CTs with additional filters. Procedural CTs were identified through string searches such as “guidance” and “biopsy.” Administrative and post-processing CTs were identified with strings such as “3D,” “consult,” “outside,” and “multiplanar.” CT exams not matching these filters were placed in the diagnostic category as a default. The resultant list was manually reviewed by 1 author (AOB), a board-certified radiologist and informatician, to ensure correct categorization and that no residual non-CT exams were present. Standard terminologies The LOINC/RSNA Radiology Playbook, initially released in December 2015 as part of LOINC version 2.54, was developed through collaboration between Regenstrief Institute and the RSNA, and with support from the National Institute of Biomedical Imaging and Bioengineering (NIBIB).7,10 This no-cost product combines and unifies useful aspects of LOINC Radiology and the RadLex Playbook. The unified terminology provides a single standard for representing radiology procedures. In the new model, radiology procedures are identified with LOINC codes and given a structured name with attributes that are linked both to LOINC parts and concepts in RadLex clinical terms. It also contains mappings of previously used RadLex Playbook Identifiers (RPIDs), which are unique codes used in the RadLex Playbook to identify specific imaging studies, with equivalent LOINC codes. For example, for the imaging study “CT Head w/o contrast IV,” the LOINC/RSNA Playbook displays both the LOINC Code (30799-1) and RPID (RPID22). The initial release, published with LOINC version 2.54 (December 2015), was limited to CT procedures. The LOINC/RSNA Radiology Playbook was incrementally updated with twice-yearly LOINC releases that occur in June and December. Overall, there were 5 major LOINC releases during the study period. These releases expanded the unified content coverage to include MRI, x-ray, ultrasound, nuclear medicine, mammography, and other imaging modalities.11 The most recent release, version 2.63 (December 2017), represents the culmination of the harmonization process. 
Mapping

Since LOINC codes serve as the primary identifiers in the LOINC/RSNA Radiology Playbook, we elected to map Healthix site CT codes to LOINC codes. From January 2016 to July 2017, we mapped proprietary CT exam terms to LOINC on a site-by-site basis. One author (AOB) mapped all terms, and a randomly selected subset of 2000 local Healthix CT terms was also mapped by another author (FLT) to evaluate inter-rater reliability. All mapping was performed on a term-by-term basis using the RELMA program.

Categorization of terms

CT exams at each site were categorized as (1) diagnostic, (2) procedural, or (3) post-processing/administrative by the filtering method described earlier. In addition, we assigned each term to 1 of 2 categories reflecting its ease of mapping to LOINC using RELMA: straightforward or inconclusive. Straightforward terms are those that could be mapped to LOINC without need for any further investigation. For the purposes of this analysis, an inconclusive term is defined as one that cannot be mapped to a LOINC term using RELMA based on its descriptive exam name alone. These inconclusive terms are a heterogeneous group that includes terms with straightforward exam descriptions but without a clear LOINC match, as well as terms with ambiguous exam descriptions that require further investigation. These ambiguous terms can be further subdivided into those with and those without a LOINC match following further investigation. Prior work has shown that inconclusive local terms require significantly greater resources and effort to map.12 Therefore, we tracked the prevalence of straightforward and inconclusive terms, including the ambiguous subcategory of inconclusive terms. To identify distinguishing characteristics of straightforward and ambiguous inconclusive terms, we also analyzed semantic and syntactic differences in a random sample of 100 terms from each category. We chose to contrast straightforward terms with the ambiguous subcategory of inconclusive codes, rather than the entire category of inconclusive codes (which would include terms with straightforward exam name attributes but without a LOINC match), because we believed this contrast would better illuminate best practices for local naming conventions.

Requesting new LOINC terms

For exams without a matching LOINC term, we created temporary “placeholder” terms in our database and submitted periodic requests to the Regenstrief Institute for the creation of new LOINC terms. We followed the standard request procedure described on the LOINC website.13 Because we were using the RELMA program for mapping, we elected to use its built-in features for managing and uploading our LOINC requests. Typically, we sent requests containing a batch of approximately 50 new terms. Representatives from Regenstrief typically contacted us with any questions 4 to 6 weeks following submission. After 2 to 3 months, they would send a summary response spreadsheet, which included the newly created LOINC terms and those for which LOINC decided not to create a new term, along with the rationale. Per LOINC’s policy, newly created terms that have passed quality assurance are published on the LOINC website in advance of the next release.14 As new LOINC terms were created, we replaced the temporary placeholder terms in our database, and the mappings at all sites were updated.
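The bookkeeping described in this subsection, in which a local term carries a temporary placeholder until Regenstrief either publishes a new LOINC code or declines the request, can be illustrated with a short sketch. This is a hypothetical illustration, not the authors' actual database schema; the class, field, and placeholder names are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class TermMapping:
    """Mapping state for one proprietary (site-specific) CT exam term."""
    site_id: str
    local_code: str
    local_name: str
    loinc_code: Optional[str] = None    # populated once a LOINC match exists
    placeholder: Optional[str] = None   # temporary code while a request is pending

# Example: a local exam with no LOINC match receives a placeholder and is queued
# for submission to Regenstrief as part of a new-term request batch.
mappings: Dict[str, TermMapping] = {
    "SITE01|CT123": TermMapping("SITE01", "CT123",
                                "CT THORACIC AND LUMBAR SPINE WO CONTRAST",
                                placeholder="TMP-0001"),
}

def apply_new_release(published: Dict[str, str]) -> None:
    """Swap placeholders for real codes once a LOINC release contains them.

    `published` maps placeholder codes to newly published LOINC codes.
    """
    for m in mappings.values():
        if m.placeholder and m.placeholder in published:
            m.loinc_code = published[m.placeholder]
            m.placeholder = None

# The Results note that LOINC 83310-3 was later created for this exam.
apply_new_release({"TMP-0001": "83310-3"})
```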
Coverage analysis

We used an established approach for evaluating the content of standard terminologies based on the following content-dependent metrics:15

Concept type coverage: the percentage of unique test names (types) from a site that map to the standardized terminology. For example, if 85 of 100 exam names at an institution map to LOINC terms, the concept type coverage would be 85%.

Concept token coverage: the percentage of actual exams performed (tokens) that map to the standardized terminology. Similarly, if 95 000 of an institution’s 100 000 annual exams performed map to LOINC terms, the concept token coverage would be 95%.

The distinction between these metrics is important. For example, although an institution may have low concept type coverage (eg, only 40% to 50% of its exam terms map to LOINC), it may still possess high concept token coverage (> 90% of exams performed at that institution map to LOINC). Such a case indicates that a few terms can account for most exams performed. High concept token coverage is particularly important for a future HIE-based prior CT alerting system. Concept type and token coverages were calculated for each site and for all 40 Healthix sites combined. For each site and for all Healthix sites combined, we also performed content analysis based on ease of mapping (all terms vs inconclusive terms) and reason for exam (diagnostic, procedural, and post-processing terms). To assess LOINC’s incremental content-dependent coverage, we analyzed concept type and token coverage prior to the creation of new LOINC terms based on our requests (LOINC version 2.54, December 2015) and after (LOINC version 2.63, December 2017). To characterize LOINC’s coverage in the context of our intended use case, we calculated the number of unique LOINC CT terms and temporary placeholder terms (for instances in which there was no appropriate LOINC term) that together account for the top 80%, 90%, 95%, and 99% of the total CT exams performed across the entire Healthix HIE. We then determined the number of those terms for which there was an appropriate LOINC term. We performed this analysis for LOINC versions before and after inclusion of new LOINC terms (versions 2.54 vs 2.63) to assess for incremental coverage. The rationale for this analysis was that even if not all local terms could be mapped to LOINC, the coverage could still be sufficient for practical applications if the mapped terms covered a high percentage of the actual exams performed. In our future repeat CT alerting system, we anticipate that frequently performed exams would be among those most likely to be repeated across institutions. All statistical analyses were performed using the R Statistical Computing Package (https://www.r-project.org, Vienna, Austria). Median content type and token coverages across all sites were compared for LOINC versions 2.54 vs 2.63 using the Wilcoxon signed-rank test. Composite content type and token coverages for all sites combined, as well as LOINC’s coverage of terms that accounted for the top 80%, 90%, 95%, and 99% of exams performed, were compared for LOINC versions 2.54 vs 2.63 using the McNemar test. The Mount Sinai Institutional Review Board reviewed this study protocol and determined it to be “not human research” and exempt from formal IRB review.
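To make the two coverage metrics and the paired version comparison concrete, the following sketch shows one way they could be computed. The data structures and numbers are illustrative assumptions, not the study data, and the sketch uses Python with SciPy rather than the authors' R code.

```python
from typing import List, Tuple
from scipy.stats import wilcoxon  # paired, nonparametric test named in the Methods

# Each record: (local exam term, number of exams performed, has a LOINC match?)
SiteTerms = List[Tuple[str, int, bool]]

def type_coverage(terms: SiteTerms) -> float:
    """Fraction of unique exam names (types) with a LOINC match."""
    return sum(mapped for _, _, mapped in terms) / len(terms)

def token_coverage(terms: SiteTerms) -> float:
    """Fraction of exams performed (tokens) whose term has a LOINC match."""
    total = sum(count for _, count, _ in terms)
    return sum(count for _, count, mapped in terms if mapped) / total

# Illustrative per-site type coverages under two LOINC versions (made-up values).
coverage_v254 = [0.80, 0.85, 0.78, 0.90, 0.88]
coverage_v263 = [0.92, 0.95, 0.90, 0.97, 0.96]

# Paired comparison across the same sites, in the spirit of the Wilcoxon signed-rank test.
stat, p_value = wilcoxon(coverage_v254, coverage_v263)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p_value:.4f}")
```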
RESULTS

Coverage analysis

There were 10 539 CT exam terms from 40 sites across 23 health systems. Of these, 9116 (86%) were diagnostic, 1261 (12%) were procedural, and 162 (< 2%) were post-processing/administrative. Using LOINC version 2.54, 23% (2462/10 539) of all exam terms were categorized as inconclusive. Inconclusive terms comprised 18% (1689/9116), 53% (664/1261), and 67% (109/162) of diagnostic, procedural, and administrative/post-processing code types, respectively. A total of 215 new LOINC CT terms were created between versions 2.54 and 2.63, representing a 27% increase. Of these, 208 terms were created in response to our submissions. As new LOINC terms were created, the number of inconclusive terms decreased. Our sample included 963 exam terms with straightforward exam name attributes but for which there was no matching LOINC term at the time of mapping with version 2.54; these terms were initially classified as inconclusive. Of these, 858 (89%) could be mapped to LOINC following new term creation and were therefore re-categorized as straightforward when mapping to LOINC version 2.63. An example of such a term is “CT Thoracic and Lumbar Spine WO contrast,” which did not have a single clear matching LOINC term in version 2.54 but had a matching LOINC term (83310-3) at the time we re-assessed content coverage for version 2.63. Based on version 2.63 mapping, inconclusive terms comprised 15% (1604/10 539), 9% (833/9116), 52% (662/1261), and 67% (109/162) of total, diagnostic, procedural, and administrative/post-processing terms, respectively.

Figure 1 depicts coverage of content type (1A and 1B) and concept token (1C and 1D) for all code types across all sites using LOINC versions 2.54 (2015) vs 2.63 (2017). Histogram 1A demonstrates improved content type coverage at many HIE sites between versions, with median rates increasing from 88.6% [IQR 78.1–92.2%] to 96.1% [IQR 90.7–97.9%] (Figure 1B) (P < .001). The median concept token coverage in 2015 was already high at 97.9% [IQR 94.7–99.2%], but this rate also significantly improved to 99.6% [IQR 99.0–99.9%] (Figure 1D) (P < .001).

Figure 1. Content type and token analysis for all terms. Histograms of type (A) and token (C) coverage depict an increase between LOINC versions 2.54 and 2.63 (note the difference in coverage percentage scale in C). Box-and-whisker plots (B and D) between 2015 and 2017 demonstrate a statistically significant increase in median type coverage from 88.56% to 96.05% and median token coverage from 97.92% to 99.60% (P < .001).

Figure 2 depicts the content type and token coverage for inconclusive terms across all Healthix sites using LOINC versions 2.54 vs 2.63. The median concept type coverage for inconclusive terms in LOINC version 2.54 was 26.0% [IQR 13.5–40.0%]. New LOINC term creation significantly increased the median concept type coverage for inconclusive terms in LOINC version 2.63 to 57.1% [IQR 45.8–66.7%] (P < .001).
Terms that remained inconclusive when mapping to LOINC version 2.63 included those with straightforward attributes for which we were awaiting new term creation from LOINC (or a response as to why a new LOINC term would not be created), as well as terms with ambiguous descriptive names, some of which had matching LOINC terms and some of which did not. The improvement in concept type coverage of inconclusive terms from versions 2.54 to 2.63 is related to improved coverage of ambiguous terms following new LOINC term creation. For example, the ambiguous term “CT CRANIAL/TEMPORAL W/O CONTRAST,” which required report review to ascertain that it represented a CT of the head and temporal bones without contrast, did not have a matching LOINC term in version 2.54 but had a matching LOINC term (83302-0) by the time of re-assessment with version 2.63. Additionally, the reduction in the overall number of inconclusive codes between versions 2.54 and 2.63, due to fewer terms with straightforward attributes but without a LOINC match, helped improve inconclusive concept type coverage.

Figure 2. Content type and token analysis for inconclusive terms. Histograms of type (A) and token (C) coverage depict an increase in inconclusive terms mapped. Box-and-whisker plots of type and token coverage (B and D) between versions 2.54 and 2.63 demonstrate a statistically significant increase (P < .001).

A composite summary of content type and content token coverage of LOINC versions 2.54 (2015) and 2.63 (2017) for all code types within the Healthix HIE is presented in Table 1.

Table 1. LOINC content type and token coverages, versions 2.54 vs 2.63. Rows give term type based on ease of mapping (total vs inconclusive); columns give term type based on exam purpose.

                                 Total*   Diagnostic*   Procedural   Admin/Post-processing*
Concept type coverage
  v2.54   Total                   0.83       0.86          0.64            0.34
  v2.54   Inconclusive            0.26       0.25          0.32            0.02
  v2.63   Total                   0.93       0.98          0.65            0.43
  v2.63   Inconclusive            0.56       0.81          0.33            0.15
Concept token coverage
  v2.54   Total                   0.95       0.94          0.72            0.67
  v2.54   Inconclusive            0.26       0.27          0.37            0.03
  v2.63   Total                   0.99       0.97          0.72            0.69
  v2.63   Inconclusive            0.73       0.88          0.37            0.08

* In these categories, there was a significant improvement in content type and content token coverage for all code types and for inconclusive term types from LOINC versions 2.54 to 2.63 (P < .001, McNemar test).
For all code types combined, as well as for diagnostic and administrative/post-processing terms, there was a significant improvement in both content type and token coverage between versions 2.54 and 2.63; this held both for all terms and for the inconclusive subsets (P < .001, McNemar test). There was no significant change in content type or token coverage from versions 2.54 to 2.63 for procedural terms, including their inconclusive subsets. Analysis of LOINC versions 2.54 and 2.63 coverage of the unique exam terms that account for the top 80%, 90%, 95%, and 99% of the exams performed across all sites combined is displayed in Table 2.

Table 2. Analysis of LOINC versions 2.54 and 2.63 coverage for CT terms that account for the top 80%, 90%, 95%, and 99% of exams performed across Healthix HIE sites combined

Top X% (by frequency) of total CT exams performed across the entire Healthix HIE        80%    90%    95%    99%
Number of LOINC and placeholder terms that together account for the top X% of exams     15     36     71    200
Number of those terms with a LOINC term in version 2.54                                  14     33     62    151*
Number of those terms with a LOINC term in version 2.63                                  15     36     68    179*

* Significant improvement in LOINC term coverage between versions 2.54 and 2.63 for exams that account for the top 99% of exams performed (P < .001, McNemar test).
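The cutoffs reported in Table 2 can be derived from a ranked exam-frequency distribution. The sketch below (using a made-up frequency dictionary rather than the Healthix data) shows one way to count how many of the most frequent terms are needed to cover a given share of exam volume.

```python
from typing import Dict, List

def terms_covering(freq: Dict[str, int], share: float) -> List[str]:
    """Return the smallest set of most-frequent terms whose combined exam
    counts reach `share` (e.g., 0.80) of all exams performed."""
    total = sum(freq.values())
    covered, selected = 0, []
    for term, count in sorted(freq.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(term)
        covered += count
        if covered / total >= share:
            break
    return selected

# Illustrative frequencies (not Healthix data).
exam_counts = {"CT Head WO contrast": 5000, "CT Abd+Pelvis W contrast": 4000,
               "CT Chest W contrast": 2500, "CT Cspine WO contrast": 1500,
               "CT guided biopsy liver": 300, "CT sinus limited": 200}

for pct in (0.80, 0.90, 0.95, 0.99):
    terms = terms_covering(exam_counts, pct)
    print(f"Top {pct:.0%} of exams covered by {len(terms)} terms")
```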
Table 2 demonstrates that the top 80% of all CT exams performed throughout the HIE is represented by 15 standard exam terms (LOINC terms or, where there was no LOINC match, temporary placeholder terms). Similarly, the top 90%, 95%, and 99% of all CT exams performed are accounted for by 36, 71, and 200 terms, respectively. LOINC version 2.54 had terms for 14 of the 15 exams that accounted for the top 80% of exams performed. The lone exam without a matching LOINC term was “PET+CT Skull base to mid-thigh- W 18F-FDG IV.” By LOINC version 2.63, a new term had been created for this exam (81554-8), enabling all 15 terms accounting for the top 80% of exams performed to be mapped to LOINC. Similar improvements in LOINC coverage for terms accounting for the top 90%, 95%, and 99% of exams performed occurred between versions 2.54 and 2.63. The improvement in LOINC coverage for terms that accounted for the top 99% of exams performed was significant (P < .001, McNemar test). Similar to Vreeman et al.’s16 analysis of laboratory test volume in an HIE, we found a highly skewed distribution of exam frequency: a relatively small number of frequently performed exam types accounted for the vast majority of exam volume across all sites (Figure 3).

Figure 3. Cumulative distribution curve showing the most frequently performed exams across sites over a 3-year period, in descending order. (Due to space constraints, the horizontal axis was truncated to the top 30 most frequently performed exam types.)

Ambiguous term analysis

Straightforward exam names typically contained an unambiguous specification of imaging modality, anatomic region, presence of contrast, and timing and route of contrast administration (eg, CT neck WO&W contrast IV).
For procedural terms, specific action (eg, biopsy) and anatomic object (eg, liver) are also associated with straightforward naming. In contrast, ambiguous exam names are often missing these distinctions. Ambiguous exam names from 100 randomly selected terms were examined and characterized by grouping them into categories, which are listed with relative frequencies in Table 3. The categories are non-mutually exclusive, so the number of exams totals more than 100.

Table 3. Relative frequencies of ambiguous exam categories among 100 randomly chosen ambiguous exam terms

1. Exam name lacking enough detail to map without further investigation. Example: CT Abdomen/Pelvis for Abdominal Aortic Aneurysm (uncertain from the exam name whether this maps to CT Abdomen/Pelvis with IV contrast or to CT Abdomen/Pelvis without and with IV contrast). Frequency: 31/100.
2. Anatomic focus in the exam descriptive name is non-specific or ambiguous. Example: CT orbit/sella/ear. Frequency: 29/100.
3. Exam name contains a term with no meaning outside a specific facility. Example: Code T Head CT. Frequency: 39/100.
4. Exam name attributes outside the LOINC model scope, including consultative and administrative exam names. Examples: CT outside read (consultative study); CT ambulatory (administrative study). Frequency: 12/100.
5. Exam name groups multiple imaging-guided procedures under one term. Example: CT guided needle placement, biopsy, or aspiration. Frequency: 5/100.
6. Exam name includes a highly specific reason for exam. Example: CT Angiography of Chest/Abdomen/Pelvis for transcatheter aortic valve replacement (TAVR). Frequency: 4/100.
7. Exam name contains a non-standard or extremely specific imaging location and is performed with low frequency (< 10 exams per 3-year period), such that it can be argued the exam should be mapped to a less granular standard term. Examples: CT Left Abdomen/Pelvis mapped to CT Abdomen/Pelvis (44115-4); CT Left atrial appendage with IV contrast mapped to CT Heart W IV contrast (79089-9). Frequency: 4/100.
8. Exam name contains a very specific maneuver or, for a CT-guided procedure, a very specific approach, and is performed with low frequency, such that it can be argued mapping should be to a less granular term. Examples: CT Left shoulder with anteversion mapped to CT Left shoulder (36064-4); CT guided pelvic abscess drainage with transrectal approach mapped to CT Guidance for drainage of abscess of Pelvis (42286-5). Frequency: 2/100.
9. Exam name contains multiple imaged regions and a single anatomic focus that is limited to only one of the regions imaged. Example: CT Abdomen/Pelvis Liver W IV contrast. Frequency: 2/100.
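Several of the categories in Table 3 amount to missing or ambiguous name attributes (modality, anatomic region, contrast status). As an illustration only (the keyword lists and function below are assumptions for this sketch, not part of the study's method), a simple screen for such gaps in local exam names might look like this:

```python
import re
from typing import List

# Very rough keyword sets for the attributes a "straightforward" CT name carries.
REGION_WORDS = ("head", "brain", "chest", "abdomen", "pelvis", "spine",
                "neck", "sinus", "extremity", "shoulder", "orbit")
CONTRAST_WORDS = ("w contrast", "wo contrast", "w/o contrast", "without contrast",
                  "with contrast", "w&wo", "wo&w")

def missing_attributes(exam_name: str) -> List[str]:
    """Return the attributes that cannot be found in a local CT exam name."""
    name = exam_name.lower()
    gaps = []
    if not re.search(r"\b(ct|computed tomography|cat scan)\b", name):
        gaps.append("modality")
    if not any(word in name for word in REGION_WORDS):
        gaps.append("anatomic region")
    if not any(word in name for word in CONTRAST_WORDS):
        gaps.append("contrast status")
    return gaps

print(missing_attributes("CT neck WO&W contrast IV"))   # [] -> straightforward
print(missing_attributes("Code T Head CT"))             # ['contrast status']
```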
Inter-rater agreement analysis

Overall inter-rater agreement in mapping a sample of 2000 local terms to LOINC, measured by Cohen’s kappa, was 0.89, indicating strong agreement in the selection of matching LOINC terms. For categorization of terms into straightforward vs inconclusive categories, kappa inter-rater agreement was 0.51, indicating moderate agreement.

DISCUSSION

Overall, we found that LOINC had good coverage of the CT exams performed in our HIE, with improvement as new terms were created during the study period. LOINC version 2.54 had high composite content type and content token coverages of 83% and 95%, respectively, which significantly increased to 93% and 99%, respectively, in version 2.63 after the creation of 215 new LOINC CT terms. For inconclusive terms, the improvements were more dramatic: type coverage rose from 26% to 56%, and token coverage rose from 26% to 73%. Our findings are consistent with earlier studies that demonstrated LOINC’s high coverage for radiology procedures from institutions participating in an HIE.17,18 At the time of this analysis, decisions from LOINC regarding 197 newly requested terms were pending. We expect the concept type and token coverages to improve further following the creation of additional LOINC terms. As new imaging protocols are created to address clinical questions, institutional exam order lists continue to grow. Because of this ongoing evolution, and because some local terms (eg, administrative ones) are not within LOINC’s scope, LOINC’s content type and token coverage for a large HIE will likely never reach 100%.
Yet we have shown that LOINC’s good content coverage can continue to be expanded because of its open development model and its straightforward process for end-user new term requests through RELMA. Our results demonstrate significantly improved content-dependent coverage through this method. As the steward of LOINC, Regenstrief values end-user feedback and was highly receptive to new term requests. The process involves careful scrutiny of each request to ensure model and scope fit, avoid duplication, and assess applicability to the broader healthcare community. Through this model and with an active user base, LOINC is positioned for ongoing development capable of maintaining steady content-dependent coverage. In this study, LOINC’s content type and token coverages for procedural CTs did not significantly improve from versions 2.54 to 2.63. The lack of improvement was due in part to our method of prioritizing the mapping of local terms. We first mapped diagnostic terms at each HIE site because they are more likely to be repeated. Thus, our initial submissions for new LOINC terms primarily included diagnostic terms; requests for new procedural terms were submitted with later batches. As these requests are processed by Regenstrief, additional terms will be created and included in subsequent LOINC releases, at which point we expect content-dependent coverage for procedural terms to improve. It should also be noted that there was lower content-dependent coverage for administrative and post-processing terms compared with diagnostic and procedural terms. This is primarily because some administrative-type terms are not in LOINC’s scope. On occasion, Regenstrief declines to create a new LOINC term from user requests. In our study, Regenstrief suggested mapping to an existing LOINC term in some cases rather than creating a new one, often because LOINC chooses not to model to the same granular specifications as some local terms employ. For example, for the proposed term “CT male pelvis WO contrast,” Regenstrief suggested mapping to the existing LOINC term “CT pelvis WO contrast” (30615-9) to avoid separate terms for male vs female pelvis. In these instances, we have thus far elected to accede to Regenstrief’s mapping recommendations. In other cases, Regenstrief neither created a new term nor suggested an existing one because the requested term was too non-specific or out of LOINC’s scope. For example, Regenstrief declined to create a term for “CT exam of Head, Face, or Neck region” because such broad anatomic coverage was too non-specific and could specify an exam of any one of three regions. Understandably, LOINC’s modeling policy preserves anatomic specificity and also offers an “unspecified body region” variant, but it does not model terms with either/or regions. For these instances, we have elected to convert the temporary placeholder term into a permanent HIE term, so that these types of exams can be captured in our prior CT alerting system.

FUTURE CONSIDERATIONS

We hope to expand our alerting system beyond CT to include all imaging modalities (eg, MRI, x-ray), which will necessitate mapping local terms for these modalities to LOINC. Once that mapping is complete, we plan to conduct a similar analysis of LOINC’s concept coverage for other imaging modality terms, including an analysis of coverage improvement following new term creation.
Following initial implementation of the CT mappings, we expect local sources to continually evolve their terms by adding terms for new imaging studies and modifying codes for existing terms due to health care system consolidation and implementation of new electronic medical record (EMR) or radiology information systems (RIS). Vreeman et al.19 reported that 2 years after mapping laboratory and radiology terms to LOINC in the Indiana Network for Patient Care, half as many new local terms were added as in the initial implementation. Successful implementation of our mapping will require a system, similar to that described by Vreeman et al.,19 that alerts the HIE to new local terms, in which an exception browser captures unmapped terms and places them in a queue for review. The present study contained a limited expert review, based on a random sample of 100 terms, to characterize ambiguous terms into subcategories. While beyond the scope of this study, a more comprehensive analysis may find additional characteristics and features that could help inform strategies for mapping these terms more efficiently.

LIMITATIONS

We defined inconclusive terms as those that cannot be mapped to LOINC based solely on the exam name. By this definition, terms with relatively straightforward exam names but without a current LOINC match were included along with terms that have ambiguous descriptive names. We considered creating a separate category for these relatively straightforward terms but decided to include them because we judged the single inconclusive category to be less subjective and less prone to variance. Mapping local terms to standard terminologies is a complex process that involves expert judgment. In this study we defined ambiguous terms as a subset of inconclusive terms with exam descriptions that required further investigation before mapping. Applying this definition is somewhat dependent on the mapper’s prior experience. For example, when presented with “CT Chest IELCAP wo contrast,” those familiar with the International Early Lung Cancer Action Program (I-ELCAP) would recognize this exam as a low-dose CT of the chest without contrast for lung cancer screening, likely map it to the LOINC term 79086-5 (CT Chest for screening WO contrast), and categorize the exam as straightforward.20 Someone unfamiliar with the I-ELCAP study or this acronym might categorize the exam as ambiguous, requiring research regarding I-ELCAP prior to mapping. Another mapper may categorize the exam as ambiguous because LOINC does not include a model for “low dose” in its terms. (LOINC does not separate low-dose exams, as what is considered low dose presently may be considered standard dose in the future, and the definition of “low dose” may differ with variability in radiation dosing techniques across sites.) Yet another person may argue that “screening” implies “low dose.” Fortunately, our kappa inter-rater agreement analysis demonstrated moderate inter-rater agreement (κ = 0.51) for categorization of terms into straightforward and inconclusive categories, thereby showing that mappers with a similar background are likely to agree on term categorization.

CONCLUSION

LOINC provided excellent content-dependent coverage of CT exam terms across an HIE. In particular, LOINC’s high initial content token coverage (95%) and high coverage of frequently performed exams demonstrate that it can readily support our use case for an HIE-wide prior CT alerting system.
Moreover, LOINC’s relatively straightforward process for submitting new term proposals allows its content-dependent coverage to be improved over time. Through this process, LOINC can evolve in step with the radiology discipline and maintain high content-dependent coverage by developing new terms.

FUNDING

This project was supported by grant number 1R01LM012196-01 from the National Library of Medicine. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the U.S. Department of Health and Human Services or any of its agencies.

CONTRIBUTORS

All authors [Paul Peng (PP), Anton Oscar Beitia (AOB), Daniel J. Vreeman (DJV), George T. Loo (GTL), Bradley N. Delman (BND), Frederick L. Thum (FLT), Tina Lowry (TL), and Jason S. Shapiro (JSS)] participated in conceptual design and experimental approach. JSS was primarily responsible for the original conceptual design of the study. Data acquisition, data de-identification, and primary data cleaning were performed by TL. Final data cleaning was performed by AOB, GTL, and TL. AOB performed the mapping of proprietary HIE CT codes to LOINC, as described in the Methods section of the manuscript. FLT mapped a random sample of 2000 proprietary HIE CT codes to LOINC for the kappa inter-rater agreement analysis. Descriptive statistical analysis was performed by PP, AOB, and GTL. Study results were iteratively reviewed by all authors. Co-primary drafters of the manuscript were PP and AOB (50% each). All authors reviewed, made revisions, and approved the final draft of the manuscript. Additional expertise on LOINC and LOINC/RSNA terminologies was provided by DJV.

Conflict of interest statement. DJV: Activities related to the present article: President of Blue Sky Premise, LLC; grants from the National Institute of Biomedical Imaging and Bioengineering and the National Library of Medicine. Activities not related to the present article: grants from the U.S. Food and Drug Administration, National Center for Advancing Translational Sciences, bioMérieux, Centers for Medicare & Medicaid Services, and National Institute of Diabetes & Digestive Disorders for development, maintenance, and distribution of LOINC. Other activities: disclosed no relevant relationships. The other co-authors have no competing interests to disclose.

REFERENCES

1 Shapiro JS, Mostashari F, Hripcsak G, et al. Using health information exchange to improve public health. Am J Public Health 2011;101(4):616-23.
2 Slovis BH, Lowry T, Delman BN, et al. Patient crossover and potentially avoidable repeat computed tomography exams across a health information exchange. J Am Med Inform Assoc 2017;24(1):30-8.
3 Wang KC, Patel JB, Vyas B, et al. Use of radiology procedure codes in health care: the need for standardization and structure. Radiographics 2017;37(4):1099-110.
4 Wang KC. Standard lexicons, coding systems and ontologies for interoperability and semantic computation in imaging. J Digit Imaging 2018; doi: 10.1007/s10278-018-0069-8.
5 Langlotz CP. RadLex: a new method for indexing online educational materials. Radiographics 2006;26(6):1595-7.
6 Vreeman DJ, McDonald CJ, Huff SM. LOINC®: a universal catalogue of individual clinical observations and uniform representation of enumerated collections. Int J Funct Inform Personal Med 2010;3(4):273-91.
7 Loinc.org. Regenstrief and the RSNA are working together to unify radiology procedures in LOINC and RadLex. 2016. https://loinc.org/collaboration/rsna; http://www.webcitation.org/6sdhWWhYq. Accessed March 23, 2018.
8 Beitia AO, Kuperman G, Delman BN, Shapiro JS. Assessing the performance of LOINC and RadLex for coverage of CT scans across three sites in a health information exchange. AMIA Annu Symp Proc 2013;2013:94-102.
9 Beitia AO, Lowry TL, Vreeman DJ, et al. Constructing diagnostic CT exam lists for sites across an HIE. Poster session presented at: 2016 AMIA Annual Symposium; Nov 12-16, 2016; Chicago, IL.
10 Vreeman DJ, Abhyankar S, Wang KC, et al. The LOINC RSNA radiology playbook - a unified terminology for radiology procedures. J Am Med Inform Assoc 2018;25(7):885-93.
11 McDonald C, Huff S, Deckard J, et al. Logical Observation Identifiers Names and Codes (LOINC®) Users' Guide. Indianapolis, IN: Regenstrief Institute; 2017. Annex: RadLex-LOINC radiology playbook user guide; Annex pages 1-18. https://loinc.org/download/loinc-users-guide/. Accessed April 7, 2018.
12 Beitia AO, Lowry TL, Vreeman DJ, et al. Mapping of HIE CT codes to LOINC: analysis of inconclusive codes and quantification of mapping times. Poster session presented at: 2018 AMIA Informatics Summit; Mar 12-15, 2018; San Francisco, CA.
13 Loinc.org. Submitting New Term Requests. 2018. https://loinc.org/submissions/new-terms/. Accessed May 23, 2018.
14 Loinc.org. LOINC Codes in Development for Next Release. 2018. https://loinc.org/prerelease/. Accessed May 23, 2018.
15 Cornet R, de Keizer NF, Abu-Hanna A. A framework for characterizing terminological systems. Methods Inf Med 2006;45(3):253-66.
16 Vreeman DJ, Finnell JT, Overhage JM. A rationale for parsimonious laboratory term mapping by frequency. AMIA Annu Symp Proc 2007;2007:771-5.
17 Vreeman DJ, McDonald CJ. Automated mapping of local radiology terms to LOINC. AMIA Annu Symp Proc 2005:769-73.
18 Vreeman DJ, McDonald CJ. A comparison of intelligent mapper and document similarity scores for mapping local radiology terms to LOINC. AMIA Annu Symp Proc 2006:809-13.
19 Vreeman DJ, Stark M, Tomashefski GL, Phillips DR, Dexter PR. Embracing change in a health information exchange. AMIA Annu Symp Proc 2008:768-72.
20 International Early Lung Cancer Action Program (I-ELCAP). 2013. http://www.ielcap.org/. Accessed April 7, 2018.

© The Author(s) 2018. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model).
