Evaluating the American College of Emergency Physicians’ clinical policy awareness and trustworthiness among academic physicians
Original Article


Ian T. Ferguson1, Matthew Singh2, Suneel Upadhye3, Jason Haukoos4,5,6, Christopher R. Carpenter7

1Washington University School of Medicine, St. Louis, MO, USA; 2Michigan State University Corewell Health, Grand Rapids, MI, USA; 3Division of Emergency Medicine, Department of Medicine, McMaster University, Hamilton, ON, Canada; 4Department of Emergency Medicine, Denver Health, Denver, CO, USA; 5Department of Emergency Medicine, University of Colorado School of Medicine, Aurora, CO, USA; 6Department of Epidemiology, Colorado School of Public Health, Aurora, CO, USA; 7Department of Emergency Medicine, Mayo Clinic, Rochester, MN, USA

Contributions: (I) Conception and design: IT Ferguson, CR Carpenter; (II) Administrative support: CR Carpenter; (III) Provision of study materials or patients: IT Ferguson, M Singh, S Upadhye, CR Carpenter; (IV) Collection and assembly of data: All authors; (V) Data analysis and interpretation: IT Ferguson, CR Carpenter; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

Correspondence to: Ian T. Ferguson, MD, MPH. Washington University School of Medicine, 1015 Bowles Ave., St. Louis, MO 63026, USA. Email: itfergus@gmail.com.

Background: Clinical policies have been developed by the American College of Emergency Physicians (ACEP) since 1990. Awareness and trustworthiness of these clinical policies among emergency medicine clinicians are largely unknown. The National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) instrument is a validated tool to evaluate trustworthiness based on the Institute of Medicine standards. In this observational study, we assessed familiarity with, and the trustworthiness of, ACEP clinical policies using NEATS among resident and attending physicians at four sites across the United States (US) and Canada.

Methods: Residents and attending physicians from three academic sites in the US and one in Canada were invited to complete a seven-question survey assessing familiarity with ACEP clinical policies and guideline appraisal tools, and asking how often clinical policies are used in daily patient care. They were then randomly assigned two ACEP clinical policies to evaluate using NEATS. We calculated intraclass correlation coefficients (ICCs) for clinical policies that had at least four raters to assess reliability between different participants rating the same clinical policy. We also compared available NEATS evaluations from ACEP to the evaluations in our cohort.

Results: Thirty-seven participants completed the background survey, of which 14 were attending physicians and 23 were resident physicians at various stages in training. Fifty-nine percent of participants were unaware of or only slightly familiar with the ACEP clinical policies prior to participating in the study, and none identified as being “extremely” familiar. Sixty percent of participants identified as either rarely or very rarely using ACEP clinical policies in daily patient care. Twenty-four of the 37 participants also completed the NEATS instrument (from three of the four recruited sites). The ACEP clinical policies with the highest trustworthiness measured by NEATS were the policies on opioids, non-ST-segment elevation myocardial infarction (NSTEMI), and community-acquired pneumonia, while the lowest-ranked policies were headache, appendicitis, and sedation. For the three clinical policies with at least four raters, the ICC was ≥0.7, indicating moderate to high reliability. There was no significant difference in scores between attendings and residents (71 vs. 76, P=0.13).

Conclusions: We evaluated ACEP clinical policy awareness and usage among a cohort of resident and attending physicians in the US and Canada. Although limited by resident and attending participation (58% of clinical policies had two or fewer raters), we observed generally low awareness and usage of ACEP clinical policies in practice. Trustworthiness measured by NEATS was highest for the clinical policy on opioids and lowest for sedation. Reliability between raters was generally high for the three clinical policies with at least four raters; the small number of raters made reliability assessment of the remaining clinical policies impossible.

Keywords: Health care evaluation mechanisms; guidelines as topic; health care surveys; delivery of health care


Received: 23 August 2025; Accepted: 16 March 2026; Published online: 23 March 2026.

doi: 10.21037/jphe-25-44


Highlight box

Key findings

• In a multi-site cohort of residents and attending emergency physicians, we observed generally low awareness and usage of American College of Emergency Physicians (ACEP) clinical policies in practice, as well as low awareness of guideline evaluation tools.

What is known and what is new?

• Despite the longevity of ACEP clinical policies, scant research has evaluated emergency clinician awareness, trustworthiness, or application of these clinical policies. ACEP clinical policies have been externally reviewed in a single prior study and were found weakest in the domain of applicability.

• To our knowledge, this is the first study evaluating ACEP clinical policy awareness and trustworthiness in a cohort of both resident and attending physicians across multiple sites.

What is the implication, and what should change now?

• These results offer insights into areas to improve education at the resident (and even attending) level to improve awareness about ACEP clinical policies, as well as the highest yield opportunities to improve trustworthiness of those documents among emergency clinicians. Specifically, future emergency medicine clinical practice guideline authorship groups should prioritize meaningful patient/public engagement and optimize dissemination strategies at all training levels to maximize awareness and uptake of recommendations into clinical practice.


Introduction

Background

Translation of clinical practice guidelines (CPGs) into practice requires awareness and trust by the clinician implementing the recommendations at the bedside (1). Prior to the publication of the Guidelines for Reasonable and Appropriate Care in the Emergency Department (GRACE; https://www.saem.org/publications/grace) guidelines by the Society for Academic Emergency Medicine (SAEM), the only emergency medicine (EM) organization to regularly publish clinical guidance was the American College of Emergency Physicians (ACEP). Though designated “clinical policies” rather than CPGs, ACEP has released and frequently updated 49 clinical policies since 1990. Currently, there are 22 active clinical policies (the rest have since been retired by ACEP) (2). Clinical policy guidance by the largest EM professional organization provides a resource for the emergency physician to bridge knowledge gaps, reduce practice variability, improve patient safety, and mitigate medicolegal risk (3). However, there is a paucity of research regarding awareness, acceptance, applicability, or actionability of ACEP’s clinical policies, and researchers have not previously explored the translation of specific recommendations into practice among EM clinicians. One prior external evaluation concluded that while ACEP’s clinical policies were rated highly on a measure of “overall assessment” (a composite of subjective ratings and measures from a standardized instrument), they were weakest in the domain of applicability, likely contributing to poor implementation and diminished trust (4).

The National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) instrument evaluates CPG adherence to trustworthiness standards (5). This 15-item tool was originally developed to evaluate the eight quality standards outlined by the Institute of Medicine (IOM; now National Academy of Medicine) for CPGs in the National Guideline Clearinghouse (6). The final NEATS instrument comprises relevant items from the previously developed Appraisal of Guidelines, Research, and Evaluation (AGREE) II tool, along with four new items identified as important from the IOM standards without existing correlates in AGREE II (7). NEATS underwent field testing and external evaluation by independent stakeholders to assess whether the included items reflect the IOM standards, and it has demonstrated high interrater reliability (5).

NEATS has been used to evaluate ACEP clinical policies through the Emergency Care Research Institute’s (ECRI) guideline repository called the ECRI Guidelines Trust (8). This repository evaluates CPGs fulfilling inclusion criteria with a Transparency and Rigor Using Standards of Trustworthiness (TRUST) scorecard based on the NEATS Instrument. However, the results of these evaluations and the methods by which the ACEP clinical policies were evaluated are not publicly available.

Rationale and knowledge gap

Whether EM clinicians have trust in or awareness of ACEP clinical policies is unknown and critically important for implementation at the bedside (9,10). Further research into clinician appraisal of the ACEP clinical policies across a wide training spectrum could help inform notable opportunities to improve CPG awareness, evaluation, and implementation among both trainees and attending physicians in EM.

Objective

The goals of this investigation are to assess familiarity with ACEP clinical policies among a cohort of resident and attending EM physicians, as well as to evaluate the trustworthiness of the clinical policies using the NEATS instrument. We also compare the NEATS scores from the participants in this study to the NEATS scores for available clinical policies with existing TRUST scores from ECRI.


Methods

Study design

This was a multicenter cross-sectional observational study across three sites in the United States (US) (Washington University in St. Louis, Michigan State University, University of Colorado) and one site in Canada (McMaster University). Two of the sites have 4-year residency programs (Washington University in St. Louis, University of Colorado); one site has a 3-year residency program (Corewell-Grand Rapids/Michigan State); and one site has a 5-year residency program (McMaster). Participants included residents and attending EM physicians from all four sites.

Measures

We created a seven-item survey to assess familiarity with the ACEP clinical policies, guideline appraisal, and other CPGs (see appendix available at https://cdn.amegroups.cn/static/public/jphe-25-44-1.pdf). This survey was created to explore general awareness of the ACEP clinical policies and guideline appraisal tools, including NEATS. The questions were pilot tested with a cohort of resident and attending physicians for clarity and face validity. As a result of the testing, we modified the survey to clarify the wording of questions 2 and 3 and expanded the scale to include an associated probability reference for each ordinal level (e.g., “always” corresponds to a probability of 99/100, while “almost never” corresponds to 1/100). The second part of the study asked each participant to rate assigned clinical policies using the NEATS instrument, which comprises 15 items evaluated across 12 sections (see Appendix 1). Twelve of the items are rated on a Likert scale of 1 to 5 (1 indicating lowest adherence), and three items are categorical (“yes”, “no”, or “unknown”). The three categorical items were coded as a score of 5 if the rating was “yes” and 0 if the rating was “no” or “unknown”. The total possible score therefore ranges from 12 to 75, with 12 reflecting the lowest possible adherence and 75 the highest.
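The scoring scheme above is simple enough to express directly. The following is an illustrative Python sketch (the study’s analysis used R, and the function names here are our own invention):

```python
def neats_total(likert, categorical):
    """Total NEATS score: 12 Likert items (scored 1-5) plus 3 categorical
    items coded 5 for "yes" and 0 for "no" or "unknown"."""
    assert len(likert) == 12 and all(1 <= x <= 5 for x in likert)
    assert len(categorical) == 3
    return sum(likert) + sum(5 if c == "yes" else 0 for c in categorical)


def standardized(score, min_score=12, max_score=75):
    """Rescale a total NEATS score to a 0-100 scale."""
    return (score - min_score) / (max_score - min_score) * 100
```

For example, the lowest possible total (all Likert items rated 1, no categorical “yes”) is 12, which standardizes to 0, and the highest possible total is 75, which standardizes to 100.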

Prior work has shown that NEATS inter-rater reliability is optimized with five raters per guideline (11). Our goal was to recruit seven raters per clinical policy to allow for attrition. The ACEP clinical policies were allocated into 10 participant groups with temporal balancing so that no group would have two policies released within 4 years of one another. Each participant was randomly assigned to a group and asked to complete both clinical policies within the group. Therefore, the total estimated sample size was 70 participants, with the aim of a balanced spread across all training levels. This sample size balances the need for multiple raters per clinical policy against the feasibility of asking busy clinicians to complete the tool, with the NEATS assessment of each clinical policy expected to require 2–3 hours on average, based on the co-authors’ personal experience assessing ACEP clinical policies using the NEATS instrument (5). The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study was approved by the Institutional Review Board of Washington University in St. Louis (IRB No. 202111153) and informed consent was obtained from all individual participants.

Procedures

The background survey and NEATS instrument were transposed to an electronic form using Research Electronic Data Capture (REDCap). We randomly selected residents from each level of training, along with all faculty at each academic site, to receive study invitations. We planned to recruit 19 total participants (residents and faculty) at Washington University and 17 participants each at University of Colorado, McMaster, and Michigan State University. Three residents per class were selected at the 4-year training programs (Washington University and University of Colorado), four residents per class at the 3-year training program (Michigan State University), and two residents per class at the 5-year training program (McMaster). The study authors were not eligible to participate and did not complete the survey or NEATS assessments.

After randomly selecting the residents and faculty, site principal investigators (PIs) sent emails with the standardized participation letter to the selected individuals. If the respondent agreed to participate, the background survey was automatically sent via REDCap. Following completion of the background survey, introductory instructions on using NEATS, along with a training video, were automatically sent. After completion of the training material, the NEATS tool and the assigned clinical policy (as an attached PDF) were sent. Each participant was asked to complete the clinical policy evaluation within 2 weeks of receipt of the REDCap form. Completion of the first clinical policy evaluation prompted REDCap to send the second and final clinical policy for completion.

Clinical policies were split into 10 groups of two, distributed so that the two policies in each group were temporally balanced by publication date (the clinical policy on seizure was duplicated in groups 2 and 7). Completion of the background survey resulted in the participant being automatically assigned an integer from 1 to 10 corresponding to the clinical policy groups. These numbers were assigned based on the order of acceptance from the consent form in REDCap. To help ensure even distribution of the clinical policy groups across training levels (so that NEATS assessments of the same clinical policy could be compared across training levels), we sent REDCap invitations in sequential order by training level at each site.
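The precise pairing algorithm was not prespecified beyond the 4-year constraint. One simple way to construct such temporally balanced pairs, shown here as a hypothetical Python sketch with invented policy names and publication years, is to sort the policies by year and pair each policy in the older half with its counterpart in the newer half:

```python
def temporally_balanced_pairs(policies, min_gap=4):
    """Pair (name, year) policies so the two policies in each group were
    published at least `min_gap` years apart: sort by year, then pair
    each policy in the older half with its counterpart in the newer half.
    Raises ValueError if any resulting pair violates the gap."""
    ordered = sorted(policies, key=lambda p: p[1])
    half = len(ordered) // 2
    pairs = list(zip(ordered[:half], ordered[half:]))
    for (_, y1), (_, y2) in pairs:
        if y2 - y1 < min_gap:
            raise ValueError("temporal balance violated")
    return pairs


# Hypothetical example: 20 policies allocated into 10 groups of two
policies = [(f"policy_{i}", year)
            for i, year in enumerate(list(range(1995, 2005)) + list(range(2012, 2022)))]
groups = temporally_balanced_pairs(policies)
```

For an even number of policies sorted by year, this halves-pairing keeps within-pair gaps large; any other pairing satisfying the 4-year constraint would serve equally well.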

An optional slide show presentation was also included with the NEATS evaluation, and participants were asked to review it prior to evaluation. The slide show reviewed the NEATS tool for one of the clinical policies (“Blunt Abdominal Trauma”, see appendix available at https://cdn.amegroups.cn/static/public/jphe-25-44-2.pdf) (12). This clinical policy was not assigned for further review. Therefore, we evaluated a total of 19 published clinical policies available during the recruitment period (January 2022 to July 2022), excluding the clinical policies on severe agitation [2023], asymptomatic hypertension [2025], and acute ischemic stroke [2023], which were unavailable during that period (13-15). Of note, the policy on acute ischemic stroke [2023] is distinct from the included policy on tPA in ischemic stroke [2015] (15,16). All data were stored securely in REDCap. Participants received an honorarium of $30.00 after completion of the survey and both clinical policy assessments [the funding was from a grant from the Missouri Chapter of ACEP (MOCEP); the funder had no role in the design, implementation, or analysis of this investigation].

Analysis

We qualitatively evaluated the results of the pre-NEATS survey and compared the results by resident class and between residents and attendings. We report summary statistics [mean, standard error of the mean (SEM), median if skewed] for the survey and used Student’s t-test assuming unequal variance to compare means. We calculated a total NEATS score and a standardized score, [(score − minimum score)/(maximum score − minimum score)] × 100, as recommended for a similar instrument (AGREE II) (7). To assess reliability and provide an overall estimate of variance, we had planned to use generalizability theory to calculate G-coefficients (17). However, given sample size limitations, this method was not feasible; instead, we report summary statistics for aggregate scores and intraclass correlation coefficients (ICCs) for the subset of clinical policies with at least four raters, using a one-way random effects model with absolute agreement (18). A P value of <0.05 was considered statistically significant for all hypothesis testing.
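To make the reliability computation concrete, here is a pure-Python sketch of the one-way random effects ICC with absolute agreement, ICC(1,1). The study itself used R; treating the rated targets as rows (e.g., one row per NEATS item) with one column per rater is our assumption for illustration:

```python
def icc_oneway(ratings):
    """ICC(1,1): one-way random effects model, absolute agreement.
    `ratings` is a list of rows, one per rated target (e.g., one per
    NEATS item), each row holding the scores given by the k raters."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # One-way ANOVA mean squares: between targets and within targets
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement across raters yields an ICC of 1.0, while ratings that vary as much within a target as between targets drive the ICC toward 0.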

We compared NEATS scores from our cohort with TRUST scores from five clinical policies [venous thromboembolism, non-ST-segment elevation myocardial infarction (NSTEMI), headache, opioids, and community-acquired pneumonia] (19-23). While a total of nine policies with TRUST scores were made available to us, one [acute ischemic stroke (16)] was not evaluated in our study because it was not released during enrollment, and three [acute heart failure (24), appendicitis (25), and traumatic brain injury (TBI) (26)] had TRUST scorecards only for the updated policy version which was not evaluated in our study due to a release date after our enrollment period. There were two TRUST scores for each of the clinical policies. We averaged the score for each domain to create a single TRUST score for each available policy. We were not provided with the details of the methodology for these TRUST scores, and it is uncertain whether each TRUST score was obtained by a single rater or several different raters. R statistical software version 4.5 was used for data management, analysis, and figures (27).


Results

Thirty-seven participants completed the background survey, 24 completed at least one NEATS assessment, and 23 completed both NEATS assessments, despite our eventually sending the study invitation to all residents and attendings at the four sites after the successive recruitment waves specified in the Methods failed to meet the prespecified sample size. Of the 24 participants who completed at least one NEATS assessment, 9 (37.5%) were attending physicians with a median of 9 years in practice since residency training (Table 1). Most participants (62.5%) were from a single site (Washington University), while one site (University of Colorado) had zero participants.

Table 1

Participants by training level (resident PGY I to IV) and median years since training with IQR

Training level Background survey NEATS assessment
PGY
   I 6 5
   II 7 4
   III 7 4
   IV 3 2
Attending 14 9
Years since residency 14 [11.8] 9 [14]
Total 37 24

Data are presented as number or median [IQR]. IQR, interquartile range; PGY, resident program year.

Background survey results

Fifty-nine percent of participants were either “not at all” or only “slightly” familiar with the ACEP clinical policies (Table 2). No participant identified being “extremely” familiar with the clinical policies. Twenty-two participants (59%) reported they did not know the class of evidence (I, II, and III) designation for the ACEP clinical policies; 14 of the participants (38%) reported they knew but gave incorrect answers; only 1 participant (3%) accurately reported the classes of evidence. Three participants (8%) correctly identified the levels of recommendation designation for ACEP clinical policies (A, B, and C), while 26 participants (70%) reported not knowing the recommendation levels and 8 participants (22%) reported incorrect nomenclature. One participant had previously used NEATS, while 86% had not heard of NEATS prior to participating in the study. Most participants reported rarely (described as a frequency of 10–20 times every 100 shifts) incorporating ACEP clinical policies in daily patient care (Table 2). No participants reported using the clinical policies “always” or “nearly always”. A higher proportion of participants use other clinical policies more frequently in daily patient care (Table 2).

Table 2

Survey responses for three questions with scale and respondent count

Survey question Scale Count [%]
Familiarity ACEP Not at all 9 [24]
Slightly 13 [35]
Somewhat 10 [27]
Moderately 5 [14]
Extremely 0 [0]
Incorporation ACEP Very rarely (1/100) 6 [16]
Rarely (10–20/100) 16 [43]
Occasionally (40–50/100) 10 [27]
Frequently (60–70/100) 3 [8]
Very frequently (70–80/100) 2 [5]
Nearly always (9/10) 0 [0]
Always (99/100) 0 [0]
Incorporation other Very rarely (1/100) 0 [0]
Rarely (10–20/100) 4 [11]
Occasionally (40–50/100) 10 [27]
Frequently (60–70/100) 12 [32]
Very frequently (70–80/100) 5 [14]
Nearly always (9/10) 6 [16]
Always (99/100) 0 [0]

Survey results from the 37 participants. Familiarity ACEP: how familiar are you with ACEP clinical policies? Incorporation ACEP: how often do you incorporate ACEP clinical policies, when relevant, into your daily patient care? Incorporation other: how often do you incorporate other clinical practice guidelines, when relevant, into your daily patient care? ACEP, American College of Emergency Physicians.

NEATS results

The total average score for all rated policies was 59/75 (±7.3) with a range of 45 to 68.5. When averaged across raters, the highest rated policy was opioids, and the lowest rated policy was procedural sedation (Table 3) (22,28). Resident physicians tended to score policies higher than attending physicians, though the difference in mean standardized score was not significant (76 vs. 71, P=0.13). The NEATS item that ranked the lowest was “patient and public perspectives” and the highest ranked item was “methodologist involvement” (Figure 1). Comparing the NEATS scores to the TRUST scores from ECRI, we observed that TRUST scores for three of five clinical policies ranked lower than the total NEATS score in our cohort (Figure 2). The ICCs calculated from a one-way random effects model with absolute agreement for clinical policies with at least four raters all showed moderate to high reliability {seizure, 0.81 [95% confidence interval (CI): 0.55–1]; TBI, 0.77 (95% CI: 0.53–1); community-acquired pneumonia, 0.80 (95% CI: 0.56–1)} (17,23,26,29).

Table 3

Mean NEATS score for each clinical policy with number of raters, SD, range (maximum possible score of 75), and standardized score

Policy Number of raters Mean ± SD Min Max Standardized score
Opioids 2 68.5±3.5 66 71 89.7
NSTEMI 3 66.3±1.5 65 68 86.2
CAP 4 63.8±5.5 59 71 82.1
CHF 2 62.5±12 54 71 80.1
Psych 2 62.0±12.7 53 71 79.3
CO 2 61.5±0.7 61 62 78.6
TAD 2 61.0±5.7 57 65 77.8
TPA 2 60.0±17 48 72 76.1
STEMI 3 59.7±6.1 53 65 75.7
HTN 3 59.3±5 54 64 75.1
Pregnancy 3 59.0±5 54 64 74.6
Mild TBI 4 57.0±3.4 52 59 71.4
Seizure 4 55.2±4.2 49 58 68.7
Fever 3 55.0±8.2 48 64 68.2
VTE 2 55.0±4.2 52 58 68.2
TIA 2 54.0±7.1 49 59 66.7
Headache 1 50.0 50 50 60.3
Appendicitis 1 49.0 49 49 58.7
Sedation 2 45.0±0 45 45 52.3

Standardized score = (score − minimum score)/(maximum score − minimum score) × 100. CAP, community-acquired pneumonia; CHF, congestive heart failure; CO, carbon monoxide; HTN, hypertension; Max, maximum; Min, minimum; NSTEMI, non-ST-segment elevation myocardial infarction; Psych, psychiatric patient; SD, standard deviation; STEMI, ST-segment elevation myocardial infarction; TAD, thoracic aortic dissection; TBI, traumatic brain injury; TIA, transient ischemic attack; TPA, tissue plasminogen activator; VTE, venous thromboembolic disease.

Figure 1 Heatmap showing distribution of scores by individual NEATS item. CAP, community-acquired pneumonia; CHF, congestive heart failure; CO, carbon monoxide; HTN, hypertension; NEATS, National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards; NSTEMI, non-ST-segment elevation myocardial infarction; Psych, psychiatric patient; STEMI, ST-segment elevation myocardial infarction; TAD, thoracic aortic dissection; TBI, traumatic brain injury; TIA, transient ischemic attack; TPA, tissue plasminogen activator; VTE, venous thromboembolic disease.
Figure 2 Bar plot showing the average total NEATS score for each clinical policy in our cohort, the average total NEATS score for the TRUST scorecards, and the average NEATS score for attending and resident physicians in our cohort. CAP: 3 residents and 1 attending; headache: 1 resident; NSTEMI: 2 residents and 1 attending; opioids: 1 resident and 1 attending; VTE: 1 resident and 1 attending. CAP, community-acquired pneumonia; NEATS, National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards; NSTEMI, non-ST-segment elevation myocardial infarction; TRUST, Transparency and Rigor Using Standards of Trustworthiness; VTE, venous thromboembolism.

Discussion

Key findings

In a three-site cohort of residents and attending emergency department (ED) physicians, we observed low awareness and usage of ACEP clinical policies in practice, as well as low awareness of guideline evaluation tools such as NEATS and AGREE II. More participants identified as using other clinical policies more often than ACEP clinical policies at the bedside. The ACEP clinical policies with the highest trustworthiness measured by NEATS were the policies on opioids, NSTEMI, and community-acquired pneumonia, while the lowest-ranked policies were headache, appendicitis, and sedation. The incorporation of patient and public perspectives was identified as the domain most consistently deficient among all evaluated ACEP clinical policies (30).

Strengths and limitations

The strength of this investigation is the novelty of evaluating ACEP clinical policies in a three-site cohort across a wide range of clinical training levels. The principal limitation was our sample size. We had originally planned for approximately 70 participants across all four sites, with a reasonable distribution among different levels of training. Unfortunately, we obtained completed NEATS scores for only 23 participants across three of the four sites. Having so few raters per policy (all but three policies had fewer than four raters, and some had only one) precluded our preplanned reliability analysis using generalizability theory, leaving us with a much less robust analysis of ICCs on a limited subset of the policies. Our recruitment difficulties probably reflect the hours-long time commitment entailed in reviewing the NEATS training module, completing the survey, reviewing two clinical policies, and then assessing those same clinical policies with NEATS. Future researchers in this field will probably require sufficient resources to reimburse participants’ 2–3 hours of time with more than $30. We were also unable to compare all the available TRUST scorecards to our study NEATS scores because some TRUST scorecards were available only for updated clinical policies, not the versions our study evaluated. Furthermore, there are inherent limitations in comparing the TRUST scorecards to our study data given the lack of available methodology for how the TRUST scorecards were obtained. Our participant population also represented a limited geographic sampling of EM physicians and did not include physicians from community or non-academic sites, which might render different results. Finally, though we attempted to standardize training with the optional slide show, it was not routinely utilized, nor was training assessed systematically across our study population, potentially limiting the fidelity and accuracy of NEATS scores in our population.

Explanation of findings and implications

Clinical guidance in the form of clinical policies or CPGs represents an invaluable asset to every medical specialty to synthesize available evidence, engage stakeholders to improve quality, and help define medical standards (3). Albeit in a small cohort, we observed low awareness and usage of ACEP clinical policies. While this study did not explore reasons for low awareness, a possible explanation is the paucity of level A recommendations: ACEP clinical policies are largely based on either expert opinion or on lower classes of evidence (31). Indeed, for the 19 clinical policies evaluated in this cohort, there are only four level A recommendations out of 69 total clinical questions (5.8%). However, if generalizable, the findings in this study do raise concerns about a lack of awareness regarding the methodology of ACEP clinical policies with only 3% correctly knowing the classes of evidence designation and 8% knowing the levels of recommendation designation of A (>1 class I evidence or multiple class II studies), B (incorporate at least one class II evidence study or a strong consensus from class III evidence), and C (based on either only class III evidence or expert consensus). While clinical policies do not by themselves dictate standard of care, nor does correctly following their recommendations guarantee exculpation, applying clinical policies incorrectly (for instance, by not understanding the evidence basis or the strength of recommendations) increases medicolegal exposure for the EM clinician (32,33). More importantly, there is the potential to inappropriately apply clinical policy recommendations with unwarranted certainty if there is a lack of understanding of how the recommendations are derived. 
Given this potential for a generalized lack of awareness of clinical policy methodology, residency programs should consider formal training in guideline evaluation to improve awareness of both the ACEP clinical policies, as well as evaluation tools such as NEATS, AGREE II, and GRADE to improve the trustworthiness of EM guidelines (5,7,34).

In contrast to the lack of awareness of how the clinical policies are reported, the NEATS items evaluating criteria on methodological rigor, summarization of evidence, and rating the quality of evidence were all highly ranked, compared to the lowest rated criterion of “patient and public perspectives”. This standard evaluates whether the CPG explicitly and transparently seeks the perspectives of both patients and patient surrogates and advocates (30). The uniformly low rating of this domain, along with the similar criterion of incorporating external review from a full spectrum of stakeholders, reflects an opportunity to improve the trustworthiness of future iterations of ACEP clinical policies. The clinical policies seem to be viewed as methodologically rigorous but possibly lacking in relevance to the patient’s perspective. This finding is consistent with prior work evaluating ACEP clinical policies using AGREE II, which observed low ratings in applicability and stakeholder involvement (4). While we did not identify a significant difference between the ECRI TRUST ratings of five clinical policies and the ratings in our cohort, increasing transparency in CPG evaluation with a clear plan for updating existing policies reflects another area of possible improvement for ACEP clinical policy development (35).


Conclusions

In a multi-site cohort of resident and attending ED physicians, we observed generally low awareness and usage of ACEP clinical policies in practice, as well as low awareness of guideline evaluation tools such as NEATS and AGREE II. The ACEP clinical policies with the highest trustworthiness as measured by NEATS were the policies on opioids, NSTEMI, and community-acquired pneumonia, while the lowest-ranked policies were those on headache, appendicitis, and sedation. The incorporation of patient and public perspectives was the domain most consistently deficient among all evaluated ACEP clinical policies. Future EM guideline authorship groups should prioritize meaningful patient and public engagement during all phases of clinical policy development and optimize dissemination strategies at all training levels to maximize awareness and bedside uptake of clinical policy recommendations.


Acknowledgments

The authors appreciate the contribution of Kaeli Vandertulip from the American College of Emergency Physicians, who facilitated communication with the Emergency Care Research Institute. This work was previously presented at the American College of Emergency Physicians (ACEP) 2023 Research Forum in Philadelphia (accepted as an oral presentation).


Footnote

Data Sharing Statement: Available at https://jphe.amegroups.com/article/view/10.21037/jphe-25-44/dss

Peer Review File: Available at https://jphe.amegroups.com/article/view/10.21037/jphe-25-44/prf

Funding: Research reported in this publication was supported, in part, by a resident research grant from the Missouri Branch of the American College of Emergency Physicians.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jphe.amegroups.com/article/view/10.21037/jphe-25-44/coif). C.R.C. is Associate Editor for the Journal of the American Geriatrics Society, leads the Society for Academic Emergency Medicine Guidelines for Reasonable and Appropriate Care in the Emergency Department (GRACE) Committee, and serves on the American College of Emergency Physicians Clinical Policy Committee. C.R.C. is also Chair of the American College of Emergency Physicians’ Geriatric Emergency Department Accreditation Advisory Board, serves on the Clinician-Scientist Transdisciplinary Aging Research Leadership Core, and is an editor for the American College of Emergency Physicians’ MyEMCert Program. S.U. is a guidelines Methodologist for the SAEM GRACE group. The other authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. The study was approved by the Institutional Review Board of Washington University in St. Louis (IRB No. 202111153) and informed consent was obtained from all individual participants.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Gaddis GM, Greenwald P, Huckson S. Toward improved implementation of evidence-based clinical algorithms: clinical practice guidelines, clinical decision rules, and clinical pathways. Acad Emerg Med 2007;14:1015-22. [Crossref] [PubMed]
  2. ACEP. Clinical Policies. Accessed 12/3/2020. Available online: https://www.acep.org/patient-care/clinical-policies/
  3. Carpenter CR, Bellolio MF, Upadhye S, et al. Navigating uncertainty with GRACE: Society for Academic Emergency Medicine’s guidelines for reasonable and appropriate care in the emergency department. Acad Emerg Med 2021;28:821-5. [Crossref] [PubMed]
  4. Zupon A, Rothenberg C, Couturier K, et al. An appraisal of emergency medicine clinical practice guidelines: Do we agree? Int J Clin Pract 2019;73:e13289. [Crossref] [PubMed]
  5. Jue JJ, Cunningham S, Lohr K, et al. Developing and Testing the Agency for Healthcare Research and Quality’s National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) Instrument. Ann Intern Med 2019;170:480-7. [Crossref] [PubMed]
  6. Institute of Medicine (US) Committee on Standards for Developing Trustworthy Clinical Practice Guidelines. Clinical Practice Guidelines We Can Trust. Graham R, Mancher M, Miller Wolman D, et al. editors. Washington: National Academies Press (US); 2011.
  7. Brouwers MC, Kho ME, Browman GP, et al. AGREE II: advancing guideline development, reporting and evaluation in health care. J Clin Epidemiol 2010;63:1308-11. [Crossref] [PubMed]
  8. ECRI Guidelines Trust. Accessed 3/26/2024. Available online: https://guidelines.ecri.org/
  9. Davis DA, Taylor-Vaisey A. Translating guidelines into practice. A systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ 1997;157:408-16.
  10. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999;282:1458-65. [Crossref] [PubMed]
  11. Upadhye S, Yan J, Ghate G, et al. Rating the Trustworthiness of Emergency Medicine Clinical Practice Guidelines with the NEATS Instrument. Can J Emerg Med 2021;23:S37.
  12. Diercks DB, Mehrotra A, Nazarian DJ, et al. Clinical policy: critical issues in the evaluation of adult patients presenting to the emergency department with acute blunt abdominal trauma. Ann Emerg Med 2011;57:387-404. [Crossref] [PubMed]
  13. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Severe Agitation. Clinical Policy: Critical Issues in the Evaluation and Management of Adult Out-of-Hospital or Emergency Department Patients Presenting With Severe Agitation: Approved by the ACEP Board of Directors, October 6, 2023. Ann Emerg Med 2024;83:e1-e30. [Crossref] [PubMed]
  14. Wolf SJ, Lo B, Shih RD, et al. Clinical policy: critical issues in the evaluation and management of adult patients in the emergency department with asymptomatic elevated blood pressure. Ann Emerg Med 2013;62:59-68. [Crossref] [PubMed]
  15. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Acute Ischemic Stroke. Clinical Policy: Critical Issues in the Management of Adult Patients Presenting to the Emergency Department With Acute Ischemic Stroke. Ann Emerg Med 2023;82:e17-64. [Crossref] [PubMed]
  16. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Use of Intravenous tPA for Ischemic Stroke; Brown MD, Burton JH, et al. Clinical Policy: Use of Intravenous Tissue Plasminogen Activator for the Management of Acute Ischemic Stroke in the Emergency Department. Ann Emerg Med 2015;66:322-333.e31.
  17. Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. 5th ed. Oxford: Oxford University Press; 2015.
  18. Fisher RA. Intraclass correlations and the analysis of variance. In: Fisher RA. Statistical Methods for Research Workers. New Delhi: Cosmo Publications for Genesis Pub; 1925:187-210.
  19. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Thromboembolic Disease. Clinical Policy: Critical Issues in the Evaluation and Management of Adult Patients Presenting to the Emergency Department With Suspected Acute Venous Thromboembolic Disease. Ann Emerg Med 2018;71:e59-e109. [Crossref] [PubMed]
  20. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Suspected Non-ST-Elevation Acute Coronary Syndromes. Clinical Policy: Critical Issues in the Evaluation and Management of Emergency Department Patients With Suspected Non-ST-Elevation Acute Coronary Syndromes. Ann Emerg Med 2018;72:e65-e106. [Crossref] [PubMed]
  21. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Acute Headache. Clinical Policy: Critical Issues in the Evaluation and Management of Adult Patients Presenting to the Emergency Department With Acute Headache. Ann Emerg Med 2019;74:e41-74. [Crossref] [PubMed]
  22. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Opioids. Clinical Policy: Critical Issues Related to Opioids in Adult Patients Presenting to the Emergency Department. Ann Emerg Med 2020;76:e13-39. [Crossref] [PubMed]
  23. American College of Emergency Physicians Clinical Policies Subcommittee (Writing Committee) on Community-Acquired Pneumonia. Clinical Policy: Critical Issues in the Management of Adult Patients Presenting to the Emergency Department With Community-Acquired Pneumonia. Ann Emerg Med 2021;77:e1-e57. [Crossref] [PubMed]
  24. Silvers SM, Howell JM, Kosowsky JM, et al. Clinical policy: Critical issues in the evaluation and management of adult patients presenting to the emergency department with acute heart failure syndromes. Ann Emerg Med 2007;49:627-69. [Crossref] [PubMed]
  25. Howell JM, Eddy OL, Lukens TW, et al. Clinical policy: Critical issues in the evaluation and management of emergency department patients with suspected appendicitis. Ann Emerg Med 2010;55:71-116. [Crossref] [PubMed]
  26. Jagoda AS, Bazarian JJ, Bruns JJ Jr, et al. Clinical policy: neuroimaging and decisionmaking in adult mild traumatic brain injury in the acute setting. Ann Emerg Med 2008;52:714-48. [Crossref] [PubMed]
  27. R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2025. Available online: https://www.r-project.org/
  28. Godwin SA, Burton JH, Gerardo CJ, et al. Clinical policy: procedural sedation and analgesia in the emergency department. Ann Emerg Med 2014;63:247-58.e18. [Crossref] [PubMed]
  29. Huff JS, Melnick ER, Tomaszewski CA, et al. Clinical policy: critical issues in the evaluation and management of adult patients presenting to the emergency department with seizures. Ann Emerg Med 2014;63:437-47.e15. [Crossref] [PubMed]
  30. Carpenter CR, Morrill DM, Sundberg E, et al. Nothing about me without me: GRACE-fully partnering with patients to derive clinical practice guidelines. Acad Emerg Med 2023;30:603-5. [Crossref] [PubMed]
  31. Venkatesh AK, Savage D, Sandefur B, et al. Systematic review of emergency medicine clinical practice guidelines: Implications for research and policy. PLoS One 2017;12:e0178456. [Crossref] [PubMed]
  32. Tzeel A. Clinical practice guidelines and medical malpractice. Physician Exec 2002;28:36-9.
  33. San Miguel CE. Clinical Practice Guidelines and Medical Malpractice Risk. Emerg Med Clin North Am 2025;43:155-61. [Crossref] [PubMed]
  34. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924-6. [Crossref] [PubMed]
  35. Sanabria AJ, Pardo-Hernandez H, Ballesteros M, et al. The UpPriority tool was developed to guide the prioritization of clinical guideline questions for updating. J Clin Epidemiol 2020;126:80-92. [Crossref] [PubMed]
doi: 10.21037/jphe-25-44
Cite this article as: Ferguson IT, Singh M, Upadhye S, Haukoos J, Carpenter CR. Evaluating the American College of Emergency Physicians’ clinical policy awareness and trustworthiness among academic physicians. J Public Health Emerg 2026;10:1.