Friday 14 December 2018

Evaluation of a Measurement System to Assess ICU Team Performance

by Dietz, Aaron S.; Salas, Eduardo; Pronovost, Peter J.; Jentsch, Florian; Wyskiel, Rhonda; Mendez-Tellez, Pedro Alejandro; Dwyer, Cynthia; Rosen, Michael A.

Objective: Measuring teamwork is essential in critical care, but few observational measurement systems exist for this environment. The objective of this study was to evaluate the reliability and validity of a behavioral marker system for measuring teamwork in ICUs.
Design: Instances of teamwork were observed by two raters across three tasks: multidisciplinary rounds, nurse-to-nurse handoffs, and retrospective videos of medical students and instructors performing simulated codes. Intraclass correlation coefficients were calculated to assess interrater reliability. Generalizability theory was applied to estimate systematic sources of variance across the three observed team tasks, including variance associated with instances of teamwork, raters, competencies, and tasks.
Setting: A 15-bed surgical ICU at a large academic hospital.
Subjects: One hundred thirty-eight instances of teamwork were observed: 88 multidisciplinary rounds, 25 nurse-to-nurse handoffs, and 25 simulated code exercises.
Interventions: No intervention was conducted for this study.
Measurements and Main Results: Interrater reliability for each overall task ranged from good to excellent (intraclass correlation coefficient, 0.64–0.81), although for specific competencies there were seven cases of fair reliability and one case of poor reliability. Findings from the generalizability studies provided evidence that the marker system dependably distinguished among teamwork competencies, supporting its construct validity.
Conclusions: Teamwork in critical care is complex, which complicates the judgment of behaviors. The marker system showed great potential for differentiating competencies, but findings also revealed that more context-specific guidance may be needed to improve rater reliability.
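
The interrater reliability figures quoted above are intraclass correlation coefficients for two raters. The abstract does not say which ICC form was used, so purely as an illustration, the sketch below computes one common choice, the Shrout-Fleiss ICC(2,1) (two-way random effects, absolute agreement, single rater) with NumPy; the rating matrix is invented for the example.

import numpy as np

def icc_2_1(ratings):
    """Shrout-Fleiss ICC(2,1): two-way random-effects, absolute-agreement,
    single-rater intraclass correlation for an (n subjects x k raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)    # between subjects
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)    # between raters
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Made-up scores: six observed teamwork instances rated by two raters on a 1-5 scale
scores = np.array([[4, 5], [3, 3], [5, 5], [2, 3], [4, 4], [3, 4]], dtype=float)
print(round(icc_2_1(scores), 2))  # about 0.76 with these illustrative numbers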
