Abstract
Medical errors are the third leading cause of death, after heart disease and cancer, causing approximately 250,000 deaths annually. Approximately 40% of these errors occur in the operating room (OR), and 50% of the resulting complications are avoidable. Most of these errors are related to communication failures. Effective communication among team members during cardiac surgery is paramount for ensuring patient safety and successful outcomes. Clear and concise communication among members of the surgical team facilitates the coordination of complex tasks, enhances situational awareness, and allows for swift responses to unexpected developments [1].
Behavioral data analytics leverages multimodal data and artificial intelligence (AI) methods and can offer valuable insights into communication cues during cardiac surgery. By integrating data sources such as audio, video, and physiology, AI algorithms can capture team members' tone of voice, language patterns, and body language, ultimately distinguishing between positive and negative interprofessional communication behaviors. Speech signals can serve as indicators of both positive and negative interprofessional communication cues. A recent study that analyzed conversation data during team interaction showed that team performance can be predicted from linguistic and acoustic features [2]. In another study, speech features extracted from conversations between team members were significantly correlated with team collaboration effort [3]. Identifying the vocal behaviors that affect team functioning on a moment-to-moment basis can yield a unique set of materials for personalized team training (e.g., video playback [4]) aimed at proactively addressing communication challenges, optimizing collaboration, and ensuring safer, more efficient cardiac surgery procedures. This paper investigates acoustic measures extracted from team members' speech during two phases of simulated cardiac surgery operations, representing good and poor interprofessional communication behaviors, and examines differences in these measures between the two phases.