An analysis of previous studies finds that technology-enhanced simulation training in health professions education, compared with no intervention, is associated with large effects on outcomes of knowledge, skills, and behaviors, and with moderate effects on patient-related outcomes, according to an article in the September 7 issue of JAMA, a medical education theme issue.
“Responding to changing practice environments requires new models for training health care professionals. Technology-enhanced simulation is one possible solution,” according to background information in the article. The authors defined technology broadly as materials and devices created or adapted to solve practical problems. “Simulation technologies encompass diverse products including computer-based virtual reality simulators, high-fidelity and static mannequins, plastic models, live animals, inert animal products, and human cadavers.”
The researchers add that although technology-enhanced simulation has widespread appeal and many assert its educational usefulness, such assertions have lacked empirical support. “Despite the large volume of research on simulation, its effectiveness remains uncertain in part because of the difficulty in interpreting research results one study at a time.”
David A. Cook, M.D., M.H.P.E., of the Mayo Clinic College of Medicine, Rochester, Minn., and colleagues conducted a review and meta-analysis to identify and quantitatively summarize studies of technology-enhanced simulation involving health professions learners. The authors searched the literature for original research evaluating simulation, compared with no intervention, for training practicing and student physicians, nurses, dentists, and other health care professionals. The search identified 609 eligible studies enrolling 35,226 trainees; of these, 137 were randomized studies, 67 were nonrandomized studies with 2 or more groups, and 405 used a single-group pretest-posttest design.
The researchers found that technology-enhanced simulation, whether compared with no intervention or added to traditional practice, was associated with better learning outcomes with only rare exceptions. Pooled effect sizes were large for knowledge, skills, and behaviors; effect sizes for patient-related outcomes were smaller but still moderate. In nearly all cases, the magnitude of association varied substantially across individual studies (high inconsistency), and subgroup analyses exploring differences in simulation design largely failed to explain this variation.
“The important questions for this field are those that clarify when and how to use simulation most effectively and cost-efficiently. Unfortunately, the evidence synthesized herein largely fails to inform the design of future simulation activities. Subgroup analyses weakly suggested a benefit to extending training beyond 1 day and using a mastery model but otherwise did not identify consistent associations involving instructional designs. However, between-study (rather than within-study) comparisons are an inefficient research method. Thus, theory-based comparisons between different technology-enhanced simulation designs (simulation vs. simulation studies) that minimize bias, achieve appropriate power, and avoid confounding, as well as rigorous qualitative studies, are necessary to clarify how and when to effectively use technology-enhanced simulations for training health care professionals,” the authors conclude.