AI Summary • Published on Apr 28, 2026
Surgical training often relies on generic virtual reality (VR) simulations that do not adapt to individual patient cases, leading to limited engagement and underutilization within training programs. While multimodal medical imaging (CT and MRI) is crucial for understanding patient-specific anatomy in complex procedures like spine surgery, creating detailed 3D models from these images for planning or education is traditionally time-consuming, requires extensive technical expertise, and involves multiple software packages. This significantly restricts the routine clinical and educational application of patient-specific 3D anatomical models, despite their potential to improve surgical comprehension and planning.
This study developed an integrated framework for creating fast, highly automated, high-fidelity patient-specific 3D spinal models from CT and MRI data for VR surgical simulation. The core of the methodology was an AI-based image analysis pipeline that automatically performed multimodal image fusion (CT–MRI registration) and segmentation of the relevant anatomical structures (vertebrae, intervertebral discs, neural elements), combining a custom deep learning architecture (VertDetect) with established open-source tools (nnU-Net, SCT). Some soft tissue structures, such as nerve roots and the ligamentum flavum, still required manual contouring. The generated 3D models were then imported into a custom VR simulation module in which users performed interactive spinal decompression procedures (laminectomy, disc resection, foraminotomy) in a virtual operating room environment. The simulation featured real-time mesh modification during tissue removal, auditory feedback, and a visual/auditory alarm triggered by proximity to neural structures; it also supported post-procedural assessment and collaborative interaction. Pipeline performance was evaluated quantitatively using the Dice Similarity Coefficient (DSC) for segmentation accuracy, the Target Registration Error (TRE) for registration accuracy, and computational time. Qualitative feedback was gathered from two staff surgeons and three trainees through semi-structured interviews assessing the system's impact on medical practice, usability, learner confidence, and engagement.
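The proximity alarm described above reduces, at its simplest, to a distance test between the virtual instrument and the segmented neural-element mesh. As a hedged illustration (not the authors' implementation), here is a minimal Python sketch; the point-tip assumption, the function name, and the 2.0 mm threshold are all invented for the example:

```python
import numpy as np

# Hypothetical threshold: the summary does not report the distance at
# which the proximity alarm fires, so 2.0 mm is an assumed value.
ALARM_THRESHOLD_MM = 2.0

def neural_proximity_alarm(tool_tip_mm: np.ndarray,
                           neural_vertices_mm: np.ndarray,
                           threshold_mm: float = ALARM_THRESHOLD_MM) -> bool:
    """Return True when the instrument tip is within `threshold_mm` of
    any vertex of the neural-element mesh (all coordinates in mm)."""
    distances = np.linalg.norm(neural_vertices_mm - tool_tip_mm, axis=1)
    return bool(distances.min() < threshold_mm)

# Toy usage: a tip 1.5 mm from the nearest neural vertex trips the alarm.
vertices = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
print(neural_proximity_alarm(np.array([1.5, 0.0, 0.0]), vertices))  # True
```

A real-time simulator would likely replace the brute-force scan with a spatial index (a k-d tree or a precomputed signed distance field) and test the full instrument geometry rather than a single point.

The two quantitative metrics are standard: DSC = 2|A ∩ B| / (|A| + |B|) measures the volume overlap between a predicted and a reference segmentation, and TRE is the mean residual distance between corresponding anatomical landmarks after registration. A minimal NumPy sketch of both, using synthetic data in place of the study's images:

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal shape,
    e.g. a network's vertebra label map vs. a manual reference."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:                     # both masks empty: perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def target_registration_error(warped_pts_mm: np.ndarray,
                              fixed_pts_mm: np.ndarray) -> float:
    """Mean TRE in mm: average distance between landmarks mapped through
    the CT-to-MRI transform and the corresponding fixed-image landmarks.
    Both inputs are (N, 3) arrays of physical coordinates."""
    return float(np.linalg.norm(warped_pts_mm - fixed_pts_mm, axis=1).mean())

# Synthetic demonstration (illustration only; no study data involved).
rng = np.random.default_rng(0)
reference = rng.random((32, 32, 32)) > 0.5
prediction = reference.copy()
prediction[:2] = ~prediction[:2]       # corrupt two slices of the mask
print(f"DSC: {dice_similarity(prediction, reference):.3f}")

landmarks = rng.random((10, 3)) * 100.0             # mm coordinates
warped = landmarks + rng.normal(0.0, 1.0, (10, 3))  # ~1 mm residual error
print(f"TRE: {target_registration_error(warped, landmarks):.2f} mm")
```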
The framework produced high-fidelity patient-specific 3D models for all 15 included cases, with a mean end-to-end model creation time of approximately 2.5 minutes per case. Quantitative evaluation showed high image-processing accuracy: vertebral bone segmentation reached a mean DSC of 0.95 ± 0.03, intervertebral disc segmentation 0.87 ± 0.04, and neural element segmentation 0.92 ± 0.01. After deformable registration, the mean TRE was 1.73 ± 0.42 mm across patients. Qualitative feedback from the staff surgeons and trainees was consistently positive, highlighting improved spatial understanding of spinal anatomy and pathology. Participants reported increased procedural confidence, a better ability to visualize critical anatomical structures, and high perceived educational value, especially for rehearsing real patient cases. The immersive experience and the ability to practice procedures step by step were particularly appreciated, and participants strongly supported integrating the VR simulation into existing clinical workflows for both surgical training and preoperative planning.
This platform sharply reduces the time and cost of creating patient-specific 3D models for surgical simulation, making immersive VR-based training feasible and scalable for routine integration into surgical education and preoperative planning. By providing an interactive, immersive, and risk-free environment for rehearsing spinal decompression procedures on actual patient data, the system strengthens anatomical understanding, decision-making skills, and procedural confidence among trainees. The technology has substantial potential to improve patient outcomes, optimize operating room efficiency, and foster more proficient surgeons by offering consistent, reproducible, and collaborative learning experiences that bridge the gap between theoretical knowledge and practical application. Future work aims to further automate soft tissue segmentation, validate the framework in larger and more diverse multicenter cohorts, and incorporate objective performance metrics.