Methods for autoregistration of arthroscopic video images to preoperative models and devices thereof
Inventors
Quist, Brian William • Netravali, Nathan Anil • Torrie, Paul Alexander
Assignees
Smith and Nephew Orthopaedics AG • Smith and Nephew Asia Pacific Pte Ltd • Smith and Nephew Inc
Publication Number
US-12343218-B2
Publication Date
2025-07-01
Expiration Date
2041-06-16
Interested in licensing this patent?
MTEC can help explore whether this patent might be available for licensing for your application.
Abstract
Surgical methods and devices that facilitate registration of arthroscopic video to preoperative models are disclosed. With this technology, a machine learning model is applied to diagnostic video data captured via an arthroscope to identify an anatomical structure. An anatomical structure in a three-dimensional (3D) anatomical model is registered to the anatomical structure represented in the diagnostic video data. The 3D anatomical model is generated from preoperative image data. The anatomical structure is then tracked intraoperatively based on the registration and without requiring fixation of fiducial markers to the patient anatomy. A simulated projected view of the registered anatomical structure is generated from the 3D anatomical model based on a determined orientation of the arthroscope during capture of intraoperative video data. The simulated projected view is scaled and oriented based on one or more landmark features of the anatomical structure extracted from the intraoperative video data.
Core Innovation
The invention relates to surgical methods, systems, and devices that enable automatic registration of arthroscopic video images to preoperative three-dimensional anatomical models without requiring the fixation of fiducial markers to patient anatomy. By applying a machine learning model to diagnostic video data captured via an arthroscope during a diagnostic review phase, anatomical structures depicted in the arthroscopic video can be identified and registered to corresponding anatomical structures represented in a 3D model generated from preoperative image data. This registration facilitates intraoperative tracking of the anatomical structures and generation of simulated projected views scaled and oriented based on extracted landmark features from the intraoperative video data.
The technology further supports generating overlays comprising the scaled and oriented simulated projected views that can be merged with intraoperative video data and presented on display devices such as mixed reality headsets. The system can determine the stage of the surgical procedure based on a surgical plan and the identified anatomical structures, enabling presentation of guidance extracted from the surgical plan, including textual directions and visual indications of subsequent anatomical structures to be addressed during surgery.
Additionally, the system can handle soft tissue anatomical structures, morphing three-dimensional models through additional machine learning models based on soft tissue size and position determined from intraoperative video data. The invention also contemplates training the machine learning model on annotated video data for improved accuracy, outputting to projectors based on user eye position, and weighting portions of the 3D model to generate simulated views including relevant subsets based on these weightings.
Claims Coverage
The patent includes independent claims directed to a method for registration of arthroscopic video to preoperative models and a surgical computing device configured to perform such registration, comprising several inventive features.
Applying machine learning to arthroscopic video to identify anatomical structures
A machine learning model is applied to diagnostic video data captured via an arthroscope to identify anatomical structures represented in the video data.
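The patent does not disclose a specific model architecture. As a minimal, purely illustrative sketch, per-frame identification can be viewed as scoring a frame's feature vector against a set of anatomical classes; the label set, weights, and feature extraction here are all hypothetical stand-ins for the trained model the claims describe.

```python
import numpy as np

# Hypothetical label set; the patent does not enumerate specific classes.
LABELS = ["femoral_condyle", "tibial_plateau", "meniscus", "acl"]

def identify_structure(frame_features, W, b):
    """Score a frame's feature vector against each anatomical class and
    return the most likely label (a stand-in for the trained model)."""
    logits = frame_features @ W + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return LABELS[int(np.argmax(probs))], probs

rng = np.random.default_rng(1)
W = rng.normal(size=(16, len(LABELS)))   # placeholder trained weights
b = np.zeros(len(LABELS))
label, probs = identify_structure(rng.normal(size=16), W, b)
```

In practice the features would come from a deep network applied to each arthroscopic frame; the softmax scoring above only illustrates the classification step.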
Registration of anatomical structures in preoperative 3D models to video data
Registering one of a plurality of anatomical structures in a three-dimensional anatomical model, generated from preoperative image data, to the corresponding anatomical structure represented in the diagnostic video data.
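The claims do not fix a registration algorithm. Assuming corresponding landmark points have already been extracted from the model and the video-derived reconstruction, a standard rigid alignment can be computed with the Kabsch (orthogonal Procrustes) method; this is one conventional choice, not necessarily the patented approach.

```python
import numpy as np

def rigid_register(model_pts, video_pts):
    """Kabsch alignment: find rotation R and translation t mapping
    model_pts onto video_pts (both Nx3, assumed point-to-point
    correspondences)."""
    mu_m = model_pts.mean(axis=0)
    mu_v = video_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (video_pts - mu_v)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                      # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_v - R @ mu_m
    return R, t

# Toy check: recover a known rotation about z and a translation.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(0).normal(size=(8, 3))
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
```

With noisy or partial correspondences, a robust variant (e.g. RANSAC over correspondences, or ICP when correspondences are unknown) would typically replace this closed-form step.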
Intraoperative tracking based on the registration
Tracking the anatomical structure intraoperatively based on the achieved registration.
Generation and output of simulated projected views
Generation of a simulated projected view of the registered anatomical structure from the 3D anatomical model based on a determined orientation of the arthroscope during intraoperative video capture, and outputting the simulated view scaled and oriented based on landmark features extracted from the intraoperative video.
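A sketch of this claimed step, under two assumptions the patent does not make explicit: a pinhole camera model for the arthroscope, and two landmark correspondences for the similarity (scale + rotation) alignment of the projected view to the video frame.

```python
import numpy as np

def project_points(pts3d, R, t, f):
    """Pinhole projection of 3D model points into the arthroscope's
    image plane, given camera rotation R, translation t, focal length f
    (a simplification; the patent does not fix a camera model)."""
    cam = pts3d @ R.T + t
    return f * cam[:, :2] / cam[:, 2:3]

def scale_orient(proj, lm_proj, lm_video):
    """Similarity-align the projected view so two projected landmarks
    coincide with the same landmarks detected in the video frame."""
    v_p = lm_proj[1] - lm_proj[0]
    v_v = lm_video[1] - lm_video[0]
    s = np.linalg.norm(v_v) / np.linalg.norm(v_p)          # scale
    a = np.arctan2(v_v[1], v_v[0]) - np.arctan2(v_p[1], v_p[0])  # rotation
    Rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return (proj - lm_proj[0]) @ (s * Rot).T + lm_video[0]

pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0]])
proj = project_points(pts, np.eye(3), np.zeros(3), f=800.0)
video_lms = np.array([[10.0, 20.0], [90.0, 100.0]])
aligned = scale_orient(proj, proj[:2], video_lms)
```

With more than two landmarks, a least-squares similarity fit would be the natural generalization.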
Overlay generation and merging with intraoperative video
Generating an overlay including the scaled and oriented simulated projected view, merging it with intraoperative video data based on the registration, and outputting the merged video data to a display device.
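The merge itself is conventional image compositing. A minimal sketch, assuming the simulated projected view has been rasterized into an overlay image with a mask of valid pixels; the patent does not prescribe a particular blend.

```python
import numpy as np

def merge_overlay(frame, overlay, mask, alpha=0.5):
    """Alpha-blend a rendered overlay into the intraoperative frame
    wherever the overlay mask is set."""
    out = frame.astype(np.float32)
    m = mask[..., None].astype(np.float32) * alpha
    out = (1.0 - m) * out + m * overlay.astype(np.float32)
    return out.astype(np.uint8)

frame = np.zeros((4, 4, 3), dtype=np.uint8)          # dummy video frame
overlay = np.full((4, 4, 3), 200, dtype=np.uint8)    # rendered view
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                                # region covered by the view
merged = merge_overlay(frame, overlay, mask, alpha=0.5)
```

The same compositing applies whether the output goes to a monitor or, with headset pose taken into account, to a mixed reality display.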
Surgical procedure state determination and guidance overlay
Determining a stage of the surgical procedure based on the surgical plan and identification of anatomical structures, and generating an overlay with guidance extracted from the surgical plan including textual directions or visual indications for subsequent tasks.
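One way to picture this claim is a surgical plan modeled as an ordered list of stages, each with the anatomical structures expected to be in view; the current stage is the first whose expected structures intersect what the model has identified. All stage and structure names here are illustrative, not taken from the patent.

```python
# Hypothetical surgical plan: ordered (stage, expected structures) pairs.
PLAN = [
    ("diagnostic_review", {"femoral_condyle", "tibial_plateau"}),
    ("meniscal_repair", {"meniscus"}),
    ("acl_reconstruction", {"acl"}),
]

def determine_stage(identified):
    """Return the first planned stage whose expected structures overlap
    the set of structures identified in the video so far."""
    for stage, expected in PLAN:
        if expected & identified:
            return stage
    return None

stage = determine_stage({"meniscus"})
```

Guidance text and indications of the next structure to address would then be looked up from the matched plan entry.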
Annotation and indication of anatomical points in overlays
Obtaining an annotated version of the 3D anatomical model or preoperative image data indicating anatomical points or landmarks, and including indications of these points in the generated overlay.
Use of mixed reality headset and tracking for overlay generation
Tracking position or orientation of a mixed reality headset and generating overlays by determining spatial and scale relationships between the arthroscope field of view and the mixed reality headset.
Training the machine learning model with annotated video data
Training the machine learning model based on additional video data comprising annotated representations of anatomical structures.
User eye position-based projection onto patient skin
Determining a user’s eye position and outputting the simulated projected view to a projector for projection onto patient skin accordingly.
Soft tissue morphing using additional machine learning models
Determining size and position of soft tissue portions from intraoperative video and applying another machine learning model to generate morphed states of soft tissue in the simulated projected view.
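As a stand-in for the second machine learning model the claim describes, soft-tissue morphing can be pictured as interpolating mesh vertices between a rest state and a deformed state according to the tissue size measured in the video; the actual patented model is not specified at this level.

```python
import numpy as np

def morph_soft_tissue(rest_verts, deformed_verts,
                      observed_size, rest_size, deformed_size):
    """Linearly interpolate mesh vertices between rest and deformed
    states according to the soft-tissue size measured intraoperatively
    (illustrative stand-in for a learned morphing model)."""
    w = (observed_size - rest_size) / (deformed_size - rest_size)
    w = float(np.clip(w, 0.0, 1.0))
    return (1.0 - w) * rest_verts + w * deformed_verts

rest = np.zeros((5, 3))        # placeholder rest-state vertices
deformed = np.ones((5, 3))     # placeholder deformed-state vertices
morphed = morph_soft_tissue(rest, deformed,
                            observed_size=1.5, rest_size=1.0, deformed_size=2.0)
```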
Weighting and selective inclusion of 3D model portions in simulated views
Generating weighting values for portions of the 3D anatomical model and using these to generate simulated projected views that include selected subsets of model portions.
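A minimal sketch of the selection step, assuming each model portion carries a relevance weight (however the system computes it) and only portions above a threshold are included in the simulated view; the portion names and weights are hypothetical.

```python
import numpy as np

# Hypothetical relevance weights for portions of the 3D anatomical model.
portions = ["femur", "tibia", "meniscus", "acl", "patella"]
weights = np.array([0.9, 0.8, 0.2, 0.95, 0.1])

def select_portions(names, w, threshold=0.5):
    """Keep only model portions whose weight meets the threshold."""
    return [n for n, wi in zip(names, w) if wi >= threshold]

selected = select_portions(portions, weights)
```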
The independent claims cover both the method and the apparatus for automatic registration of arthroscopic images to preoperative 3D anatomical models using machine learning, together with intraoperative tracking, simulated view generation, and overlay guidance to assist surgical workflows. Dependent features extend this coverage to soft tissue morphing, mixed reality integration, and user-specific projections.
Stated Advantages
Reduces risk of damage and complications by eliminating the need for physical fiducial markers affixed to patient anatomy for surgical tracking.
Improves efficiency and accuracy of surgical tracking by enabling automatic registration during a standard diagnostic phase of arthroscopic procedures.
Provides enhanced visualization through overlays and simulated views, facilitating intraoperative guidance and anatomical tracking.
Allows real-time tracking and morphing of soft tissue structures, enhancing the accuracy of anatomical representation during surgery.
Supports integration with mixed reality devices and projector-based displays to augment the surgeon's field of view and interaction with anatomical data.
Documented Applications
Arthroscopic surgical procedures involving joints such as knee, hip, and shoulder arthroplasties.
Arthroscopic procedures requiring intraoperative visualization and tracking of anatomical structures.
Procedures involving registration and alignment of anatomical structures based on preoperative imaging and intraoperative video.