Sunday, 13 October 2019

Soft tissue deformation tracking by means of an optimized fiducial marker layout with application to cancer tumors

Abstract

Objective

Interventional radiology methods have been adopted for intraoperative control of the surgical region of interest (ROI) in a wide range of minimally invasive procedures. One major obstacle that hinders the success of such procedures is the deformation of the ROI between preoperative imaging and the intraoperative situation. While fiducial marker (FM) tracking has been shown to be promising for following such deformations, determining the optimal placement of the FM in the ROI remains a significant challenge. The current study proposes a computational framework that addresses this problem by preoperatively optimizing the FM layout, thereby enabling accurate tracking of ROI deformations.

Methods

The proposed approach includes three main components: (1) creation of virtual deformation benchmarks, (2) a method for predicting intraoperative tissue deformation based on FM registration, and (3) FM layout optimization. To account for the large variety of potential ROI deformations, virtual benchmarks are created by applying a multitude of random force fields to the tumor surface in physically based simulations. The ROI deformation prediction is carried out by solving the inverse problem of finding the smoothest force field that leads to the observed FM displacements. Based on this formulation, a simulated annealing approach is employed to find the FM layout that produces the best prediction accuracy.
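
To make the optimization step concrete, the following minimal sketch shows a generic simulated annealing search over candidate marker sites. The helper names (candidate_sites, prediction_error, the number of markers and the cooling schedule) are illustrative assumptions, not the authors' implementation; prediction_error would internally solve the inverse problem against the virtual benchmarks and return, for example, the mean maximum ROI error.

```python
# Minimal sketch of a simulated-annealing layout search; all names are
# illustrative assumptions, not the paper's implementation.
import math
import random

def optimize_fm_layout(candidate_sites, prediction_error, n_markers=4,
                       n_iters=2000, t_start=1.0, t_end=1e-3, seed=0):
    """Search for a marker layout minimizing the benchmark prediction error."""
    rng = random.Random(seed)
    layout = rng.sample(candidate_sites, n_markers)        # initial random layout
    best, best_err = list(layout), prediction_error(layout)
    err = best_err
    for i in range(n_iters):
        # exponential cooling schedule
        t = t_start * (t_end / t_start) ** (i / max(n_iters - 1, 1))
        # propose a neighbor: swap one marker for an unused candidate site
        new_layout = list(layout)
        idx = rng.randrange(n_markers)
        new_layout[idx] = rng.choice(
            [s for s in candidate_sites if s not in new_layout])
        new_err = prediction_error(new_layout)
        # accept improvements always, worse layouts with Boltzmann probability
        if new_err < err or rng.random() < math.exp((err - new_err) / t):
            layout, err = new_layout, new_err
            if err < best_err:
                best, best_err = list(layout), err
    return best, best_err
```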

Results

The proposed approach is capable of finding an FM layout that outperforms the rationally chosen layouts by 40% in terms of ROI prediction accuracy. For a maximum induced displacement of 20 mm on the tumor surface, the average maximum error between the benchmarks and our FM-optimized predictions is about 1.72 mm, which falls within the typical resolution of ultrasound imaging.

Conclusions

The proposed framework can optimize FM layout to effectively reduce the errors in the intraoperative deformation prediction process, thus bridging the gap between preoperative imaging and intraoperative tissue deformation.

Some germinal roots of AI and their impact on Computer Assisted Radiology and Surgery (CARS)

A microsurgical robot research platform for robot-assisted microsurgery research and training

Abstract

Purpose

Ocular surgery, ear, nose and throat surgery and neurosurgery are typical types of microsurgery. A versatile training platform can assist microsurgical skills development and accelerate the uptake of robot-assisted microsurgery (RAMS). However, the currently available platforms are mainly designed for macro-scale minimally invasive surgery. There is a need to develop a dedicated microsurgical robot research platform for both research and clinical training.

Methods

A microsurgical robot research platform (MRRP) is introduced in this paper. The hardware system includes a slave robot with bimanual manipulators, two master controllers and a vision system. It is flexible enough to support multiple microsurgical tools. The software architecture is developed based on the Robot Operating System (ROS) and is extensible for high-level control. The selection of the master–slave mapping strategy was explored, and comparisons were made between different interfaces.
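
As an illustration of the master–slave mapping strategies mentioned above, the sketch below implements one common teleoperation scheme: clutched, incremental position mapping with motion scaling. It is a generic example under assumed names, not the MRRP's actual control code.

```python
import numpy as np

class ClutchedPositionMapper:
    """Incremental master-slave position mapping with motion scaling.

    While the clutch is engaged, master displacements are scaled and added
    to the slave pose; while disengaged, the master can be repositioned
    without moving the slave. Shown only as a generic illustration of the
    mapping strategies discussed above.
    """

    def __init__(self, scale=0.2):
        self.scale = scale            # e.g. 5:1 motion scaling for microsurgery
        self.engaged = False
        self._last_master = None

    def set_clutch(self, engaged, master_pos):
        self.engaged = engaged
        self._last_master = np.asarray(master_pos, dtype=float)

    def update(self, master_pos, slave_pos):
        """Return the new slave position for the current master position."""
        master_pos = np.asarray(master_pos, dtype=float)
        slave_pos = np.asarray(slave_pos, dtype=float)
        if not self.engaged or self._last_master is None:
            self._last_master = master_pos
            return slave_pos                     # clutch open: slave holds still
        delta = master_pos - self._last_master   # master increment since last tick
        self._last_master = master_pos
        return slave_pos + self.scale * delta    # scaled increment applied to slave
```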

Results

Experimental verification was conducted based on two microsurgical tasks for training evaluation, i.e. trajectory following and targeting. User study results indicated that the proposed hybrid interface is more effective than the traditional approach in terms of frequency of clutching, task completion time and ease of control.

Conclusion

Results indicated that the MRRP can be utilized for microsurgical skills training, since motion kinematic data and vision data can provide objective means of verification and scoring. The proposed system can further be used for verifying high-level control algorithms and task automation for RAMS research.

Objective classification of psychomotor laparoscopic skills of surgeons based on three different approaches

Abstract

Background

The determination of surgeons' psychomotor skills in minimally invasive surgery techniques is one of the major concerns of surgical training programs in several hospitals. Therefore, it is important to objectively assess and classify the level of experience of surgeons and residents during their training. The aim of this study was to investigate three classification methods for automatically establishing the level of surgical competence of surgeons based on their psychomotor laparoscopic skills.

Methods

A total of 43 participants, divided into an experienced group of ten expert surgeons (> 100 laparoscopic procedures performed) and a non-experienced group of 24 residents and nine medical students (< 10 laparoscopic procedures performed), performed three tasks in the EndoViS training system. Motion data of the instruments were captured with a video-tracking system built into the EndoViS simulator and analyzed using 13 motion analysis parameters (MAPs). Radial basis function networks (RBFNets), K-star (K*), and random forest (RF) were used to classify surgeons based on the MAPs' scores of all participants. The performance of the three classifiers was examined using hold-out and leave-one-out validation techniques.
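
A minimal sketch of the two validation schemes is given below, using scikit-learn's random forest as a stand-in classifier (K-star is a Weka algorithm and is not reproduced here). `X` is an assumed NumPy array of the 13 MAP scores per participant and `y` the assumed numeric expert/non-expert labels.

```python
# Illustrative sketch of hold-out and leave-one-out validation on MAP scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, train_test_split
from sklearn.metrics import accuracy_score

def evaluate(X, y, seed=0):
    # hold-out validation: single stratified train/test split
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    holdout_acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))

    # leave-one-out cross-validation: one participant held out per fold
    preds = np.empty_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = RandomForestClassifier(n_estimators=100, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        preds[test_idx] = clf.predict(X[test_idx])
    loo_acc = accuracy_score(y, preds)
    return holdout_acc, loo_acc
```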

Results

For all three tasks, the K-star method was superior in terms of accuracy and AUC in both validation techniques. The mean accuracy of the classifiers was 93.33% for K-star, 87.58% for RBFNets, and 84.85% for RF in hold-out validation, and 91.47% for K-star, 89.92% for RBFNets, and 83.72% for RF in leave-one-out cross-validation.

Conclusions

The three proposed methods demonstrated high performance in classifying laparoscopic surgeons according to their level of psychomotor skills. Together with motion analysis and three laparoscopic tasks of the Fundamentals of Laparoscopic Surgery program, these classifiers provide a means for objectively classifying the surgical competence of surgeons using existing laparoscopic box trainers.

Combining position-based dynamics and gradient vector flow for 4D mitral valve segmentation in TEE sequences

Abstract

Purpose

For planning and guidance of minimally invasive mitral valve repair procedures, 3D+t transesophageal echocardiography (TEE) sequences are acquired before and after the intervention. The valve is then visually and quantitatively assessed in selected phases. To enable a quantitative assessment of valve geometry and pathological properties in all heart phases, as well as the changes achieved through surgery, we aim to provide a new 4D segmentation method.

Methods

We propose a tracking-based approach combining gradient vector flow (GVF) and position-based dynamics (PBD). An open-state surface model of the valve is propagated through time to the closed state, attracted by the GVF field of the leaflet area. The PBD method ensures topological consistency during deformation. For evaluation, one expert in cardiac surgery annotated the closed-state leaflets in 10 TEE sequences of patients with normal and abnormal mitral valves, and defined the corresponding open-state models.
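
For readers unfamiliar with GVF, the sketch below shows the classical Xu–Prince iteration in 2D, which diffuses an edge-map gradient into a smooth attraction field. The paper applies the same idea in 3D to the leaflet region of the TEE volume, so this is only a conceptual illustration with assumed parameter values.

```python
# Minimal 2D gradient vector flow (GVF) sketch (Xu & Prince formulation).
import numpy as np
from scipy.ndimage import laplace

def gvf_2d(edge_map, mu=0.2, n_iters=200, dt=0.5):
    """Iteratively diffuse the edge-map gradient into a smooth vector field."""
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx ** 2 + fy ** 2                    # squared gradient magnitude
    u, v = fx.copy(), fy.copy()                 # initialize field with the gradient
    for _ in range(n_iters):
        # smoothness term (Laplacian) balanced against data fidelity near edges
        u = u + dt * (mu * laplace(u) - mag2 * (u - fx))
        v = v + dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v
```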

Results

The average point-to-surface distance between the manual annotations and the final tracked model was 1.00 mm ± 1.08 mm. Qualitatively, four cases were satisfactory, five passable and one unsatisfactory. Each sequence could be segmented in 2–6 min.

Conclusion

Our approach enables segmentation of the mitral valve in 4D TEE image data with normal and pathological valve closing behavior. With this method, in addition to quantification of the remaining orifice area, the shape and dimensions of the coaptation zone can be analyzed and considered for planning and assessment of the surgical result.

Deep learning for World Health Organization grades of pancreatic neuroendocrine tumors on contrast-enhanced magnetic resonance images: a preliminary study

Abstract

Purpose

The World Health Organization (WHO) grading system of pancreatic neuroendocrine tumor (PNET) plays an important role in clinical decision-making. The rarity of PNET often limits the radiological application of deep learning algorithms due to the low availability of radiological images. We investigated the feasibility of predicting WHO grades of PNET on contrast-enhanced magnetic resonance (MR) images using deep learning algorithms.

Materials and methods

Ninety-six patients with PNET underwent preoperative contrast-enhanced MR imaging. Fivefold cross-validation was used, in which five iterations of training and validation were performed. In each iteration, a convolutional neural network (CNN) was trained on the training set augmented with synthetic images generated by a generative adversarial network (GAN), and its performance was evaluated on the paired internal validation set. Finally, the trained CNNs from cross-validation and their averaged counterpart were separately assessed on another ten patients from a different external validation set.
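
The structure of that cross-validation loop is sketched below. `train_gan_and_synthesize` and `build_cnn` are hypothetical placeholders standing in for the paper's GAN-based augmentation and CNN, so this is only a skeleton of the procedure, not the authors' code.

```python
# Skeleton of the fivefold cross-validation with GAN-based augmentation.
# `train_gan_and_synthesize` and `build_cnn` are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(images, grades, train_gan_and_synthesize, build_cnn, seed=0):
    models, scores = [], []
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for train_idx, val_idx in skf.split(images, grades):
        x_tr, y_tr = images[train_idx], grades[train_idx]
        # augment the real training images with GAN-synthesized ones
        x_syn, y_syn = train_gan_and_synthesize(x_tr, y_tr)
        x_aug = np.concatenate([x_tr, x_syn])
        y_aug = np.concatenate([y_tr, y_syn])
        model = build_cnn()
        model.fit(x_aug, y_aug)
        # evaluate on the untouched internal validation fold
        scores.append(model.score(images[val_idx], grades[val_idx]))
        models.append(model)
    return models, scores
```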

Results

Averaged across the five cross-validation iterations, the CNN achieved an accuracy of 85.13% ± 0.44% and a micro-average AUC of 0.9117 ± 0.0053. On the external validation set, the accuracy of the five trained CNNs ranged from 79.08% to 82.35%, and the micro-average AUC ranged from 0.8825 to 0.8932. The averaged CNN achieved an accuracy of 81.05% and a micro-average AUC of 0.8847.
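
For reference, one common way to obtain a micro-average AUC for a multi-class problem such as WHO grading is to binarize the labels and pool all class/score pairs, as in the short sketch below; `y_true` and `y_prob` are assumed inputs, and this is not necessarily the exact computation used in the paper.

```python
# Micro-average AUC by binarizing the multi-class labels and pooling scores.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def micro_average_auc(y_true, y_prob, classes):
    y_bin = label_binarize(y_true, classes=classes)   # shape (n_samples, n_classes)
    return roc_auc_score(y_bin.ravel(), np.asarray(y_prob).ravel())

# hypothetical usage: micro_average_auc(grades, cnn_probabilities, classes=[1, 2, 3])
```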

Conclusion

Synthetic images generated by a GAN can be used to alleviate the difficulty of collecting radiological images for an uncommon disease such as PNET. With the help of the GAN, the CNN showed the potential to predict WHO grades of PNET on contrast-enhanced MR images.

Memory-efficient 2.5D convolutional transformer networks for multi-modal deformable registration with weak label supervision applied to whole-heart CT and MRI scans

Abstract

Purpose 

Despite their potential for improvement through supervision, deep learning-based registration approaches are difficult to train for large deformations in 3D scans due to excessive memory requirements.

Methods 

We propose a new 2.5D convolutional transformer architecture that enables us to learn a memory-efficient, weakly supervised deep learning model for multi-modal image registration. Furthermore, we are, to our knowledge, the first to integrate a volume change control term into the loss function of a deep learning-based registration method to penalize foldings occurring inside the deformation field.
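
A common way to express such a folding penalty is through the Jacobian determinant of the deformation; the sketch below shows this formulation in NumPy for a dense 3D displacement field. It is one standard choice, not necessarily the exact volume change control term used by the authors, and in a training loop it would be written with differentiable tensor operations instead.

```python
import numpy as np

def folding_penalty(disp):
    """Penalty on negative Jacobian determinants of a 3D displacement field.

    disp has shape (3, D, H, W) holding the displacement components. The
    deformation is phi(x) = x + u(x), so its Jacobian is I + grad(u); voxels
    where det < 0 are folded and contribute to the penalty.
    """
    # grads[i][j] = d u_i / d x_j, approximated with central differences
    grads = [np.gradient(disp[i]) for i in range(3)]
    # build the per-voxel Jacobian J = I + grad(u)
    jac = np.zeros(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = (i == j) + grads[i][j]
    det = np.linalg.det(jac)
    # hinge on negative determinants: only folded voxels contribute
    return np.mean(np.maximum(0.0, -det))
```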

Results 

Our approach succeeds at learning large deformations across multi-modal images. We evaluate it on 100 pair-wise registrations of whole-heart CT and MRI scans and demonstrate considerably higher Dice scores (0.74) compared to a state-of-the-art unsupervised discrete registration framework (deeds, Dice 0.71).

Conclusion 

Our proposed memory-efficient registration method performs better than state-of-the-art conventional registration methods. By using a volume change control term in the loss function, the number of foldings occurring in new registration cases can be considerably reduced.

Touchless scanner control to support MRI-guided interventions

Abstract

Purpose

MRI-guided interventions allow minimally invasive, radiation-free treatment but rely on real-time image data and free slice positioning. Interventional interaction with the data and the MRI scanner is cumbersome due to the diagnostic focus of current systems, confined space and sterile conditions.

Methods

We present a touchless, hand-gesture-based interaction concept to control functions of the MRI scanner typically used during MRI-guided interventions. The system consists of a hand gesture sensor customised for MRI compatibility and a specialised UI that was developed based on clinical needs. A user study with 10 radiologists was performed to compare the gesture interaction concept and its components to task delegation—the prevalent method in clinical practice.

Results

Both methods performed comparably in terms of task duration and subjective workload. Subjective performance with gesture input was perceived as worse compared to task delegation, but was rated acceptable in terms of usability while task delegation was not.

Conclusion

This work contributes by (1) providing access to relevant functions on an MRI scanner during percutaneous interventions in a (2) suitable way for sterile human–computer interaction. The introduced concept removes indirect interaction with the scanner via an assistant, which leads to comparable subjective workload and task completion times while showing higher perceived usability.

PET/CT-guided biopsy with respiratory motion correction

Abstract

Purpose

Given the ability of positron emission tomography (PET) imaging to localize malignancies in heterogeneous tumors and in tumors that lack an X-ray computed tomography (CT) correlate, combined PET/CT-guided biopsy may improve the diagnostic yield of biopsies. However, PET and CT images are naturally susceptible to problems due to respiratory motion, leading to imprecise tumor localization and shape distortion. To facilitate PET/CT-guided needle biopsy, we developed and investigated the feasibility of a workflow that brings PET image guidance into the interventional CT suite while accounting for respiratory motion.

Methods

The performance of PET/CT respiratory motion correction using the registered and summed phases (RASP) method was evaluated through computer simulations using the mathematical 4D extended cardiac-torso (XCAT) phantom, with motion simulated from real respiratory traces. The performance of the PET/CT-guided biopsy procedure was evaluated on a physical anthropomorphic phantom. Vials containing the radiotracer 18F-fluorodeoxyglucose were placed within the physical phantom thorax as biopsy targets. We measured the average distance between the target center and the simulated biopsy location over multiple trials to evaluate biopsy localization accuracy.
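
Conceptually, RASP warps each respiratory-gated PET frame to a reference phase and then combines the aligned frames, which suppresses motion blur while keeping the counts from all gates. The sketch below captures only this idea; `register_to_reference` is a hypothetical placeholder for the actual registration step used in practice.

```python
# Conceptual sketch of the registered and summed phases (RASP) idea.
import numpy as np

def rasp_combine(gated_frames, reference_index, register_to_reference):
    reference = gated_frames[reference_index]
    aligned = []
    for k, frame in enumerate(gated_frames):
        if k == reference_index:
            aligned.append(frame)
        else:
            # warp this respiratory gate into the reference phase
            aligned.append(register_to_reference(frame, reference))
    # combining N aligned gates lowers the noise roughly by 1/sqrt(N)
    return np.mean(aligned, axis=0)
```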

Results

The computer simulation results showed that the RASP method generated PET images with a significantly reduced noise of 0.10 ± 0.01 standardized uptake value (SUV), compared with an end-of-expiration image noise of 0.34 ± 0.04 SUV. Respiratory motion increased the apparent liver lesion size from 5.4 ± 1.1 to 35.3 ± 3.0 cc; the RASP algorithm reduced this to 15.7 ± 3.7 cc. The distances between the centroids of the static image lesion and two moving lesions in the liver and lung, when reconstructed with the RASP algorithm, were 0.83 ± 0.72 mm and 0.42 ± 0.72 mm; for ungated imaging, these values increased to 3.48 ± 1.45 mm and 2.5 ± 0.12 mm, respectively. For the ungated imaging, this increased to 1.99 ± 1.72 mm. In addition, the lesion activity estimation (e.g., SUV) was accurate and constant for images reconstructed using the RASP algorithm, whereas large activity biases and variations (± 50%) were observed for lesions in the ungated images. The physical phantom studies demonstrated a biopsy needle localization error of 2.9 ± 0.9 mm from CT. Combined with the localization errors due to respiration for the PET images from the simulations, the overall estimated lesion localization error would be 3.08 mm for PET-guided biopsies using RASP images and 3.64 mm when using ungated PET images; in other words, RASP reduced the localization error by approximately 0.6 mm. The combined error analysis showed that replacing the standard end-of-expiration images with the proposed RASP method in the PET/CT-guided biopsy workflow yields comparable lesion localization accuracy and reduced image noise.

Conclusion

The RASP method can produce PET images with reduced noise, attenuation artifacts and respiratory motion, resulting in more accurate lesion localization. By testing the PET/CT-guided biopsy workflow using computer simulations and physical phantoms with respiratory motion, we demonstrated that a guided biopsy procedure with the RASP method can benefit from improved PET image quality due to noise reduction, without compromising the accuracy of lesion localization.

CIGuide: in situ augmented reality laser guidance

Abstract

Purpose 

A robotic intraoperative laser guidance system with hybrid optic-magnetic tracking for skull base surgery is presented. It provides in situ augmented reality guidance for microscopic interventions at the lateral skull base with minimal mental and workload overhead, allowing surgeons to work without a monitor or dedicated pointing tools.

Methods 

Three components were developed: a registration tool (Rhinospider), a hybrid magneto-optic-tracked robotic feedback control scheme and a modified robotic end-effector. Rhinospider optimizes registration of the patient and preoperative CT data by using magnetic tracking to exclude user errors in fiducial localization. The hybrid controller uses an integrated microscope HD camera for robotic control, with a guidance beam shining on a dual-plate setup to avoid magnetic field distortions. A robotic needle insertion platform (iSYS Medizintechnik GmbH, Austria) was modified to position a laser beam with high precision in a surgical scene, compatible with microscopic surgery.
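
For context, fiducial-based patient-to-CT registration is typically solved as a paired-point rigid registration; the sketch below shows the standard SVD-based (Arun) solution together with the fiducial registration error. It is a generic illustration, not the exact algorithm used by Rhinospider.

```python
# Standard paired-point rigid registration (Arun's SVD method); a generic
# illustration of fiducial-based patient-to-CT registration.
import numpy as np

def rigid_registration(ct_fiducials, patient_fiducials):
    """Return rotation R and translation t mapping CT fiducials to patient space."""
    p = np.asarray(ct_fiducials, dtype=float)       # shape (N, 3)
    q = np.asarray(patient_fiducials, dtype=float)  # shape (N, 3)
    p_c, q_c = p.mean(axis=0), q.mean(axis=0)       # centroids
    h = (p - p_c).T @ (q - q_c)                     # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q_c - r @ p_c
    return r, t

def fiducial_registration_error(r, t, ct_fiducials, patient_fiducials):
    """Root-mean-square distance between mapped CT fiducials and patient fiducials."""
    mapped = (r @ np.asarray(ct_fiducials, dtype=float).T).T + t
    diff = mapped - np.asarray(patient_fiducials, dtype=float)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```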

Results 

System accuracy was evaluated quantitatively at various target positions on a phantom; the accuracy found was 1.2 mm ± 0.5 mm, with errors primarily due to magnetic tracking. This application accuracy seems suitable for most surgical procedures in the lateral skull base. The system was also evaluated qualitatively during a mastoidectomy on an anatomic head specimen and was judged useful by the surgeon.

Conclusion 

A hybrid robotic laser guidance system with direct visual feedback is proposed for navigated drilling and intraoperative structure localization. The system provides visual cues directly on or in the patient anatomy, reducing standard limitations of AR visualizations such as depth perception. The custom-built end-effector for the iSYS robot is transparent to the use of surgical microscopes and compatible with magnetic tracking. The cadaver experiment showed that the guidance was accurate and that the end-effector is unobtrusive. This laser guidance has the potential to aid the surgeon in finding the optimal mastoidectomy trajectory in more difficult interventions.
