Tuesday, 16 July 2019

Computer Assisted Radiology and Surgery

A markerless automatic deformable registration framework for augmented reality navigation of laparoscopic partial nephrectomy

Abstract

Purpose

Video see-through augmented reality (VST-AR) navigation for laparoscopic partial nephrectomy (LPN) can enhance surgeons' intraoperative perception by visualizing surgical targets and critical structures of the kidney tissue. Image registration is the main challenge in this procedure. Existing registration methods in laparoscopic navigation systems suffer from limitations such as manual alignment, invasive external marker fixation, reliance on external tracking devices with bulky sensors, and a lack of deformation compensation. To address these issues, we present a markerless automatic deformable registration framework for LPN VST-AR navigation.

Method

Dense stereo matching and 3D reconstruction, automatic segmentation and surface stitching are combined to obtain a larger, dense intraoperative point cloud of the renal surface. A coarse-to-fine deformable registration is then performed to automatically align the intraoperative point cloud with the preoperative model, using the iterative closest point algorithm followed by the coherent point drift algorithm. Kidney phantom experiments and in vivo experiments were performed to evaluate the accuracy and effectiveness of our approach.
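
The coarse-to-fine idea can be pictured with a minimal Python sketch (not the authors' implementation): a rigid ICP stage written in plain NumPy/SciPy, followed by a non-rigid coherent point drift refinement delegated to the third-party pycpd package, which is an assumed dependency here; the point clouds are assumed to be (N, 3) arrays.

    # Illustrative coarse-to-fine registration: rigid ICP followed by CPD.
    # Assumptions: point clouds are (N, 3) NumPy arrays; the optional fine
    # stage uses the third-party pycpd package (not part of the paper).
    import numpy as np
    from scipy.spatial import cKDTree


    def rigid_icp(source, target, iterations=50):
        """Coarse stage: iterative closest point with an SVD-based rigid fit."""
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iterations):
            # 1. Find the closest target point for every source point.
            _, idx = tree.query(src)
            matched = target[idx]
            # 2. Solve the best rigid transform (Kabsch / SVD).
            src_c, tgt_c = src - src.mean(0), matched - matched.mean(0)
            U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
            if np.linalg.det((U @ Vt).T) < 0:   # avoid reflections
                Vt[-1] *= -1
            R = (U @ Vt).T
            t = matched.mean(0) - R @ src.mean(0)
            # 3. Apply the update.
            src = src @ R.T + t
        return src


    def coarse_to_fine(intraop_cloud, preop_model):
        """Rigid ICP first, then a non-rigid CPD refinement (pycpd assumed)."""
        coarse = rigid_icp(intraop_cloud, preop_model)
        from pycpd import DeformableRegistration   # assumed dependency
        refined, _ = DeformableRegistration(X=preop_model, Y=coarse).register()
        return refined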

Results

The automatic segmentation achieved an average accuracy of 94.9%. The mean target registration error in the phantom experiments was 1.28 ± 0.68 mm (root mean square error). In vivo experiments showed that the tumor location was identified successfully by superimposing the tumor model on the laparoscopic view.

Conclusion

Experimental results demonstrated that the proposed framework can automatically and accurately overlay comprehensive preoperative models on deformable soft organs in a VST-AR manner, without extra intraoperative imaging modalities or external tracking devices, and indicated its potential for clinical use.

Automatic detection of intracranial aneurysm using LBP and Fourier descriptor in angiographic images

Abstract

Purpose

Intracranial aneurysms (IA) are abnormal dilatations of the arteries at the circle of Willis whose rupture can lead to catastrophic complications such as hemorrhagic stroke. The purpose of this work is to detect IA in 2D-DSA images. The proposed detection framework uses local binary patterns for the determination of initial aneurysm candidates and the generic Fourier descriptor (GFD) for false-positive removal.

Methods

The designed framework takes DSA images containing an IA as input and produces images in which the IA is clearly identified and localized. The multi-step approach proceeds as follows: in the first phase, initial aneurysm candidates are determined using uniform local binary patterns (LBPs), which are calculated from the images to characterize the texture content of the aneurysm and non-aneurysm classes. In the second phase, false positives are removed using a contour-based shape descriptor, the GFD.
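
A minimal sketch of the two phases is given below, assuming a 2D-DSA frame loaded as a NumPy array and scikit-image available; the LBP neighbourhood, bin counts and the simplified contour-based Fourier descriptor are illustrative choices rather than the authors' exact GFD formulation.

    # Illustrative two-phase pipeline: uniform-LBP texture features for
    # candidate screening, then a simple contour-based Fourier descriptor
    # for false-positive removal. Parameter values are placeholders.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from skimage.measure import find_contours

    P, R = 8, 1  # LBP neighbourhood (assumed)


    def lbp_histogram(patch):
        """Uniform LBP code histogram used as the texture feature of a patch."""
        codes = local_binary_pattern(patch, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist


    def contour_fourier_descriptor(mask, n_coeffs=16):
        """Simplified contour Fourier descriptor of a candidate region, made
        translation/scale invariant by dropping the DC term and normalising."""
        contour = max(find_contours(mask, 0.5), key=len)
        z = contour[:, 1] + 1j * contour[:, 0]
        spectrum = np.abs(np.fft.fft(z))
        return spectrum[1:n_coeffs + 1] / (spectrum[1] + 1e-12)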

Results

We demonstrated that the proposed detection method successfully recognizes the morphological features of intracranial aneurysms. The results showed excellent agreement between manual and automated detections. With the computerized IA detection framework, all aneurysms were correctly detected with zero false negatives and a low false-positive rate.

Conclusion

This study shows the potential of LBP and GFD as feature descriptors and paves the way for a whole-image analysis tool to predict the risk of rupture of intracranial aneurysms.

Liver tissue segmentation in multiphase CT scans using cascaded convolutional neural networks

Abstract

Purpose

We address the automatic segmentation of healthy and cancerous liver tissues (parenchyma, active and necrotic parts of hepatocellular carcinoma (HCC) tumor) on multiphase CT images using a deep learning approach.

Methods

We devise a cascaded convolutional neural network based on the U-Net architecture. Two strategies for handling the multiphase information are compared: either the single-phase images are concatenated into a multi-dimensional feature map at the input layer, or output maps are computed independently for each phase and then merged to produce the final segmentation. Each network of the cascade is specialized in the segmentation of a specific tissue. The performance of these networks taken separately, and of the cascaded architecture, is assessed on both single-phase and multiphase images.
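
The two fusion strategies can be illustrated with a toy PyTorch snippet, assuming three co-registered single-phase images per case; the small convolution layers below merely stand in for the full U-Nets of the cascade, and the averaging merge in the second strategy is an assumption.

    # Toy illustration of the two fusion strategies (not the full U-Net cascade).
    # `phases` is a list of three single-phase tensors of shape (B, 1, H, W).
    import torch
    import torch.nn as nn

    backbone = nn.Conv2d(3, 2, kernel_size=3, padding=1)    # stands in for one U-Net
    per_phase = nn.Conv2d(1, 2, kernel_size=3, padding=1)   # one branch per phase


    def early_fusion(phases):
        """Strategy 1: concatenate phases as input channels of one network."""
        x = torch.cat(phases, dim=1)          # (B, 3, H, W)
        return backbone(x)                    # (B, n_classes, H, W)


    def late_fusion(phases):
        """Strategy 2: segment each phase independently, then merge the maps."""
        maps = [per_phase(p) for p in phases]
        return torch.stack(maps, dim=0).mean(dim=0)   # simple averaging merge


    phases = [torch.randn(1, 1, 64, 64) for _ in range(3)]
    print(early_fusion(phases).shape, late_fusion(phases).shape)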

Results

In terms of Dice coefficients, the proposed method is on par with a state-of-the-art method designed for automatic MR image segmentation and outperforms a previously used technique for interactive CT image segmentation. We validate the hypothesis that several cascaded specialized networks achieve higher prediction accuracy than a single network addressing all tasks simultaneously. Although the portal venous phase alone seems to provide sufficient contrast for discriminating tumors from healthy parenchyma, the multiphase information brings a significant improvement for the segmentation of cancerous tissues (active versus necrotic part).

Conclusion

The proposed cascaded multiphase architecture showed promising performance for the automatic segmentation of liver tissues, allowing reliable estimation of the necrosis rate, a valuable imaging biomarker of clinical outcome.

Computer-aided diagnosis of cirrhosis and hepatocellular carcinoma using multi-phase abdomen CT

Abstract

Purpose

A high mortality rate due to liver cirrhosis has been reported worldwide in recent years. Early detection of cirrhosis may help control disease progression toward hepatocellular carcinoma (HCC). The lack of trained CT radiologists and the increasing patient population delay diagnosis and further management. This study proposes a computer-aided diagnosis system that detects cirrhosis and HCC in an efficient and less time-consuming manner.

Methods

A contrast-enhanced CT dataset of 40 patients (n = 40; M:F = 5:3; age = 25–55 years) comprising three groups of subjects, healthy (n = 14), cirrhosis (n = 12) and cirrhosis with HCC (n = 14), was retrospectively analyzed in this study. A novel method for the automatic 3D segmentation of the liver using a modified region-growing technique was developed and compared with a state-of-the-art deep learning-based technique. Histogram parameters were then calculated from the segmented CT liver volume to classify healthy versus diseased (cirrhosis and HCC) livers using logistic regression. Multi-phase analysis of the CT images was performed to extract 24 temporal features for detecting cirrhosis and HCC using a support vector machine (SVM).
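
A minimal sketch of the two classification stages is shown below, assuming the liver mask has already been obtained; the feature set, the placeholder arrays and the commented-out fit calls are illustrative only and do not reproduce the study's pipeline.

    # Illustrative classification stages: first-order histogram features from
    # the segmented liver volume, logistic regression for healthy vs. diseased,
    # and an RBF-kernel SVM on (placeholder) multi-phase temporal features.
    import numpy as np
    from scipy.stats import kurtosis, skew
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC


    def histogram_features(liver_voxels):
        """First-order statistics of the intensity values inside the liver mask."""
        v = np.asarray(liver_voxels, dtype=float)
        return [v.mean(), v.std(), skew(v), kurtosis(v)]


    # X_hist: (n_subjects, 4) histogram features, y: 0 = healthy, 1 = diseased
    # X_temp: (n_slices, 24) multi-phase temporal features, y2: cirrhosis vs. HCC
    # (all arrays are placeholders for the study's actual data)
    healthy_vs_diseased = LogisticRegression(max_iter=1000)
    cirrhosis_vs_hcc = SVC(kernel="rbf")
    # healthy_vs_diseased.fit(X_hist, y); cirrhosis_vs_hcc.fit(X_temp, y2)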

Results

The proposed method produced improved 3D segmentation, with Dice coefficients of 90% for healthy livers, 86% for cirrhosis and 81% for HCC subjects, compared to the deep learning algorithm (healthy: 82%; cirrhosis: 78%; HCC: 70%). Standard deviation and kurtosis were found to be statistically different (p < 0.05) between healthy and diseased livers, and logistic regression yielded a classification accuracy of 92.5%. For detecting cirrhosis and HCC, the SVM with an RBF kernel obtained higher slice-wise and patient-wise prediction accuracies of 86.9% (precision = 0.93, recall = 0.7) and 80% (precision = 0.86, recall = 0.75), respectively, than the linear kernel (slice-wise: accuracy = 85.4%, precision = 0.92, recall = 0.67; patient-wise: accuracy = 73.33%, precision = 0.75, recall = 0.75).

Conclusions

The proposed computer-aided diagnosis system for detecting cirrhosis and hepatocellular carcinoma showed promising results and can be used as an effective screening tool in medical image analysis.

The hind- and midfoot alignment computed after a medializing calcaneal osteotomy using a 3D weightbearing CT

Abstract

Purpose

A medializing calcaneal osteotomy (MCO) is a surgical procedure frequently performed to correct an adult acquired flatfoot deformity (AAFD). However, most studies are limited to a 2D analysis of a 3D deformity. Therefore, the aim of this study is to perform a 3D assessment of the hind- and midfoot alignment using weightbearing CT (WBCT) both preoperatively and postoperatively.

Methods

Eighteen patients with a mean age of 49.4 years (range 18–67) were prospectively included in a pre–post study design. An MCO was performed, and a WBCT scan was obtained pre- and postoperatively. Images were converted into 3D models to compute linear and angular measurements, in millimeters (mm) and degrees (°) respectively, based on previously reported landmarks of the hind- and midfoot alignment. A regression analysis was performed between the displacement of the MCO and the obtained postoperative correction.
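
The regression analysis can be sketched as follows; the translation and correction values below are invented placeholders used only to show the form of the analysis, not study data.

    # Sketch of the regression between MCO medial translation (mm) and the
    # obtained postoperative hindfoot correction (degrees). All values are
    # invented placeholders, not measurements from the study.
    import numpy as np
    from scipy.stats import linregress

    translation_mm = np.array([6.0, 8.0, 8.5, 10.0, 10.0, 12.0])   # placeholder
    correction_deg = np.array([4.1, 5.8, 6.2, 7.5, 7.1, 9.0])      # placeholder

    fit = linregress(translation_mm, correction_deg)
    print(f"correction ≈ {fit.slope:.2f} * translation + {fit.intercept:.2f}")
    print(f"R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3g}")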

Results

The mean 3D hindfoot angle improved significantly from preoperative to postoperative (p < 0.001). This correction followed a linear relation with the amount of medial translation of the MCO (R2 = 0.84, p < 0.001). The axes of the tibia showed significant coronal as well as axial changes (p < 0.05). Analysis of the midfoot showed significant changes in navicular height and rotation as well as in the Méary angle (p < 0.05). Additionally, a linear trend between the midfoot measurements and the amount of medial translation of the MCO was observed, but it did not reach significance (p > 0.05).

Conclusion

This study demonstrates an effective 3D correction of an AAFD by an MCO according to a linear relationship. The resulting regression formula can be used for preoperative planning. The novelty lies in the comparative 3D weightbearing CT assessment of both the computed hind- and midfoot alignment after a medializing calcaneal osteotomy. This could improve the accuracy of the preoperative planning currently performed in clinical practice.

Dynamic contrast-enhanced computed tomography diagnosis of primary liver cancers using transfer learning of pretrained convolutional neural networks: Is registration of multiphasic images necessary?

Abstract

Purpose

To evaluate the effect of image registration on the diagnostic performance of transfer learning (TL) using pretrained convolutional neural networks (CNNs) and three-phasic dynamic contrast-enhanced computed tomography (DCE-CT) for primary liver cancers.

Methods

We retrospectively evaluated 215 consecutive patients with histologically proven primary liver cancers, including six early, 58 well-differentiated, 109 moderately differentiated, and 29 poorly differentiated hepatocellular carcinomas (HCCs), and 13 non-HCC malignant lesions containing cholangiocellular components. We performed TL using various pretrained CNNs and preoperative three-phasic DCE-CT images. The three-phasic DCE-CT images were manually registered to correct respiratory motion. The registered DCE-CT images were then assigned to the three color channels of an input image for TL: pre-contrast, early phase, and delayed phase images for the blue, red, and green channels, respectively. To evaluate the effects of image registration, the registered input image was intentionally misaligned across the three color channels by pixel shifts, rotations, and skews of varying magnitude. The diagnostic performance (DP) of the pretrained CNNs after TL on the test set was compared with that of three general radiologists (GRs) and two experienced abdominal radiologists (ARs). The effects of misalignment in the input image and of the type of pretrained CNN on the DP were statistically evaluated.
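
A minimal sketch of the channel assignment and of the intentional misalignment is given below, assuming the three phases are co-registered 2D arrays; the shift, rotation and shear magnitudes are arbitrary placeholder values.

    # Sketch of building the three-channel input (B = pre-contrast, R = early
    # phase, G = delayed phase) and of intentionally misaligning one channel.
    # Phase arrays and distortion magnitudes are placeholders.
    import numpy as np
    from scipy.ndimage import shift, rotate, affine_transform


    def to_rgb_input(pre, early, delayed):
        """Stack the three registered phases into the R, G, B channels."""
        return np.stack([early, delayed, pre], axis=-1)   # (H, W, 3)


    def misalign(channel, dx=3, dy=3, angle=5.0, shear=0.05):
        """Apply the pixel shift, rotation and skew used to probe robustness."""
        out = shift(channel, (dy, dx))
        out = rotate(out, angle, reshape=False)
        skew_matrix = np.array([[1.0, shear], [0.0, 1.0]])
        return affine_transform(out, skew_matrix)


    pre, early, delayed = (np.random.rand(224, 224) for _ in range(3))
    rgb = to_rgb_input(pre, misalign(early), delayed)
    print(rgb.shape)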

Results

The mean DPs for histological subtype classification and differentiation of primary malignant liver tumors on DCE-CT were 39.1% for the GRs and 47.9% for the ARs. The highest mean DPs for the CNNs after TL with pixel shift, rotation, and skew misalignments were 44.1%, 44.2%, and 43.7%, respectively. Two-way analysis of variance revealed that the DP is significantly affected by the type of pretrained CNN (P = 0.0001), but not by misalignments in the input images other than skew deformations.

Conclusion

TL using pretrained CNNs is robust against misregistration of multiphasic images and comparable to experienced ARs in classifying primary liver cancers using three-phasic DCE-CT.

Cognitive load associations when utilizing auditory display within image-guided neurosurgery

Abstract

Purpose

The combination of data visualization and auditory display (e.g., sonification) has been shown to increase accuracy and reduce perceived difficulty within 3D navigation tasks. While accuracy within such tasks can be measured in real time, subjective impressions of task difficulty are more elusive to obtain. Prior work utilizing electroencephalography (EEG) has found robust support that cognitive load and working memory can be monitored in real time using EEG data.

Methods

In this study, we replicated a 3D navigation task (within the context of image-guided surgery) while recording data on participants' cognitive load through EEG relative alpha-band weighting. Specifically, 13 subjects navigated a tracked surgical tool to randomly placed 3D virtual locations on a CT cerebral angiography volume while being aided by visual, aural, or combined visual and aural feedback. EEG data were captured from the participants during the study, and afterwards the subjects filled out a NASA TLX questionnaire. In addition to replicating an existing experimental design on auditory display within image-guided neurosurgery, our primary aim was to determine whether EEG-based markers of cognitive load mirrored subjective ratings of task difficulty.
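
The relative alpha-band weighting can be sketched as follows, assuming a conventional 8–13 Hz alpha band, a 256 Hz sampling rate and a synthetic multichannel signal; none of these values are taken from the study.

    # Sketch of a relative alpha-band weighting used as a cognitive-load
    # marker: alpha-band power divided by broadband power, per channel.
    # Sampling rate, band limits and the synthetic signal are assumptions.
    import numpy as np
    from scipy.signal import welch

    fs = 256                                   # assumed sampling rate (Hz)
    eeg = np.random.randn(8, fs * 60)          # placeholder: 8 channels, 60 s


    def relative_alpha(signal, fs, band=(8.0, 13.0)):
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask]) / np.trapz(psd, freqs)


    print([round(relative_alpha(ch, fs), 3) for ch in eeg])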

Results

Consistent with the existing literature, our study found evidence supporting the hypothesis that auditory display can increase the accuracy of navigating to a specified target. We also found significant differences in cognitive working load across the feedback modalities, but none of these differences supported the experiment's hypotheses. Finally, we found mixed results regarding the relationship between real-time measurements of cognitive workload and a posteriori subjective impressions of task difficulty.

Conclusions

Although we did not find a significant correlation between the subjective and physiological measurements, differences in cognitive working load were found. In addition, our study further supports the use of auditory display in image-guided surgery.

Extending BPMN 2.0 for intraoperative workflow modeling with IEEE 11073 SDC for description and orchestration of interoperable, networked medical devices

Abstract

Purpose

Surgical workflow management in integrated operating rooms (ORs) enables the implementation of novel computer-aided surgical assistance and new applications in process automation, situation awareness, and decision support. The context-sensitive configuration and orchestration of interoperable, networked medical devices is a prerequisite for an effective reduction in the surgeons' workload by providing the right service and the right information at the right time. Information about the surgical situation must be described in surgical process models and distributed to the medical devices and IT systems in the OR. Available modeling languages are not capable of describing surgical processes for this application.

Methods

In this work, the BPMNSIX modeling language for intraoperative processes is technically enhanced and implemented for workflow build-time and run-time. Particular attention is given to the integration of the recently published IEEE 11073 SDC standard family for a service-oriented architecture of networked medical devices. In addition, interaction patterns for context-aware configuration and device orchestration are presented.
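
One such interaction pattern can be sketched, in heavily simplified form, as a binding between workflow tasks and device service invocations; every class, method and task name below is hypothetical and does not come from BPMNSIX or the IEEE 11073 SDC standard family.

    # Purely hypothetical sketch of one interaction pattern: a workflow task
    # becoming active triggers the invocation of a networked device service.
    # None of these names are taken from BPMNSIX or IEEE 11073 SDC.
    from dataclasses import dataclass
    from typing import Callable, Dict


    @dataclass
    class DeviceService:
        name: str
        invoke: Callable[[dict], None]


    class WorkflowOrchestrator:
        """Maps workflow task identifiers to device service invocations."""

        def __init__(self):
            self.bindings: Dict[str, DeviceService] = {}

        def bind(self, task_id: str, service: DeviceService):
            self.bindings[task_id] = service

        def on_task_activated(self, task_id: str, context: dict):
            if task_id in self.bindings:
                self.bindings[task_id].invoke(context)


    orchestrator = WorkflowOrchestrator()
    orchestrator.bind("TaskA", DeviceService(
        "example_device", lambda ctx: print("configure device for", ctx["step"])))
    orchestrator.on_task_activated("TaskA", {"step": "example step"})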

Results

The identified interaction patterns were implemented in BPMNSIX for an ophthalmologic use case, demonstrating examples of the process-driven incorporation and control of device services.

Conclusion

Modeling surgical procedures with BPMNSIX allows the implementation of context-sensitive surgical assistance functionalities and provides flexibility in the orchestration of dynamically changing device ensembles and in the integration of previously unknown devices into the surgical workflow management.

Analysis and optimization of the robot setup for robotic-ultrasound-guided radiation therapy

Abstract

Purpose

Robotic ultrasound promises continuous, volumetric, and non-ionizing tracking of organ motion during radiation therapy. However, placement of the robot is critical because it is radio-opaque and might severely influence the achievable dose distribution.

Methods

We propose two heuristic optimization strategies for the automatic placement of an ultrasound robot around a patient. Considering a kinematically redundant robot arm, we compare a generic approach based on stochastic search and a more problem-specific segmentwise construction approach; the former allows for multiple elbow configurations, while the latter is deterministic. Additionally, we study different objective functions guiding the search. Our evaluation is based on data from ten actual prostate cancer cases, and we compare the resulting plan quality for both methods to previously proposed, manually chosen robot configurations.
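
The stochastic search strategy can be illustrated with a short sketch that perturbs a placement parameter vector and keeps improvements; the parameterization and the quadratic stand-in objective are assumptions, not the dose-based objective used in the paper.

    # Sketch of the stochastic-search idea: randomly perturb the robot
    # placement parameters and keep improvements of a (hypothetical)
    # treatment-planning objective. The objective below is a stand-in only.
    import numpy as np

    rng = np.random.default_rng(0)


    def planning_objective(placement):
        """Stand-in for the real dose-based objective (lower is better)."""
        return float(np.sum((placement - np.array([0.3, -0.2, 0.1, 0.0])) ** 2))


    def stochastic_search(initial, steps=500, sigma=0.05):
        best, best_val = initial.copy(), planning_objective(initial)
        for _ in range(steps):
            candidate = best + rng.normal(0.0, sigma, size=best.shape)
            val = planning_objective(candidate)
            if val < best_val:                 # greedy acceptance of improvements
                best, best_val = candidate, val
        return best, best_val


    placement0 = np.zeros(4)   # e.g. base position plus redundant joint parameters
    print(stochastic_search(placement0))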

Results

The mean improvements in the treatment planning objective value with respect to the best manually selected robot position and a single elbow configuration range from 8.2 to 32.8% and from 8.5 to 15.5% for segmentwise construction and stochastic search, respectively. When three different elbow configurations are considered, the stochastic search yields better objective values in 80% of the cases, with 30% being significantly better. The optimization strategies are robust with respect to beam sampling and transducer orientation, and using previous optimization results as starting points for the stochastic search typically yields better solutions than random starting points.

Conclusion

We propose a robust and generic optimization scheme, which can be used to optimize the robot placement for robotic ultrasound guidance in radiation therapy. The automatic optimization further mitigates the impact of robotic ultrasound on the treatment plan quality.

Four-dimensional fully convolutional residual network-based liver segmentation in Gd-EOB-DTPA-enhanced MRI

Abstract

Purpose

Gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced magnetic resonance imaging (MRI) tends to show higher diagnostic accuracy than other modalities, and there is a demand for computer-assisted detection (CAD) software for Gd-EOB-DTPA-enhanced MRI. Accurate segmentation is important for such CAD software. We propose a liver segmentation method for Gd-EOB-DTPA-enhanced MRI based on a four-dimensional (4D) fully convolutional residual network (FC-ResNet). The aims of this study are to determine the best combination of input and output images for the proposed method and to compare the proposed method with a previous rule-based segmentation method.

Methods

To determine the best input image set, we prepared a five-phase image set and a hepatobiliary-phase image set as candidates. To determine the best output image set, we prepared a labeled liver image alone and a combination of labeled liver and labeled body trunk images as candidates. In addition, we optimized the hyperparameters of the proposed model. We used 30 cases to train the model, 10 cases to tune its hyperparameters, and 20 cases to evaluate it.
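
The input/output arrangement can be pictured with a toy PyTorch sketch, assuming five co-registered phase volumes per case and a three-label output (background, body trunk, liver); the single residual block below merely stands in for the full 4D FC-ResNet.

    # Toy sketch of the input/output arrangement: the five phase volumes become
    # five input channels, and the network predicts background / body trunk /
    # liver labels. One residual block stands in for the full FC-ResNet.
    import torch
    import torch.nn as nn


    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
            self.conv2 = nn.Conv3d(channels, channels, 3, padding=1)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(x + self.conv2(self.relu(self.conv1(x))))


    stem = nn.Conv3d(5, 16, 3, padding=1)      # five phases as input channels
    head = nn.Conv3d(16, 3, 1)                 # background / body trunk / liver

    volume = torch.randn(1, 5, 16, 64, 64)     # (batch, phases, D, H, W)
    logits = head(ResidualBlock(16)(stem(volume)))
    print(logits.shape)                        # (1, 3, 16, 64, 64)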

Results

Our network with the five-phase image set and the output image set of labeled liver and labeled body trunk images showed the highest accuracy. Our proposed method showed higher accuracy than the previous rule-based segmentation method. The Dice coefficient of the liver region was 0.944 ± 0.018.

Conclusion

Our proposed 4D FC-ResNet showed satisfactory performance for liver segmentation as preprocessing in CAD software.
