Tuesday, 19 November 2019




Designing and validating a PVA liver phantom with respiratory motion for needle-based interventions

Abstract

Purpose

The purpose is to design and validate an anthropomorphic polyvinyl alcohol (PVA) liver phantom with respiratory motion to simulate needle-based interventions. Such a system can, for example, be used as a validation tool for novel needles.

Methods

Image segmentations of CT scans of four patients during inspiration and expiration were used to measure liver and rib displacement. An anthropomorphic liver mold based on a CT scan was 3D printed and filled with 5% w/w PVA-to-water, undergoing two freeze–thaw cycles, in addition to a 3D-printed compliant rib cage. They were both held in place by a PVA abdominal phantom. A sinusoidal motion vector, based on the measured liver displacement, was applied to the liver phantom by means of a motion stage. Liver, rib cage and needle deflection were tracked by placing electromagnetic sensors on the phantom. Liver and rib cage phantom motion was validated by comparison with the CT images of the patients, whereas needle deflection was compared with the literature.
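The sinusoidal motion vector described above can be sketched as follows; the per-axis amplitudes and breathing period below are illustrative placeholders loosely based on the reported mean displacements, not the study's actual stage parameters.

```python
import math

def sinusoidal_motion(amplitude_mm, period_s, t_s):
    """Displacement along one axis at time t_s for a sinusoidal breathing
    cycle oscillating between 0 (expiration) and amplitude_mm (inspiration)."""
    return 0.5 * amplitude_mm * (1.0 - math.cos(2.0 * math.pi * t_s / period_s))

# Hypothetical per-axis amplitudes (right, anterior, caudal) in mm,
# loosely based on the mean displacements reported in the Results.
amplitudes = (2.0, 15.0, 16.0)
period_s = 4.0  # assumed breathing period in seconds

# Commanded stage position at t = 1 s (a quarter of the cycle).
position = tuple(sinusoidal_motion(a, period_s, 1.0) for a in amplitudes)
```

At a quarter cycle each axis sits at half its amplitude, so the stage moves smoothly between the expiration and inspiration states.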

Results

CT analysis showed that from expiration to inspiration, the livers moved predominantly in the right (mean: 2 mm, range: −11 to 11 mm), anterior (mean: 15 mm, range: 9–21 mm) and caudal (mean: 16 mm, range: 6–24 mm) directions. The mechatronic design of the liver phantom gives the freedom to set the direction and amplitude of the motion and was able to mimic the direction of liver motion of one patient. Needle deflection inside the phantom increased from 1.6 to 3.8 mm from the initial expiration state to inspiration.

Conclusions

The developed liver phantom allows for applying different motion patterns and shapes/sizes and thus allows for patient-specific simulation of needle-based interventions. Moreover, it is able to mimic appropriate respiratory motion and needle deflection as observed in patients.


The development of non-contact user interface of a surgical navigation system based on multi-LSTM and a phantom experiment for zygomatic implant placement

Abstract

Purpose

Image-guided surgical navigation systems (SNSs) have proved to be increasingly important assistance tools for minimally invasive surgery. However, standard human–computer interaction (HCI) devices such as the keyboard and mouse are a potential vector for infection, posing risks to patients and surgeons. To solve this human–computer interaction problem, we proposed an optimized LSTM structure based on a depth camera to recognize gestures and applied it to an in-house oral and maxillofacial surgical navigation system (Qin et al. in Int J Comput Assist Radiol Surg 14(2):281–289, 2019).

Methods

The proposed optimized structure of LSTM, named multi-LSTM, allows multiple input layers and takes into account the relationships between inputs. To combine the gesture recognition with the SNS, four left-hand signs waving along four directions were designed to correspond to four operations of the mouse, and the motion of the right hand was used to control the movement of the cursor. Finally, a phantom study for zygomatic implant placement was conducted to evaluate the feasibility of multi-LSTM as HCI.
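The gesture-to-mouse mapping described above could be organised as a simple dispatch table; the sign and action names below are hypothetical, since the abstract does not specify which wave direction maps to which mouse operation.

```python
# Hypothetical mapping from the four recognised left-hand signs to
# mouse operations; names are illustrative, not from the paper.
SIGN_TO_ACTION = {
    "wave_up": "left_click",
    "wave_down": "right_click",
    "wave_left": "scroll_up",
    "wave_right": "scroll_down",
}

def dispatch(sign, cursor_delta, state):
    """Apply a recognised left-hand sign and a right-hand cursor motion
    to a simple UI state dict holding cursor position and last action."""
    x, y = state["cursor"]
    dx, dy = cursor_delta
    state["cursor"] = (x + dx, y + dy)
    if sign is not None:
        state["last_action"] = SIGN_TO_ACTION[sign]
    return state

state = dispatch("wave_up", (5, -3), {"cursor": (100, 100), "last_action": None})
```

Separating the recogniser's output from the UI action in a table like this makes the mapping easy to reconfigure per surgeon or per procedure.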


Results

3D hand trajectories of both wrist and elbow from 10 participants were collected to train the recognition network. Tenfold cross-validation was then performed to assess sign recognition, and the mean accuracy was 96% ± 3%. In the phantom study, four implants were successfully placed, and the average deviations of planned–placed implants were 1.22 mm and 1.70 mm for the entry and end points, respectively, while the angular deviation ranged from 0.4° to 2.9°.
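Tenfold cross-validation of this kind can be sketched in a few lines; the majority-class "model" below is a toy stand-in for the actual recognition network, and the data are synthetic.

```python
import statistics

def kfold_indices(n_samples, k=10):
    """Split sample indices into k contiguous, near-equal folds."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(samples, labels, train_fn, k=10):
    """Hold out each fold once for testing; return per-fold accuracies."""
    accuracies = []
    for test_idx in kfold_indices(len(samples), k):
        test_set = set(test_idx)
        train_idx = [i for i in range(len(samples)) if i not in test_set]
        model = train_fn([samples[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        correct = sum(model(samples[i]) == labels[i] for i in test_idx)
        accuracies.append(correct / len(test_idx))
    return accuracies

# Toy stand-in for the gesture recogniser: always predicts the majority class.
def train_majority(xs, ys):
    majority = max(set(ys), key=ys.count)
    return lambda x: majority

accs = cross_validate(list(range(20)), [0] * 20, train_majority, k=10)
mean_acc, std_acc = statistics.mean(accs), statistics.pstdev(accs)
```

The reported 96% ± 3% would correspond to the mean and standard deviation of the ten per-fold accuracies computed this way.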

Conclusion

The results showed that this non-contact user interface based on multi-LSTM could be used as a promising tool to eliminate the disinfection problem in operation room and alleviate manipulation complexity of surgical navigation system.


Computer-assisted intra-operative verification of surgical outcome for the treatment of syndesmotic injuries through contralateral side comparison

Abstract

Purpose:

Fracture reduction and fixation of syndesmotic injuries is a common procedure in trauma surgery. An intra-operative evaluation of the surgical outcome is challenging due to high inter-individual anatomical variation. A comparison to the contralateral uninjured ankle would be highly beneficial but would also incur additional radiation and time consumption. In this work, we pioneer automatic contralateral side comparison while avoiding an additional 3D scan.

Methods:

We reconstruct an accurate 3D surface of the uninjured ankle joint from three low-dose 2D fluoroscopic projections. Through CNN-complemented 3D shape model segmentation, we create a reference model of the injured ankle while addressing the issues of metal artifacts and initialization. Following 2D–3D multiple bone reconstruction, a final reference contour can be created and matched to the uninjured ankle for contralateral side comparison without any user interaction.

Results:

The accuracy and robustness of individual workflow steps were assessed using 81 C-arm datasets, with 2D and 3D images available for injured and uninjured ankles. Furthermore, the entire workflow was tested on eleven clinical cases. These experiments showed an overall average Hausdorff distance of 2.4 ± 1.1 mm measured at clinical evaluation level.
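The reported Hausdorff distance is, in its basic symmetric form, computable as below; the two toy contours are illustrative point sets, not clinical data, and the study's evaluation may use a more refined surface-based variant.

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite 2D point sets:
    the largest distance from any point in one set to its nearest
    point in the other set, taken in both directions."""
    def directed(p, q):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for (qx, qy) in q)
                   for (px, py) in p)
    return max(directed(a, b), directed(b, a))

# Two toy contours: parallel rows of points 1 mm apart.
contour_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
contour_b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
distance_mm = hausdorff(contour_a, contour_b)  # → 1.0
```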

Conclusion:

Reference contours of the contralateral side reconstructed from three projection images can assist surgeons in optimizing reduction results, reducing the duration of radiation exposure and potentially improving postoperative outcomes in the long term.


Abdominal artery segmentation method from CT volumes using fully convolutional neural network

Abstract

Purpose 

The purpose of this paper is to present a fully automated abdominal artery segmentation method from a CT volume. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment. Information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Since blood vessels have large inter-patient variations in branching patterns and positions, a patient-specific blood vessel segmentation method is necessary. Even though deep learning-based segmentation methods achieve good accuracy on large organs, small structures such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method from a CT volume. Because arteries are small structures that are difficult to segment, we introduced an original training sample generation method and a three-plane segmentation approach to improve segmentation accuracy.

Method 

Our proposed method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method. AIRTPG adjusts patch number imbalances between patches with artery regions and patches without them. These methods improved the segmentation accuracies of small artery regions. Furthermore, we introduced a three-plane segmentation approach to obtain clear 3D segmentation results from 2D patch-based processes. In the three-plane approach, we performed three segmentation processes using patches generated on axial, coronal, and sagittal planes and combined the results to generate a 3D segmentation result.
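The abstract does not specify how the three per-plane results are combined; one plausible fusion rule is a per-voxel majority vote, sketched here with toy volumes standing in for the per-plane FCN outputs.

```python
import numpy as np

def fuse_three_plane(axial, coronal, sagittal, min_votes=2):
    """Majority-vote fusion of three binary segmentations of the same
    volume, one produced from patches on each orthogonal plane. A voxel
    is foreground if at least min_votes of the three planes agree."""
    votes = (axial.astype(np.uint8)
             + coronal.astype(np.uint8)
             + sagittal.astype(np.uint8))
    return votes >= min_votes

# Tiny 2x2x2 toy volumes standing in for per-plane FCN outputs.
ax = np.zeros((2, 2, 2), dtype=bool); ax[0, 0, 0] = True; ax[1, 1, 1] = True
co = np.zeros((2, 2, 2), dtype=bool); co[0, 0, 0] = True
sa = np.zeros((2, 2, 2), dtype=bool); sa[0, 0, 0] = True; sa[1, 1, 1] = True

fused = fuse_three_plane(ax, co, sa)  # (0,0,0): 3 votes, (1,1,1): 2 votes
```

A vote threshold of 2 keeps voxels that any two planes agree on, suppressing single-plane false positives while retaining thin structures visible in at least two orientations.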

Results 

The evaluation results of the proposed method using 20 cases of abdominal CT volumes show that the averaged F-measure, precision, and recall rates were 87.1%, 85.8%, and 88.4%, respectively. This result outperformed our previous automated FCN-based segmentation method. Our method offers competitive performance compared to the previous blood vessel segmentation methods from 3D volumes.
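The reported F-measure, precision, and recall rates follow the standard voxel-count definitions; the counts below are illustrative, not the study's actual data.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F-measure from counts of true-positive,
    false-positive and false-negative voxels."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative voxel counts only, not the paper's data.
p, r, f = precision_recall_f1(tp=858, fp=142, fn=113)
```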

Conclusions 

We developed an abdominal artery segmentation method using FCN. The 2D patch-based and AIRTPG methods effectively segmented the artery regions. In addition, the three-plane approach generated good 3D segmentation results.


Novel evaluation of surgical activity recognition models using task-based efficiency metrics

Abstract

Purpose

Surgical task-based metrics (rather than entire procedure metrics) can be used to improve surgeon training and, ultimately, patient care through focused training interventions. Machine learning models to automatically recognize individual tasks or activities are needed to overcome the otherwise manual effort of video review. Traditionally, these models have been evaluated using frame-level accuracy. Here, we propose evaluating surgical activity recognition models by their effect on task-based efficiency metrics. In this way, we can determine when models have achieved adequate performance for providing surgeon feedback via metrics from individual tasks.

Methods

We propose a new CNN-LSTM model, RP-Net-V2, to recognize the 12 steps of robotic-assisted radical prostatectomies (RARP). We evaluated our model both in terms of conventional methods (e.g., Jaccard Index, task boundary accuracy) as well as novel ways, such as the accuracy of efficiency metrics computed from instrument movements and system events.
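A frame-level Jaccard Index of the kind used here can be computed per task label over the predicted and ground-truth label sequences and then averaged; a minimal sketch on a toy 8-frame sequence with two task labels:

```python
def frame_jaccard(pred, truth, label):
    """Jaccard index for one activity label over a frame-wise sequence:
    |intersection| / |union| of frames assigned that label."""
    inter = sum(p == label and t == label for p, t in zip(pred, truth))
    union = sum(p == label or t == label for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def mean_jaccard(pred, truth, labels):
    """Average the per-label Jaccard indices across all task labels."""
    return sum(frame_jaccard(pred, truth, lab) for lab in labels) / len(labels)

# Toy 8-frame sequence with two task labels; the prediction misplaces
# one task boundary by a single frame.
truth = [0, 0, 0, 0, 1, 1, 1, 1]
pred  = [0, 0, 0, 1, 1, 1, 1, 1]
score = mean_jaccard(pred, truth, labels=[0, 1])
```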

Results

Our proposed model achieves a Jaccard Index of 0.85, thereby outperforming previous models on RARP. Additionally, we show that metrics computed from tasks automatically identified using RP-Net-V2 correlate well with metrics from tasks labeled by clinical experts.

Conclusion

We demonstrate that metrics-based evaluation of surgical activity recognition models is a viable approach to determine when models can be used to quantify surgical efficiencies. We believe this approach and our results illustrate the potential for fully automated, postoperative efficiency reports.


HoTPiG: a novel graph-based 3-D image feature set and its applications to computer-assisted detection of cerebral aneurysms and lung nodules

Abstract

Purpose

A novel image feature set named histogram of triangular paths in graph (HoTPiG) is presented. The purpose of this study is to evaluate the feasibility of the proposed HoTPiG feature set through two clinical computer-aided detection tasks: nodule detection in lung CT images and aneurysm detection in head MR angiography images.

Methods

The HoTPiG feature set is calculated from an undirected graph structure derived from a binarized volume. The features are derived from a 3-D histogram in which each bin represents a triplet of shortest path distances between the target node and all possible node pairs near the target node. First, the vessel structure is extracted from CT/MR volumes. Then, a graph structure is extracted using an 18-neighbor rule. Using this graph, a HoTPiG feature vector is calculated at every foreground voxel. After explicit feature mapping with an exponential-χ2 kernel, each voxel is judged by a linear support vector machine classifier. The proposed method was evaluated using 300 CT and 300 MR datasets.
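A minimal sketch of the histogram construction, assuming the bin for each node pair (a, b) near the target is the triplet of shortest path distances (d(target, a), d(target, b), d(a, b)); the actual HoTPiG implementation details (neighbourhood radius, bin ordering, 18-neighbour graph extraction) may differ from this toy version.

```python
from collections import deque, Counter
from itertools import combinations

def bfs_distances(adj, source, max_dist):
    """Shortest path distances (in edges) from source, up to max_dist."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == max_dist:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def hotpig_histogram(adj, target, max_dist=2):
    """Histogram over triplets of shortest path distances for all node
    pairs near the target; the two target distances are sorted so each
    bin is independent of pair order."""
    d_target = bfs_distances(adj, target, max_dist)
    neighbourhood = [n for n in d_target if n != target]
    hist = Counter()
    for a, b in combinations(neighbourhood, 2):
        d_ab = bfs_distances(adj, a, 2 * max_dist).get(b)
        if d_ab is None:
            continue
        da, db = sorted((d_target[a], d_target[b]))
        hist[(da, db, d_ab)] += 1
    return hist

# Toy graph: a 4-node path 0-1-2-3 standing in for a thin vessel segment.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
hist = hotpig_histogram(adj, target=1)
```

Flattening such a 3-D histogram into a vector at every foreground voxel yields the per-voxel feature that the classifier consumes.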

Results

The proposed method successfully detected lung nodules and cerebral aneurysms. The sensitivity was about 80% when the number of false positives was three per case for both applications.

Conclusions

The HoTPiG image feature set was presented, and its high general versatility was shown through two medical lesion detection applications.


An “eye-in-body” integrated surgical robot system for stereotactic surgery

Abstract

Purpose

Current stereotactic surgical robot systems rely on cumbersome operations such as calibration, tracking and registration to establish an accurate intraoperative coordinate transformation chain, which makes them difficult to use. To overcome this problem, a novel stereotactic surgical robot system has been proposed and validated.

Methods

First, a hand–eye integrated scheme is proposed to avoid the intraoperative calibration between robot arm and motion tracking system. Second, a special reference-tool-based patient registration and tracking method is developed to avoid intraoperative registration. Third, a model-free visual servo method is used to reduce the accuracy requirement of hand–eye relationship and robot kinematic model. Finally, a prototype of the system is constructed and performance tests and a pedicle screw drilling experiment are performed.

Results

The results show that the proposed system has acceptable accuracy. The in-plane target positioning errors were −0.68 ± 0.52 mm and 0.06 ± 0.41 mm, and the orientation error was 0.43 ± 0.25°. The pedicle screw drilling experiment shows that the system can perform accurate stereotactic surgery.

Conclusions

The stereotactic surgical robot system described in this paper can perform stereotactic surgery without intraoperative hand–eye calibration or manual registration, and it achieves acceptable position and orientation accuracy while tolerating errors in the hand–eye coordinate transformation and the robot kinematic model.


GATOR: connecting integrated operating room solutions based on the IEEE 11073 SDC and ORiN standards

Abstract

Purpose

Medical device interoperability in operating rooms (ORs) provides advantages for both patients and physicians. Several approaches have been made to provide standards for successful device integration. However, given the high heterogeneity of standards in the market, device vendors may reject these approaches. The aim of this work is therefore to provide a proof of concept for connecting two promising integration solutions, OR.NET and SCOT, to increase vendor interest.

Methods

The connection of devices between both domains is targeted by implementing an application to map device capabilities between the IEEE 11073 SDC and ORiN standards. Potential properties of the respective architectures are defined. The connection was evaluated by latency measurements in a demonstrator setup utilizing an OR light as an exemplary device.
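Latency measurements of this kind can be sketched as repeated round-trip timings of a device command; the 1 ms artificial delay below stands in for the real gateway call, which is not part of the abstract.

```python
import statistics
import time

def measure_latency(send_command, repetitions=100):
    """Round-trip latency statistics for a device command, in ms."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        send_command()  # e.g. toggle the OR light through the gateway
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples), statistics.mean(samples)

# Stand-in for the real device call: a fixed 1 ms artificial delay.
median_ms, mean_ms = measure_latency(lambda: time.sleep(0.001), repetitions=20)
```

Reporting the median alongside the mean guards against occasional scheduling outliers skewing the comparison between transmission paths.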

Results

The latency measurements resulted in a similar transmission speed of the GATOR (53.0 ms) and direct SDC-to-SDC (38.0 ms) communication. Direct proprietary ORiN-to-ORiN communication was faster in any case (8.0 ms).

Conclusion

A connection between both standards was successfully achieved via the GATOR application. The results show that the latency of communication across the standards is of a magnitude comparable to direct standard-internal communication.


A web-based multidisciplinary team meeting visualisation system

Abstract

Purpose

Multidisciplinary team meetings (MDTs) are the standard of care for safe, effective patient management in modern hospital-based clinical practice. Medical imaging data are often the central discussion points in many MDTs, and these data are typically visualised, by all participants, on a common large display. We propose a Web-based MDT visualisation system (WMDT-VS) to allow individual participants to view the data on their own personal computing devices, with the potential to customise the imaging data, i.e. to view the data differently from the common display, for their particular clinical perspective.

Methods

We developed the WMDT-VS by leveraging the state-of-the-art Web technologies to support four MDT visualisation features: (1) 2D and 3D visualisations for multiple imaging modality data; (2) a variety of personal computing devices, e.g. smartphone, tablets, laptops and PCs, to access and navigate medical images individually and share the visualisations; (3) customised participant visualisations; and (4) the addition of extra local image data for visualisation and discussion.

Results

We outlined these MDT visualisation features on two simulated MDT settings using different imaging data and usage scenarios. We measured compatibility and performances of various personal, consumer-level, computing devices.

Conclusions

Our WMDT-VS provides a more comprehensive visualisation experience for MDT participants.
