References

G. Wei, K. Arbter, and G. Hirzinger (1997). Real-time visual servoing for laparoscopic surgery: controlling robot motion with color image segmentation. IEEE Engineering in Medicine and Biology Magazine, 16.
C. Doignon, F. Nageotte, and M. Mathelin (2004). Detection of grey regions in color images: application to the segmentation of a surgical instrument in robotized laparoscopy. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4.
L. Bouarfa, O. Akman, A. Schneider, P. Jonker, and J. Dankelman (2012). In-vivo real-time tracking of surgical instruments in endoscopic video. Minimally Invasive Therapy & Allied Technologies, 21.
C. Kang, D. Kim, W. Lee, and H. Chi (2010). Conventional laparoscopic and robot-assisted spleen-preserving pancreatectomy: does da Vinci have clinical advantages? Surgical Endoscopy, 25.
D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele, and P. Jannin (2015). Detecting surgical tools by modelling local appearance and global shape. IEEE Transactions on Medical Imaging, 34.
C. McClain, S. Soriano, L. Goumnerova, P. Black, and M. Rockoff (2007). Detection of unanticipated intracranial hemorrhage during intraoperative magnetic resonance image-guided neurosurgery: report of two cases. Journal of Neurosurgery, 106(5 Suppl).
A. Agustinos and S. Voros (2015). 2D/3D real-time tracking of surgical instruments based on endoscopic image processing.
C. Loukas, V. Lahanas, and E. Georgiou (2013). An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training. The International Journal of Medical Robotics and Computer Assisted Surgery, 9.
Z. Zhao, S. Voros, Y. Weng, F. Chang, and R. Li (2017). Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method. Computer Assisted Surgery, 22.
S. McKenna, H. Charif, and T. Frank (2005). Towards video understanding of laparoscopic surgery: instrument tracking.
D. R. Uecker, C. Lee, Y. F. Wang, and Y. Wang (1995). Automated instrument tracking in robotically assisted laparoscopic surgery. Journal of Image Guided Surgery, 1(6), 308–325.
S. Payandeh (2016). Visual Tracking in Conventional Minimally Invasive Surgery.
R. W. Holloway, S. D. Patel, and S. Ahmad (2009). Robotic surgery in gynecology. Scandinavian Journal of Surgery, 98(2), 96–109.
H.-S. Kim, S.-C. Bu, G. Jee, and C. Park (2003). An ultra-tightly coupled GPS/INS integration using federated Kalman filter.
C. Sewell, D. Morris, N. Blevins, F. Barbagli, and K. Salisbury (2005). Quantifying risky behavior in surgical simulation. Studies in Health Technology and Informatics, 111.
T. Blum, H. Feußner, and N. Navab (2010). Modeling and segmentation of surgical workflow from laparoscopic video. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 13(Pt 3).
G. Tien, M. Atkins, B. Zheng, and C. Swindells (2010). Measuring situation awareness of surgeons in laparoscopic training. Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications.
A. Lehman, M. Rentschler, S. Farritor, and D. Oleynikov (2007). The current state of miniature in vivo laparoscopic robotics. Journal of Robotic Surgery, 1.
C. Doignon, P. Graebling, and M. Mathelin (2005). Real-time segmentation of surgical instruments inside the abdominal cavity using a joint hue saturation color feature. Real-Time Imaging, 11.
B. Choi, K. Jo, S. Choi, and J. Choi (2017). Surgical-tools detection based on convolutional neural network in laparoscopic robot-assisted surgery. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).
W.-H. Huang, B. Jiann, Y.-H. Lee, T. Wu, C.-C. Yu, J. Tsai, C.-P. Chou, and J.-K. Huang (2003). Risk factors of massive bleeding after percutaneous nephrolithotomy and its management, 14.
G. Wei, K. Arbter, and G. Hirzinger (1997). Automatic tracking of laparoscopic instruments by color coding.
Z. Zhou, B. Wu, J. Duan, X. Zhang, N. Zhang, and Z. Liang (2017). Optical surgical instrument tracking system based on the principle of stereo vision. Journal of Biomedical Optics, 22.
A. Anand and G. Singh (2007). Conversion in laparoscopic cholecystectomy: an evaluation study.
D. Seidman, F. Nasserbakht, F. Nezhat, and C. Nezhat (1996). Delayed recognition of iliac artery injury during laparoscopic surgery. Surgical Endoscopy, 10.
R. Sinha, M. Sanjay, B. Rupa, and S. Kumari (2015). Robotic surgery in gynecology. Journal of Minimal Access Surgery, 11.
Hindawi, Journal of Healthcare Engineering, Volume 2018, Article ID 8079713, 11 pages
https://doi.org/10.1155/2018/8079713

Research Article

A Kalman-Filter-Based Common Algorithm Approach for Object Detection in Surgery Scene to Assist Surgeon's Situation Awareness in Robot-Assisted Laparoscopic Surgery

Jiwon Ryu (1), Youngjin Moon (2), Jaesoon Choi (3), and Hee Chan Kim (4)

(1) Department of Biomedical Engineering, Seoul National University, Seoul, Republic of Korea
(2) Biomedical Engineering Research Center, Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
(3) Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
(4) Department of Biomedical Engineering, College of Medicine, Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, Republic of Korea

Correspondence should be addressed to Jaesoon Choi; fides@amc.seoul.kr

Received 10 October 2017; Revised 10 February 2018; Accepted 3 April 2018; Published 2 May 2018

Academic Editor: Andreas Maier

Copyright © 2018 Jiwon Ryu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist the surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments.
The proposed common architecture comprises the localization of the object of interest using feature texture and morphological information, and the tracking of the object based on a Kalman filter for robustness with reduced error. The average recall and precision of the instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of the hemorrhage detection in two prostate surgery videos was 98%. The results demonstrate the robustness of the automatic intraoperative object detection and tracking, which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery.

1. Introduction

Surgical robot technology has become a significant enhancement to laparoscopic surgery and is ideally suited for surgeries requiring minimal invasiveness and a high degree of dexterity [1–4]. However, current commercial surgical robot systems, represented by da Vinci (Intuitive Surgical, Inc., USA), have limitations such as limited vision through a laparoscope, absence of force or tactile feedback, and difficulties in agile tool maneuvering and exchange during emergency situations due to their bulky and complex configuration [5]. These limitations cause delayed perception and trouble in the immediate responsive reconfiguration of the robot setting, which may result in unsafe surgery. Two representative surgical complications influenced by limited awareness of the surgical state are an acute hemorrhage and an instrument collision [6–8]. The use of additional monitoring techniques to facilitate faster awareness of the surgical state is particularly critical because an unexpected hemorrhage can occur due to a collision between the instrument and organs and lead to tissue perforation [6–8]. According to Blum et al. [9], uncontrollable bleeding from the cystic artery in cholecystectomy results in conversion from laparoscopic to open cholecystectomy. Beyond complication monitoring, computerized endoscopic video analysis that provides new types of extra-anatomical information to surgeons can be beneficial for quantitative operation analysis of the surgery and for information archiving [4]. To contribute to endoscopic video analysis, we propose a novel object labeling method using an efficient vision-based object classification algorithm framework that can be applied to various objects in robot-assisted endoscopic surgery videos.

Although automatic object detection in endoscopic videos and other medical vision modalities has been studied in various applications, research on common frameworks or universal algorithms for concurrent detection of multiple objects is limited. Furthermore, most studies pertaining to the application of vision techniques have focused only on image-based instrument tracking methods that involve physical modification of the instruments, mostly for analysis of the surgical workflow for skill evaluation during robot-assisted surgeries [10–17]. Conventional techniques for surgical instrument position estimation utilize semantic information in images through segmentation [10, 16–18] or physical modifications of the surgical instruments, such as attaching extra marker objects at the end of the instruments or adding color patterns at the tip of the instruments [19–22]. Recent methods include detection through convolutional neural networks (CNNs) [23, 24]. Given training data sets of surgical instrument images, the CNN processes and retrieves the presence and the location of the surgical instruments at the inference stage. However, a neural network algorithm is dependent on its training dataset. Additional considerations or training methods may be needed when an instrument with a different shape that was not included in the training data, or an object with a shape changing over time such as a hemorrhage, is to be detected. The proposed algorithm utilizes only color properties of the object and can be applied to both rigid and deformable objects, which we expect to provide a kind of common framework for object detection and tracking in surgical videos.

Tracking using the universal method proposed in this study can also offer better outcomes in terms of processing performance than conventional methods when various objects should be detected during robot-assisted surgery. For higher performance of position estimation in noisy and occluded environments, we chose to use a federated Kalman filter (FKF) [25], a filtering technique combining local and master filters, rather than a single Kalman filter. A combination of the proposed detection and tracking algorithms could act as a foundation for morphological change tracking of various objects in laparoscopic surgery videos.

The development of a machine vision system that enables robust segmentation and tracking of objects in laparoscopic images is a challenging task because the observed scenes are captured under time-varying lighting conditions and may have a moving background because of organ pulsation and breathing. In addition, a hemorrhage region can be temporarily occluded by the surrounding organs or the surgical instruments so that the region could be misidentified by the surgeons. To cope with these difficulties in the processing of laparoscopic images, which contain high specular noise and a nonuniform background, the proposed universal object tracking algorithm was structured with two characteristic stages: (1) feature extraction and (2) object tracking. In the first stage, features of objects in each sequential image are extracted using texture information and a similarity measure; in the second stage, the objects are tracked at the position optimally estimated by an improved Kalman filter. Thus, we propose a surgical vision system using both the adaptive filter and computer vision techniques to provide reliable, fast, and robust detection and tracking, which may enhance the robustness and the reliability of robotic laparoscopic surgery [13].

2. Materials and Methods

2.1. Algorithm Description. The method to identify objects in two-dimensional laparoscopic surgery images was based on local image features such as the texture and morphological properties of the objects. Figure 1(a) shows the concept of the proposed method. Here, we identified three main processing phases: (1) segmentation using specific texture properties of each image frame, (2) localization of the object using the correlation between image frames, and (3) optimal estimation of the current object locations using the Kalman filter.

In the first stage of the algorithm, the contrasted color signatures of the object were used to distinguish it from the rest of the scene so that the blood and the instruments could be differentiated from other organs. Under reasonable lighting conditions, the hemorrhagic blood had a color signature distinct from other body tissues and the instruments, and the surgical instruments had metallic characteristics such as gray color. After processing images to enhance the textures of the blood and the instruments, Canny edge detection and entropy filtering were applied to find a closed contour and decrease the background noise. Then, using binary edge detection, we were able to eliminate the irrelevant image details while maintaining the shape features. Finally, the segmented boundaries of each blood region and surgical instrument were marked on the original image.

If the objects to be detected are occluded by noise, or if the detection is done under time-varying light conditions, the segmentation process might fail. To make the detection robust in such situations, a template-based matching algorithm that uses the average of the regions identified over the previous 0.5 s as a template was applied simultaneously. This method was robust for tracking regions that might be neglected or falsely identified by methods using only local color features. It was expected that the combination of the color feature localization and the template matching would yield more accurate results than the use of only the former. To determine the best matching location in the test data, a template-based approach using a sum-comparing metric, the sum of squared differences (SSD), was implemented:

d(u, v) = Σ_{x,y} [f(x, y) − t(x − u, y − v)]²,   (1)

where f(·) denotes the original candidate image, t(·) denotes the template image provided after local color feature localization, and (x, y) and (u, v) denote position parameters for the former and the latter, respectively. The position with the minimum SSD value, that is, the most highly correlated position, was selected by the template matching algorithm in this paper. To improve the computation speed, we employed a calculation algorithm based on the fast Fourier transform (FFT) and downsized the data by converting the spatial-domain image into frequency-domain signals.
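The authors implemented the pipeline in MATLAB; as an illustrative sketch only (our own construction, not the authors' code), the brute-force form of the SSD matching in (1) can be written in Python/NumPy as follows. The function name and array shapes are assumptions for the example.

```python
import numpy as np

def ssd_match(frame, template):
    """Exhaustive SSD template matching, Eq. (1).

    Returns (u, v), the top-left offset of the template position
    that minimizes sum((f - t)^2) over the template support.
    """
    H, W = frame.shape
    h, w = template.shape
    best, best_uv = np.inf, (0, 0)
    for u in range(H - h + 1):
        for v in range(W - w + 1):
            patch = frame[u:u + h, v:v + w]
            d = np.sum((patch - template) ** 2)
            if d < best:
                best, best_uv = d, (u, v)
    return best_uv
```

The FFT acceleration the paper mentions follows from expanding d(u, v) = Σf² − 2Σf·t + Σt²: the cross term is a correlation, which can be computed for all offsets at once in the frequency domain instead of by the explicit double loop above.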
Figure 1: (a) Block diagram of the computational steps for various object detection, including the structure of the proposed Kalman filter for optimal estimation. (b) FKF-based hemorrhage region tracking.

2.2. Hemorrhage Recognition. In order to correctly distinguish a hemorrhage region from the environment during the segmentation process, the image contrast was enhanced using histogram equalization in each Red-Green-Blue (RGB) channel. The problem caused by the similar color characteristics of the hemorrhage and organs was resolved by increasing the color contrast of the image. Using this contrasted image, a mutually inclusive threshold technique was implemented to identify the hemorrhage. The mutually inclusive algorithm is a semantic labeling technique in which a different threshold is applied to each RGB channel, and the positively labeled pixels that overlap in all RGB channels are combined to form a target object mask image. Using this technique, blood regions were extracted.

At the final stage, following the template matching process, a federated Kalman filter (FKF) [25] was applied to the results of the segmentation and template matching processes to estimate the location of the hemorrhage region. The FKF, consisting of two local filters and one master filter, is a model-based method to estimate measurement data containing noise and other inaccuracies. The filtering technique distributes the estimation problem across local filters and a master filter, and hence reduces the calculation load and provides independent faulty-measurement detection [25]. The first part of the FKF, an estimator built from the local filters applied to the segmentation and template matching outputs, was implemented to detect and reduce tracking outliers during the operation. The outputs of the local filters were then passed to the master filter, which tracks the final location of the hemorrhage region as shown in Figure 1(b).

2.3. Surgical Instrument Recognition. Before the segmentation process, instruments and organs were classified into different categories using k-means clustering in LAB space, where L stands for luminance and A and B for the two color channels. The k-means clustering technique was used to classify the different objects based on the image intensity variances. The best results were obtained in LAB space because of its advantages for image sharpening. The histogram equalization applied in the hemorrhage detection was not needed in the surgical instrument detection because the instruments and the organs could be easily differentiated by their intensity characteristics. After a chain of image processing techniques, the output was combined with the results of the instrument motion subtraction, that is, the movement computed by subtracting interframe images. Finally, each instrument was labeled. After completing the segmentation stage, numbered labels were assigned to the instruments as each was first detected.
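The k-means step above can be sketched minimally as follows. This is our illustrative NumPy version, not the authors' MATLAB implementation; a faithful pipeline would first convert RGB pixels to LAB (for example with a color library), which is omitted here, and the function name is an assumption.

```python
import numpy as np

def kmeans_pixels(pixels, k, iters=20, seed=0):
    """Minimal k-means over an (N, 3) array of pixel color vectors.

    Returns (labels, centers). Centers are initialized from k
    randomly chosen pixels and refined by Lloyd iterations.
    """
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

With k = 2 on a laparoscopic frame, bright metallic instrument pixels and darker tissue pixels tend to fall into separate clusters, which is the separation the paper relies on before per-instrument labeling.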
Once an instrument was identi‚ed, the center location of the instrument on the image was saved and compared with the result by the Kalman ‚lter algorithm in the ‚nal stage. e Kalman ‚lter technique was used to compensate for the identi‚cation or the tracking failures due to the oc- clusion. When the both segmentation and the template 20 40 60 80 100 120 140 160 180 200 matching failed to detect the instrument correctly, the Time (frames) Kalman ‚lter functioned as an estimator of the instrument position. e gain was continually adjusted with respect to Segmentation only the calculated Euclidean distance between the center points Proposed method of the instrument in the current and the previous image Figure 2: RMSE pro‚les of the proposed method and the seg- frames. e instruments were given their lifetime when they mentation-only method. In both cases, centroids of the manually were ‚rst detected. e continual recognition of instruments traced bleeding region were used as ground truth values. increases their lifetime by 20%. When the instruments vanished and could not be recognized, the lifetime was Since the surgical instrument size normally occupied 1/8 of decreased. e lifetime of 0% stated that the instrument did an image, the centroid distance of 1/15, approximately “not exist.” In addition, the distances of each labeled location 2.8 mm in the physical unit, was reasonable. False negative and their previous locations for the instrument were cal- (FN) refers to the missing object where the object was not culated as the Euclidean distance to ‚nd the correct label. detected, but the ground truth object was present. False e proposed FKF method provided more accurate de- positive (FP) was a false alarm where an object was detected, tection and reduced the tracking failures by continuously but no ground truth objects was present. 
FP was also adjusting gains of the master Kalman ‚lter built on the prior counted when the centroid distance di�erence between the knowledge acquired from each stage of segmentation and detected object and the ground truth was over 1/15 of an template matching, than without ‚ltering. image size. e distance error, root mean square error (RMSE), 3. Results between the estimated and the reference centroids for the For this study, four videos for the laparoscopic surgical hemorrhage region and the surgical instrument were operation with resolutions of 640 × 480 pixels and computed as in (3), where [x(i), y(i)] denotes the esti- 10 frames/s in MPEG format were used. Each video of mated center position, [x (i),y (i)] denotes the reference r r approximately 20 s consisted of a sequence of the di�erent center position, and N denotes the number of hemorrhage. surgical tasks under di�erent lighting conditions, and two of e blood £ow was also analyzed with the area calculated by them contained the hemorrhage events. e processing and the number of the pixels inside the boundary of the detected the analysis steps were implemented using MATLAB blood region in each frame: (MathWorks, Natick, MA, USA). e identi‚ed object re- gion was marked at its geometric center, indicating the 2 2 (3) RMSE x(i) − x (i) + y(i) − y (i) . r r median of the detected boundary pixels, and the hemorrhage i1 area was measured by counting the pixels in the entire hemorrhage region. e ground truth data sets for the hemorrhage were prepared by manually inspecting and marking the hemorrhage region, and for the surgical in- 3.1. Hemorrhage Recognition. e average RMSE values over struments, the centers were calculated by marking their the entire time period for the segmentation-only method, boundaries in each image. 
e ground truth data were the output of the segmentation stage where the semantic prepared by a beginner medical analyst and con‚rmed by an information of an image was used to extract the hemorrhage expert surgeon. To evaluate the performance of the proposed region, and the proposed method were 5.6 and 3.6 pixels, method, recall and precision were measured: and the accuracy were 87% and 98%. e accuracy refers to TP counting the presence of the hemorrhage frames out of total Recall , frames in a video. TP + FN Figure 2 displays the RMSE pro‚les for the proposed (2) method and the segmentation-only one for an approxi- TP Precision . mately 20 s time period. Between frame numbers 100 and TP + FP 120, the segmentation-only method was unable to detect the True positive (TP) was counted when the distance of blood region that was temporarily hidden by obstructing centroids between the detected object and the ground truth objects, resulting in maximum RMSE values. Moreover, the was less than 1/15 of an image size 640 × 480, which cor- average computation time per frame was 0.89 s, which is responded to average 42 mm × 25 mm in the physical unit. adequate for processing the hemorrhage detection in reality RMSE (pixels) Journal of Healthcare Engineering 5 (a) (b) (c) (d) Figure 3: Selected frames of hemorrhage region detection and instrument detection results. In (a), (b), and (c), the left figures display the manual detection of the hemorrhage, and the right figures display the automatic detection by the proposed method: (a) both the seg- mentation-only method (red boundary) and the proposed technique (square mark) correctly detected the hemorrhage region; (b) manually undetectable hemorrhage has been detected accurately by the proposed technique using the previous frame information, but falsely segmented by the segmentation-only method; (c) the hemorrhage was not detected by the segmentation-only method, but accurately detected by the proposed method. 
(d) Sample frames of multiple instrument detection. 6 Journal of Healthcare Engineering 0 20 40 60 80 100 120 140 160 180 200 Time (frame) (a) 0 20 40 60 80 100 120 140 160 180 200 Time (frame) (b) Figure 4: Hemorrhage £ow classi‚cation by area variation analysis where the red line indicates linearity of the hemorrhage £ow and the blue line indicates the calculated the hemorrhage area: (a) stagnant hemorrhage; (b) hemorrhage £ow following stanch. when the hemorrhage does not £ow massively within 10 considered as nonpersistent bleeding and thus classi‚ed as frames, as shown in Figure 2. a nonwarning target. Figure 4(b) shows an acute hemorrhage Figure 3 shows that the FKF in the proposed method that rapidly £ows around the surgical view, which was produced a more stable RMSE pro‚le even during the oc- controlled by the surgeons after the situation awareness. e clusion period. As a result, the proposed method yielded fast increase in the ‚rst 50 frames describes rapid blood £ow lower tracking errors as well as robustness to occlusion. due to an unintended agitation. e linear decrease after frame 60 shows that the hemorrhage was treated and removed by the surgical tools such as forceps and suction. As inter- 3.2. Hemorrhage Flow Analysis. Figure 4 shows the area preted in Figure 4, the calculated area analysis depicts the variation analysis that classi‚es types of the hemorrhage into surgical events that occur during the surgery. erefore, the £owing or stagnant. e overall scheme is provided by the blood-£ow pro‚le can be used in actual surgery for surgeons red line, which is plotted by calculation of peaks of area to be warned of the state of the blood £ow and control blood increase and decrease. After several frames, this operation £ow rapidly before they become aware of the situation. can specify dangerous targets by warning signs. Figure 4(a) shows a stagnant hemorrhage with a very slow increase in blood volume due to insu¦cient stanch operation. Other 3.3. 
Surgical Instrument Recognition. e instrument tra- than frames 40 to 80 where the blood area was reduced due jectories using the three di�erent position identi‚cation to smoke and occlusions, the overall peak area tended to methods—one with only segmentation, one with the simi- keep at its level. Due to the little variation, this can be larity measure analysis added to the former, and the Area (pixel) Area (pixel) Journal of Healthcare Engineering 7 100 150 200 250 300 350 400 450 500 y position (pixel) Segmentation Proposed method Similarity measure Ground truth (a) –20 –40 –60 –80 –100 –120 –60 –40 –20 0 20 40 60 80 100 120 y position error (pixel) + Segmentation error + Similarity measure error + Kalman filter error (b) –20 –40 –60 –80 –50 0 50 100 150 200 250 y position error (pixel) + Segmentation error + Similarity measure error + Kalman filter error (c) Figure 5: Continued. x position error (pixel) x position error (pixel) x position (pixel) 8 Journal of Healthcare Engineering –50 –100 –150 –200 –250 –300 –350 –500 –400 –300 –200 –100 0 100 200 300 y position error (pixel) + Segmentation error + Similarity measure error + Kalman filter error (d) Figure 5: Instrument tracking: (a) the proposed method validation using multiple instrument path trajectories in comparison with manually traced values, x and y pixel positions of instruments 1, 2, and 3; (b) path position distance error of instrument 1; (c) path position distance error of instrument 2; (d) path position distance error of instrument 3. proposed one with the additional Kalman ‚lter—were cal- objects. With our proposed algorithm, deformable object culated in the test video and are shown in Figure 5. As the such as the hemorrhage could also be detected because we ‚gure depicts, the proposed method was the closest to the consider texture rather than the shape in object detection. identi‚ed ground truth value. 
e proposed method out- us, we developed a novel algorithm to detect and track performed the other two methods which contained noise both the hemorrhage and the surgical instruments in lap- peaks and failed to accurately detect the instrument locations. aroscopic video images taken during real robot-assisted e result of the proposed method was also quanti‚ed surgery. By extending our method, extra information and through the recall and precision measurements, where each control such as organ tracking could also be obtained. surgical instrument detection rate was 96% and 86% in the e performance of the algorithm was evaluated with the four robot-assisted laparoscopic videos. Additionally, the actual video data from a surgery. For the hemorrhage de- RMSE analysis for the instrument 1, 2, and 3 resulted in 39, tection, the segmentation-only method provided the average 15, and 74 pixels, respectively. e average RMSE of 42 pixels RMSE values of 5.6 pixels, which is about 1.5% of the image was approximately 1/16 of the image size, about 2.6 mm in the size. e percentage means that the distance error was physical unit, which was su¦cient for the instrument local- considerably small. However, this error was further reduced ization where the instrument usually took 1/8 of the image to 3.6 pixels when the proposed method was applied. e size. e instrument tracking system on a laparoscopic sur- measurement of £ow of the hemorrhage also demonstrated gery video was implemented as shown in Figure 3(d). its e�ectiveness by depicting the surgical situations, which displayed a linear increase when blood was £owing and 4. Discussion a decrease when the hemorrhage was stanched. 
From our inferenced surgical videos, the average computation time Surgeons often face di¦culty in environmental perception was approximately 1.1 Hz, which is su¦cient for the hem- during robot-assisted surgery, and this may indirectly lead to orrhage detection and the fast state recognition since the delayed maneuvering of the tools [8]. Despite the impor- blood does not £ow massively within 1 s. e process time tance of the timely detection and the management of the can also be decreased through an improvement in software incidental hemorrhage, the automatic recognition and the programming and hardware with higher computation speed. localization of objects during laparoscopic surgery have not In addition to the hemorrhage detection, the surgical yet been widely and fully studied. Most existing object instrument recognition made simultaneous multiple in- recognition methods work only with the de‚ned objects. To strument tracking without additional hardware feasible and elaborate, the algorithms that are studied to detect the resulted in the recognition rate of over 80%. Due to test surgical instruments do not normally work for the hem- dataset di�erences, recent works with the neural network orrhage detection. For example, the neural network algo- rithm would work only on the de‚ned objects that have been instrument detection [24] and our algorithm cannot directly trained, and the conventional techniques that were used for be compared. 
Figure 6: Validation of the surgical instrument position trajectories in the time domain. The plots show the x and y positions of (a) instrument 1, (b) instrument 2, and (c) instrument 3 as reported by manually traced values (dashed line) and the proposed tracking algorithm (solid line).

Compared with the CNN-based detector of Choi et al. [24], which achieved a mean average precision (mAP) of 72.26% and was limited to 8 instruments, our algorithm nevertheless showed greater extensibility of implementation and higher precision. One limitation of the current implementation is that the segmentation accuracy depends on the instrument's color and surface texture characteristics. As depicted in Figures 5(c) and 6(c), the irrigation instrument, which has holes on its surface, is prone to a relatively higher identification error; however, compared to the other instruments, it is less likely to cause accidental injuries such as grabbing or cutting tissue.

Overall, the proposed method provided more accurate detection results by using mathematical filters built on knowledge acquired from previous object locations. It outperformed the segmentation-only technique, which fails to accurately detect bleeding and instrument locations mainly because of environmental distortions such as smoke, camera motion, or organ occlusion. Analytic tools such as RMSE and area measurement showed that the proposed method can provide surgeons with information about object movement and unsafe situations that may occur during surgery.
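The idea of mathematical filters built on previous object locations can be sketched as a simple constant-velocity gating filter. This is only an illustrative stand-in for the paper's filtering stage: the 40-pixel gate and the linear extrapolation model are assumptions, not the authors' parameters.

```python
def filter_track(measurements, gate=40.0):
    """Smooth a 2D track by predicting each position from the previous two
    filtered positions (constant-velocity model) and gating measurements
    that jump farther than `gate` pixels from the prediction.

    `measurements` is a list of (x, y) detections per frame; the 40-pixel
    gate is an arbitrary illustrative value.
    """
    filtered = []
    for z in measurements:
        if len(filtered) < 2:
            filtered.append(z)  # not enough history yet: accept as-is
            continue
        (x1, y1), (x2, y2) = filtered[-2], filtered[-1]
        pred = (2 * x2 - x1, 2 * y2 - y1)  # extrapolate the last velocity
        dist = ((z[0] - pred[0]) ** 2 + (z[1] - pred[1]) ** 2) ** 0.5
        # Keep the measurement if it is consistent with the prediction;
        # otherwise fall back on the prediction (e.g., a segmentation
        # outlier caused by smoke or occlusion).
        filtered.append(z if dist <= gate else pred)
    return filtered
```

Running it on a track with one spurious detection, `filter_track([(0, 0), (1, 0), (2, 0), (300, 300), (4, 0)])`, replaces the outlier with the prediction (3, 0) and accepts the remaining points.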
5. Conclusion

In a complicated environment such as that of robot-assisted laparoscopic surgery, in which the surgeon experiences limited vision through a laparoscope, the proposed automatic object recognition function will help surgeons rapidly handle emergency situations through fast robot arm control. The system can be further extended to warn the surgeon if the hemorrhage flow occurs out of sight due to the movement of the camera, or if the surgical instruments are about to collide. Also, using the localized center points of the objects, the endoscopic camera may be able to automatically reach the surgical site in focus.

With the proposed method, additional functionality to increase the safety of robotic control and the surgical procedure was implemented without additional hardware such as extra artificial markers, specialized instruments, or separate cameras or detectors. Warnings of surgical unsafety through the automatic detection of bleeding and instrument positions will provide useful information to surgeons so that they can perform safer surgeries and reduce overall surgery time. This method will also be extended to other surgical state recognition applications in which the spatial and temporal accuracy of feature locations is critical.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This study was supported by the Future Industry Leading Technology Development Program, 10053260 (Virtual Surgery Simulator Technology Development for Medical Training), funded by the Ministry of Trade, Industry and Energy, Republic of Korea.

References

[1] C. M. Kang, D. H. Kim, W. J. Lee, and H. S. Chi, "Conventional laparoscopic and robot-assisted spleen-preserving pancreatectomy: does da Vinci have clinical advantages?," Surgical Endoscopy, vol. 25, no. 6, pp. 2004–2009, 2011.
[2] R. W. Holloway, S. D. Patel, and S. Ahmad, "Robotic surgery in gynecology," Scandinavian Journal of Surgery, vol. 98, no. 2, pp. 96–109, 2009.
[3] D. R. Uecker, C. Lee, Y. F. Wang, and Y. Wang, "Automated instrument tracking in robotically assisted laparoscopic surgery," Journal of Image Guided Surgery, vol. 1, no. 6, pp. 308–325, 1995.
[4] S. J. McKenna, H. Nait Charif, and T. Frank, "Towards video understanding of laparoscopic surgery: instrument tracking," in Proceedings of Image and Vision Computing New Zealand (IVCNZ), New Zealand, 2005.
[5] A. C. Lehman, M. E. Rentschler, S. M. Farritor, and D. Oleynikov, "The current state of miniature in vivo laparoscopic robotics," Journal of Robotic Surgery, vol. 1, no. 1, pp. 45–49, 2007.
[6] D. S. Seidman, F. Nasserbakht, F. Nezhat, and C. Nezhat, "Delayed recognition of iliac artery injury during laparoscopic surgery," Surgical Endoscopy, vol. 10, no. 11, pp. 1099–1101, 1996.
[7] C. D. McClain, S. G. Soriano, L. C. Goumnerova, P. M. Black, and M. A. Rockoff, "Detection of unanticipated intracranial hemorrhage during intraoperative magnetic resonance image-guided neurosurgery," Journal of Neurosurgery: Pediatrics, vol. 106, no. 5, pp. 398–400, 2007.
[8] W. H. Huang, B. P. Jiann, Y. H. Lee et al., "Risk factors of massive bleeding after percutaneous nephrolithotomy and its management," JTUA, vol. 14, pp. 65–71, 2003.
[9] T. Blum, H. Feussner, and N. Navab, "Modeling and segmentation of surgical workflow from laparoscopic video," in Proceedings of the 13th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 400–407, Beijing, China, September 2010.
[10] C. Doignon, P. Graebling, and M. de Mathelin, "Real-time segmentation of surgical instruments inside the abdominal cavity using a joint hue saturation color feature," Real-Time Imaging, vol. 11, no. 5-6, pp. 429–442, 2005.
[11] G. Q. Wei, K. Arbter, and G. Hirzinger, "Automatic tracking of laparoscopic instruments by color coding," in Proceedings of CVRMed/MRCAS '97: First Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery, pp. 357–366, Grenoble, France, March 1997.
[12] C. Doignon, F. Nageotte, and M. de Mathelin, "Detection of grey regions in color images: application to the segmentation of a surgical instrument in robotized laparoscopy," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3394–3399, Sendai, Japan, 2004.
[13] G. Tien, M. S. Atkins, B. Zheng, and C. Swindells, "Measuring situation awareness of surgeons in laparoscopic training," in Proceedings of the Symposium on Eye-Tracking Research & Applications (ETRA '10), pp. 149–152, Austin, TX, USA, 2010.
[14] C. Sewell, D. Morris, N. Blevins, F. Barbagli, and K. Salisbury, "Quantifying risky behaviour in surgical simulation," Studies in Health Technology and Informatics, vol. 111, pp. 451–457, 2005.
[15] A. Anand, B. S. Pathania, and G. Singh, "Conversion in laparoscopic cholecystectomy: an evaluation study," JK Science, vol. 9, pp. 171–174, 2007.
[16] A. Agustinos and S. Voros, "2D/3D real-time tracking of surgical instruments based on endoscopic image processing," in Computer-Assisted and Robotic Endoscopy (CARE 2015), X. Luo, T. Reichl, A. Reiter, and G. L. Mariottini, Eds., vol. 9515 of Lecture Notes in Computer Science, Springer, Cham, Switzerland, 2016.
[17] S. Payandeh, Visual Tracking in Conventional Minimally Invasive Surgery, CRC Press, Cleveland, OH, USA, 2016.
[18] D. Bouget, R. Benenson, M. Omran, L. Riffaud, B. Schiele, and P. Jannin, "Detecting surgical tools by modelling local appearance and global shape," IEEE Transactions on Medical Imaging, vol. 34, no. 12, pp. 2603–2617, 2015.
[19] Z. Zhou, B. Wu, J. Duan, X. Zhang, N. Zhang, and Z. Liang, "Optical surgical instrument tracking system based on the principle of stereo vision," Journal of Biomedical Optics, vol. 22, no. 6, p. 065005, 2017.
[20] L. Bouarfa, O. Akman, A. Schneider, P. Jonker, and J. Dankelman, "In-vivo real-time tracking of surgical instruments in endoscopic video," Minimally Invasive Therapy & Allied Technologies, vol. 21, no. 3, pp. 129–134, 2012.
[21] G. Wei, K. Arbter, and G. Hirzinger, "Real-time visual servoing for laparoscopic surgery: controlling robot motion with color image segmentation," IEEE Engineering in Medicine and Biology Magazine, vol. 16, no. 1, pp. 40–45, 1997.
[22] C. Loukas, V. Lahanas, and E. Georgiou, "An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training," International Journal of Medical Robotics and Computer Assisted Surgery, vol. 9, no. 4, pp. e34–e51, 2013.
[23] Z. Zhao, S. Voros, Y. Weng, F. Chang, and R. Li, "Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method," Computer Assisted Surgery, vol. 22, no. 1, pp. 26–35, 2017.
[24] B. Choi, K. Jo, S. Choi, and J. Choi, "Surgical-tools detection based on convolutional neural network in laparoscopic robot-assisted surgery," in Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 1756–1759, Seogwipo, Republic of Korea, July 2017.
[25] H. S. Kim, S. C. Bu, G. I. Jee, and C. G. Park, "An ultra-tightly coupled GPS/INS integration using federated Kalman filter," in Proceedings of the 16th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GPS/GNSS), pp. 2989–2995, Portland, OR, USA, September 2003.
Journal of Healthcare Engineering – Hindawi Publishing Corporation
Published: May 2, 2018