DE GRUYTER
Current Directions in Biomedical Engineering 2022;8(1): 121–124

Abhishek Dinkar Jagtap, Mattias Heinrich, and Marian Himstedt*

Automatic Generation of Synthetic Colonoscopy Videos for Domain Randomization

https://doi.org/10.1515/cdbme-2022-0031

Abhishek Dinkar Jagtap, Mattias Heinrich, Marian Himstedt: Institute of Medical Informatics, University of Lübeck, Germany; e-mail: marian.himstedt@uni-luebeck.de

Open Access. © 2022 The Author(s), published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License.

Abstract: An increasing number of colonoscopic guidance and assistance systems rely on machine learning algorithms which require a large amount of high-quality training data. In order to ensure high performance, this data has to cover a substantial portion of the possible configurations. This particularly concerns varying anatomy, mucosa appearance and image sensor characteristics, which are likely deteriorated by motion blur and inadequate illumination. The limited amount of readily available training data makes it difficult to account for all of these configurations, which results in reduced generalization capabilities of machine learning models. We propose an exemplary solution for synthesizing colonoscopy videos with substantial appearance and anatomical variations, which enables learning discriminative domain-randomized representations of the interior colon while mimicking real-world settings.

Keywords: Simulation, Colonoscopy, Domain randomization

Fig. 1: Examples of synthetic colonoscopy images.

1 Introduction

A common problem faced in machine learning is the lack of sufficient training data. For colonoscopy, the majority of readily available public image data is limited to individual frames or short sequences for benchmarking CAD-based polyp detection. Public colonoscopy videos of the entire colon structure are limited to rather low-quality capsule endoscopy footage. The lack of ground truth camera poses further hampers the training of models for applications beyond polyp detection, such as anatomical segment classification, visual place recognition (VPR), simultaneous localization and mapping (SLAM) and structure from motion (SfM). These applications require high-quality colonoscopy videos of entire examinations covering all phases of the intervention.

A common solution to this is the rendering of virtual endoscopy (VE) videos based on CT colonography data. VE provides both image sequences and ground truth poses of varying anatomy, but (without further processing) differs substantially from the visual appearance of real colonoscopy images. This entails gaps that have to be addressed by proper domain adaptation methods, as demonstrated in [1]. This, however, implies that the synthesized images resemble the colonoscopy images (and their anatomical locations) of small datasets, which likely do not generalize well to unseen or rarely observed colon regions. Domain randomization, in contrast, utilizes a large amount of data randomly sampled over the entire configuration space, with the variables being carefully predefined. It is important to note that domain randomization is practically applicable only to simulated data, as some of the parameters, such as textures, materials, occlusions and coat masks, have to be properly controlled. The simulated environment therefore has to be more elaborate than for plain VE image generation in order to achieve a visual appearance close to real colonoscopy images (see Fig. 1).

Powerful engines such as Unity have gained particular interest in the computer vision and robotics communities [2, 3], but have rarely been investigated in medical imaging [4, 5]. Given sufficient simulation capabilities, models can be trained solely on domain-randomized data while still achieving high generalization performance for inference on real-world test data.

This paper presents an exemplary implementation of domain randomization for colonoscopy with all required algorithmic components (see Fig. 2). It builds upon prior work [4] and supplements the latter by automated domain-randomized video recording through following waypoints along the interior colon's centerline. Software components and video material will be made publicly available upon publication.

Fig. 2: Overview of the utilized processing pipeline for generating synthetic images.

2 Material and Methods

2.1 Colon segmentation

At first, a CT colonography (CT with radiocontrast material) obtained from TCIA is imported into 3D Slicer for semi-automatic colon segmentation, which is carried out as follows. A ROI around the colon is set manually and its image content is thresholded. Subsequently, we apply region-based segmentation on the (thresholded) mask to further delineate the colon structure. The segmentation mask is manually curated to ensure optimal results for the successive steps.
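The thresholding and region-based steps are performed interactively in 3D Slicer; as a rough illustration of the underlying operations, the following sketch reproduces them with SimpleITK. The HU window, the largest-component heuristic and the smoothing radius are assumptions for illustration, not the paper's exact settings, and ROI placement as well as the manual curation are omitted.

```python
# Rough SimpleITK sketch of the thresholding and region-based steps.
# Assumptions: HU window, largest-component heuristic, closing radius.
import SimpleITK as sitk

def segment_colon(ct_path, lower_hu=-1024, upper_hu=-800):
    image = sitk.ReadImage(ct_path)

    # Threshold: the insufflated colon lumen appears as air in the CT.
    mask = sitk.BinaryThreshold(image, lowerThreshold=lower_hu,
                                upperThreshold=upper_hu,
                                insideValue=1, outsideValue=0)

    # Region-based refinement: keep the largest connected air component,
    # which delineates the colon against other air pockets.
    labels = sitk.RelabelComponent(sitk.ConnectedComponent(mask),
                                   sortByObjectSize=True)
    colon = labels == 1

    # Light morphological closing to smooth the wall before meshing.
    return sitk.BinaryMorphologicalClosing(colon, [2, 2, 2])

# mask = segment_colon("ct_colonography.nii.gz")
# sitk.WriteImage(mask, "colon_mask.nii.gz")
```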
2.2 Centerline extraction

For automated image collection we require an appropriate camera path through the interior colon. For this purpose, we estimate the centerline within the colon structure based on the prior work of [6]. The key idea is to plan an obstacle-free (w.r.t. the colon wall) path from the anus (colon entry) to the caecum. Since the intuitive approach based on shortest path estimation tends to get too close to corners in turns, Wan et al. propose to explicitly incorporate the inverse map of distances to the colon wall [6], which was demonstrated to achieve optimal results, with paths being exactly centered. Subsequently, we sample equidistant waypoints along the extracted centerline which will be utilized within the simulation. Currently, we manually pick the start and end points for the centerline extraction; this step could, however, be replaced by automatic anatomical landmark prediction through heatmap regression.
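A minimal sketch of this centering idea, assuming a NumPy/SciPy/scikit-image environment: the cost of traversing a voxel is the inverse of its distance to the colon wall, so a minimum-cost path between the manually picked start and end voxels stays in the lumen center. Function names and the waypoint spacing are illustrative; the paper follows Wan et al. [6] rather than this exact formulation.

```python
# Sketch of inverse-distance-weighted centerline extraction [6-style]
# and equidistant waypoint sampling. Names/values are illustrative.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.graph import route_through_array

def extract_centerline(colon_mask, start_voxel, end_voxel, eps=1e-3):
    # Distance of every interior voxel to the nearest colon wall voxel.
    dist = distance_transform_edt(colon_mask)
    # Inverse distance map: cheap in the center, expensive near the
    # wall, practically impassable outside the segmentation.
    cost = np.where(colon_mask > 0, 1.0 / (dist + eps), 1e9)
    path, _ = route_through_array(cost, start_voxel, end_voxel,
                                  fully_connected=True)
    return np.asarray(path)

def sample_waypoints(centerline, spacing=10.0):
    # Resample the voxel path into approximately equidistant waypoints.
    steps = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(steps)])
    targets = np.arange(0.0, arclen[-1], spacing)
    return centerline[np.searchsorted(arclen, targets)]
```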
2.3 3D model preparation

Next, the colon segmentation is imported into Blender for texture mapping. Generally, a mesh is created surrounding the organ that can be edited along the vertices of the object. This mesh allows UV mapping, a method for projecting a 3D model surface onto a 2D plane for texture mapping. The UV editing tool in Blender offers the possibility of unwrapping the 3D object onto a 2D plane where textures can be applied seamlessly throughout the region of the colon. This texture gives a realistic pattern to the object. Default shaders in Blender enable changing material properties corresponding to the colon, such as surface IOR, specular tint and anisotropy, to further enhance the realism.
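The paper performs these steps interactively in Blender's UV editing and shading workspaces; the following bpy sketch only illustrates how the same preparation could be scripted. File names and shader values are placeholders, and the operator and input names assume a recent Blender release.

```python
# bpy sketch of the Blender preparation step (placeholder file names,
# illustrative shader values; operator names from recent Blender).
import bpy

# Import the colon surface mesh exported from the segmentation.
bpy.ops.wm.obj_import(filepath="colon_mesh.obj")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# UV-unwrap so a 2D mucosa texture can be projected onto the surface.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')

# Principled BSDF material: mucosa texture plus colon-like properties
# (IOR, anisotropy) to enhance realism.
mat = bpy.data.materials.new("Mucosa")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("mucosa_texture.png")
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
bsdf.inputs["IOR"].default_value = 1.38        # soft-tissue-like value
bsdf.inputs["Anisotropic"].default_value = 1.0
obj.data.materials.append(mat)
```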
2.4 Photorealistic rendering

The 3D model prepared in Blender is subsequently imported into Unity, which provides high-definition render pipelines for our simulation environment that can produce photorealistic colonoscopy images. This engine is commonly used for game development and has drawn particular interest in computer vision research as a powerful graphical simulation platform for generating synthetic images. Using Unity, we are able to synthesize images in which parameters such as lighting, materials, occlusions, transparency and coat mask are altered to achieve a more realistic appearance. These parameters are carefully selected such that real-world characteristics are optimally mimicked. As a starting base, we utilize parts of the VR-Caps project, which simulates a capsule endoscopic camera within Unity [4]. A 3D model of this capsule with predefined attributes of an attached camera is placed inside the colon and used for data collection. Adjusting these parameters is crucial both for mimicking real endoscopy and for augmenting the data. Tab. 1 shows the camera parameters and post-processing effects required to achieve a fully synthetic model of the colon. For potential navigation tasks it is possible to additionally store corresponding depth images.

Attribute                  Value
Surface Metallic           0.3
Surface Smoothness         0.7
Lens Intensity             0.1
Chromatic Aberration       0.5
Coat Mask                  0.435
Camera's Field of View     91.375
Focal Length               159.45
ISO                        200
Aperture                   16
Anisotropy                 1

Tab. 1: Camera parameters and post-processing effects.
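Tab. 1 lists one nominal configuration; for domain randomization, such parameters are re-sampled per traversal or per frame. A small sketch of what this could look like follows. The sampling ranges around the nominal values are assumptions for illustration; Tab. 1 only provides single values.

```python
# Illustrative randomization of the Tab. 1 parameters: each traversal
# (or frame) draws a new configuration. Ranges are assumptions.
import random

PARAMETER_RANGES = {
    "surface_metallic":     (0.1, 0.5),     # nominal 0.3
    "surface_smoothness":   (0.5, 0.9),     # nominal 0.7
    "lens_intensity":       (0.05, 0.2),    # nominal 0.1
    "chromatic_aberration": (0.3, 0.7),     # nominal 0.5
    "coat_mask":            (0.3, 0.6),     # nominal 0.435
    "field_of_view":        (85.0, 100.0),  # nominal 91.375
    "iso":                  (100.0, 400.0), # nominal 200
    "aperture":             (8.0, 22.0),    # nominal 16
}

def sample_configuration(rng=random):
    """Draw one rendering configuration from the predefined ranges."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in PARAMETER_RANGES.items()}
```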
2.5 Automated video rendering

Manually collecting data for endoscopy becomes highly time-consuming when creating synthetic datasets comprising all the required variations and diversity. For domain randomization we need to record sequences of images, each time with different textures and materials, which entails a substantial individual setup. Thus, we introduce an approach for automating the data collection, which allows us to collect numerous samples inside the colon with different parameters. For this purpose we make use of the scripting API offered by Unity, which gives access to the simulation environment and interactive components via executable scripts. Firstly, the simulated capsule is introduced into the colon and then automatically steered along the waypoints of the centerline (see Fig. 4). The Unity engine is set up such that it enables smooth camera motion while following the waypoints. Our path-following script consists of two parts: an initialization function which runs all required initial setups (parameter setup, initial capsule pose) and an update function which constantly controls the movement of the capsule along the waypoints and triggers actions such as changing parameters (e.g. lighting, texture). All images captured by the capsule's camera are recorded. The parameters can either be adjusted on the fly, allowing images to be captured at the same pose under varying conditions, or alternatively the parameter set can be altered only between entire traversals. Unity also enables configuring the capsule's speed, the camera's field of view and the targeted frame rate (FPS).
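The actual path-following script is a C# Unity component; the Python sketch below merely mirrors the described two-part structure, an initialization step plus a per-frame update, with hypothetical names.

```python
# Mirror of the described init/update structure of the path-following
# script. The real implementation is a Unity C# component; all names
# here are hypothetical.
import numpy as np

class CapsuleController:
    def __init__(self, waypoints, speed=1.0, tolerance=0.01):
        # Initialization: parameter setup and initial capsule pose.
        self.waypoints = np.asarray(waypoints, dtype=float)  # (N, 3)
        self.speed = speed            # scene units per second
        self.tolerance = tolerance    # arrival radius per waypoint
        self.position = self.waypoints[0].copy()
        self.next_idx = 1

    def update(self, dt, on_waypoint=None):
        # Per-frame update: move smoothly towards the next waypoint and
        # trigger actions on arrival (e.g. randomize lighting/texture,
        # record the camera frame at the current pose).
        if self.next_idx >= len(self.waypoints):
            return False  # traversal finished
        target = self.waypoints[self.next_idx]
        offset = target - self.position
        dist = float(np.linalg.norm(offset))
        if dist < self.tolerance:
            if on_waypoint is not None:
                on_waypoint(self.next_idx)
            self.next_idx += 1
        else:
            self.position += offset / dist * min(self.speed * dt, dist)
        return True
```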
3 Results and Discussion

We evaluate our simulation qualitatively based on image renderings for varying parameters; the effect becomes particularly apparent when randomizing the surface material and textural patterns. This is illustrated by Fig. 3, which shows different renderings from the same patient's CT and the same camera pose inside the colon. Fig. 4 shows an example of an extracted centerline and the generated waypoints being followed for automated video recording. For comparison, Fig. 5 shows exemplary real and synthetic images.

Fig. 3: Synthesized, domain-randomized images captured at the same pose inside the colon. Textures are obtained from random patterns as well as from synthetic patterns mimicking mucosa appearance.

Fig. 4: Path following the centerline of the colon. The green line visualizes the path and the red circles the waypoints being traced by the simulated capsule.

Fig. 5: Comparison of synthetic (top) and real (bottom) colonoscopy images.

4 Conclusion

This paper presented a pipeline for generating synthetic colonoscopy videos that can be used to improve the training of deep learning models. By controlling environment properties (e.g. texture, reflectance) as well as virtual camera properties (e.g. lighting), we are able to simulate conditions that are observed at inference time but hardly ever present in training data, which is particularly the case for small-scale datasets. Inspired by the substantial improvements reported for computer vision and robotics applications and by the limited prior work (VR-Caps) [4], we advocate domain-randomized synthesization for video colonoscopy. In future work, we will incorporate this additional data for training deep learning-based approaches to SfM, SLAM and 3D reconstruction. In order to further simplify the variation of patient anatomy, we will investigate (fully) automatic segmentation of the colon in CT scans as well as an alternative to the 3D model preparation in Blender.

References

[1] S. Mathew, S. Nadeem, S. Kumari, and A. Kaufman, "Augmenting colonoscopy using extended and directional CycleGAN for lossy image translation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 4696–4705.
[2] S. Borkman, A. Crespi, S. Dhakad, S. Ganguly, J. Hogins, Y.-C. Jhang, M. Kamalzadeh, B. Li, S. Leal, P. Parisi et al., "Unity Perception: Generate synthetic data for computer vision," arXiv preprint arXiv:2107.04259, 2021.
[3] J. Tremblay, A. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, "Training deep networks with synthetic data: Bridging the reality gap by domain randomization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 969–977.
[4] K. Incetan, I. O. Celik, A. Obeid, G. I. Gokceler, K. B. Ozyoruk, Y. Almalioglu, R. J. Chen, F. Mahmood, H. Gilbert, N. J. Durr, and M. Turan, "VR-Caps: A virtual environment for capsule endoscopy," 2020.
[5] B. Billot, D. N. Greve, O. Puonti, A. Thielscher, K. Van Leemput, B. Fischl, A. V. Dalca, and J. E. Iglesias, "SynthSeg: Domain randomisation for segmentation of brain MRI scans of any contrast and resolution," arXiv preprint arXiv:2107.09559, 2021.
[6] M. Wan, Z. Liang, Q. Ke, L. Hong, I. Bitter, and A. Kaufman, "Automatic centerline extraction for virtual colonoscopy," IEEE Transactions on Medical Imaging, vol. 21, no. 12, pp. 1450–1460, 2002.