DE GRUYTER Current Directions in Biomedical Engineering 2022;8(1): 30-33

Markus Philipp, Anna Alperovich, Alexander Lisogorov, Marielena Gutt-Will, Andrea Mathis, Stefan Saur, Andreas Raabe, Franziska Mathis-Ullrich

Annotation-efficient learning of surgical instrument activity in neurosurgery

https://doi.org/10.1515/cdbme-2022-0008

Abstract: Machine learning-based solutions rely heavily on the quality and quantity of the training data. In the medical domain, the main challenge is to acquire rich and diverse annotated datasets for training. We propose to decrease the annotation effort and further diversify the dataset by introducing an annotation-efficient learning workflow. Instead of costly pixel-level annotations, we require only image-level labels; the remainder is covered by simulation. Thus, we obtain a large-scale dataset with realistic images and accurate ground-truth annotations. We use this dataset for the instrument activity localization task together with a student-teacher approach. We demonstrate the benefits of our workflow compared to state-of-the-art methods for instrument localization that are trained only on clinical datasets fully annotated by human experts.

Keywords: Annotation-efficient learning, neurosurgery, instrument localization, medical deep learning

*Corresponding author: F. Mathis-Ullrich: Health Robotics and Automation (IAR-HERA), Karlsruhe Institute of Technology (KIT), Karlsruhe, DE, e-mail: franziska.ullrich@kit.edu
M. Philipp: Health Robotics and Automation (IAR-HERA), KIT, Karlsruhe, DE & Carl Zeiss Meditec AG, Oberkochen, DE
A. Alperovich: Carl Zeiss AG, Oberkochen, DE
A. Lisogorov, S. Saur: Carl Zeiss Meditec AG, Oberkochen, DE
M. Gutt-Will, A. Mathis, A. Raabe: University Hospital Bern, CH

Open Access. © 2022 The Author(s), published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License.

Figure 1: (a) A neurosurgical scene (left) with surgical instrument activity as yellow overlay (right). (b) Bounding box annotation for the same scene (top) and post-processing to obtain surgical activity labels (bottom).

1 Introduction

The lack of large, annotated data is one of the main challenges in medical deep learning. This stems from the fact that the creation of such datasets is constrained by cost- and time-intensive annotations, which often require medical expertise. Annotations are especially expensive if they are on a pixel-wise level, such as segmentations or bounding boxes. To address this constraint on annotated data, annotation-efficient learning has become a relevant topic in medical deep learning [1].

We focus on the problem of localizing surgical instrument activity in neurosurgical microscope video data, see Fig. 1 (a), which is a cornerstone towards computer-assisted surgery. To train deep learning models in our prior work [2], annotators manually labelled instrument tips with bounding boxes, which we required to compute instrument activity labels, see Fig. 1 (b). Creating a medium-sized annotated dataset took hundreds of hours and many annotation rounds. Creating a large-scale dataset would require even more time and human effort. In this work, we investigate annotation-efficient learning to save annotation labour for similar problems in the future.

Contributions. We propose an annotation-efficient learning workflow for surgical instrument activity localization. We abstain from costly pixel-level bounding box annotations and resort to cheaper image-level labels, which merely require annotators to decide whether an instrument is present in a given frame. Based on these image-level annotations, we create a hybrid-synthetic data domain in which we can automatically compute instrument activity labels. In this way, we combine the advantages of human-made image-level annotations and machine-made pixel-level annotations. This approach speeds up the annotation process and diversifies the dataset with more instrument shapes and positions.
Then, we formulate a student-teacher approach to learn instrument activity localization, where we use our hybrid-synthetic data domain as a proxy to guide the student. While we achieve competitive results compared to a model trained on a dataset based on costly manual bounding box annotations, our approach saves ~75% of the annotation work.

1.1 Related work

Current approaches to surgical instrument localization address annotation efficiency in different ways: [3] boost instrument segmentation through self-supervised pre-training on unlabelled surgical data. [4] follow a different approach and apply weak supervision to simplify the annotation labour from segmentation level to stripe level. Image-to-image techniques are leveraged in [5] for style transfer between labelled and unlabelled datasets. On the other hand, [6] use domain adaptation to combine rendered, synthetic laparoscopic data [7] with unlabelled clinical data. However, applying such an approach to the neurosurgical domain is currently impossible, since no synthetic dataset such as [7] is available.

We build upon [6] and introduce a hybrid-synthetic data domain. We refer to hybrid-synthetic data as a mixture of real-world clinical background images and synthetic instruments overlaid as foreground. Hybrid-synthetic data tackles various challenges of purely synthetic data: (1) no complex surgical scene/anatomy modelling is required, (2) high variability can be achieved easily by exchanging the background, and (3) a realistic appearance due to real-world clinical backgrounds and thus a smaller domain gap to the real-world clinical test domain.

2 Method

We consider the problem of predicting surgical instrument activity as a 16 × 9 saliency map Q = (p_ij), where p_ij describes the probability of an instrument tip in image region (i, j), see Fig. 1 (a). Our goal is to train a model θ that can infer instrument activity Q for an input image I. Thereby, I comes from a real-world clinical domain D_clinical.

To train θ in a supervised fashion as in [2], one needs clinical training data {I, Q}_clinical with I ∈ D_clinical, consisting of images I with corresponding reference labels Q. To create Q, bounding box annotations are needed (Fig. 1).

Our method avoids the need to manually label bounding boxes. Instead, we use cheaper image-level annotations, created by asking annotators whether they see a surgical instrument in the frame. We employ these image-level annotations to design a hybrid-synthetic domain D_hybrid, which we define such that we can automatically compute Q. This allows us to leverage the benefits of human-made image-level annotations and machine-made pixel-level annotations. Finally, we take labelled data from D_hybrid and unlabelled data from D_clinical to learn instrument localization based on a student-teacher approach.

In summary, our method consists of three steps: (a) conduct image-level annotations, (b) based on them, create hybrid-synthetic data, and (c) train a model θ using a student-teacher approach. We give an overview of our method in Fig. 2 and describe its steps in more detail in the following sections.

Figure 2: (a) Annotators classify if clinical video frames contain a surgical instrument or not (image-level annotation). For better visibility, we mark instrument tips with a white arrow. (b) Frames without instruments are used to create hybrid-synthetic data. We overlay synthetic instruments as foreground and compute the according label Q*. (c) The student-teacher learning incorporates annotated hybrid-synthetic data and unlabeled clinical data.
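To make the 16 × 9 saliency-map representation from the problem formulation concrete, the following minimal sketch shows one plausible way to turn a single instrument tip position into a normalized activity map Q. The function name, the Gaussian spread, and the normalization are illustrative assumptions and not the exact post-processing used in [2].

```python
import numpy as np

def tip_to_saliency(tip_xy, image_size, grid=(16, 9), sigma=1.0):
    """Map one instrument tip position to a 16 x 9 saliency map Q.

    tip_xy:     (x, y) pixel coordinates of the instrument tip
    image_size: (width, height) of the video frame in pixels
    grid:       (columns, rows) of the activity map
    sigma:      assumed Gaussian spread in grid cells (illustrative choice)
    """
    cols, rows = grid
    w, h = image_size
    # Tip position expressed in grid coordinates
    gx = tip_xy[0] / w * cols
    gy = tip_xy[1] / h * rows
    # Gaussian blob centred on the tip, evaluated at the grid-cell centres
    jj, ii = np.meshgrid(np.arange(cols) + 0.5, np.arange(rows) + 0.5)
    q = np.exp(-((jj - gx) ** 2 + (ii - gy) ** 2) / (2 * sigma ** 2))
    # Normalize so the map sums to 1, matching the convention of the SIM metric
    return q / q.sum()

# Example: a tip near the centre of a 1920 x 1080 frame
Q = tip_to_saliency((980, 560), (1920, 1080), sigma=1.0)
print(Q.shape)  # (9, 16): rows x columns
print(Q.sum())  # ~1.0
```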
2.1 Image-level annotations

For our image-level annotations, annotators classify whether surgical instruments are present in a frame or not. Our observations show that such image-level annotations take only approx. 25% of the time required for bounding box annotations. We assume that such image-level annotations are available for a set of real-world clinical images {I}_clinical ⊆ D_clinical. Based on this presence/absence annotation, we divide {I}_clinical into two subsets: frames with instruments and frames without instruments.

2.2 Hybrid-synthetic neurosurgical data

Our goal is to synthesize neurosurgical training data for which we can obtain saliency labels Q* at no additional annotation cost. We achieve this by creating a hybrid-synthetic data domain D_hybrid. Based on the image-level labels, we design D_hybrid such that we can automatically compute the saliency labels Q*.

We create hybrid-synthetic neurosurgical data using our framework presented in [8]. It uses the 3D animation software Blender to render neurosurgical scenes in two steps: (1) generate a random geometric constellation of neurosurgical instruments, (2) underlay a single neurosurgical microscope image as background. Fig. 2 (b) illustrates the creation of such hybrid-synthetic data. For details, see [8].

We employ the image-level annotations to design D_hybrid such that we can automatically compute Q*: As background images, we take the frames annotated as instrument-free, as they do not contain surgical instruments already. When overlaying synthetic instruments on top of these background images, the synthetic instruments are the only instruments in the rendered image. Consequently, we can automatically compute the labels Q* from the positions of the synthetic instruments in Blender. We render a dataset {I, Q*}_hybrid and show samples from it in Fig. 3 (a). We provide real clinical images for reference in Fig. 3 (b).

Figure 3: (a) Different samples of our hybrid-synthetic dataset. Upper row: generated images, lower row: labels Q*. (b) Different samples from the clinical data. Comparing (a) and (b) confirms the realism of our hybrid-synthetic data.
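The actual dataset is rendered with Blender as described in [8]. As a rough illustration of the idea only, the sketch below composites a pre-rendered synthetic instrument (with alpha channel and known tip position) over an instrument-free clinical background frame and derives a label Q* from the tip position. The function name, the alpha-compositing, and the one-hot label form are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def composite_hybrid_sample(background_rgb, instrument_rgba, tip_xy):
    """Overlay a rendered instrument (RGBA) on an instrument-free clinical
    frame and compute the saliency label Q* from the known tip position."""
    alpha = instrument_rgba[..., 3:4]          # H x W x 1 opacity of the render
    fg_rgb = instrument_rgba[..., :3]
    # Standard alpha compositing: synthetic foreground over clinical background
    image = alpha * fg_rgb + (1.0 - alpha) * background_rgb

    h, w = background_rgb.shape[:2]
    cols, rows = 16, 9
    # Label Q*: mark the grid cell containing the synthetic tip and normalize;
    # a Gaussian blob (as in the earlier sketch) would work equally well
    q_star = np.zeros((rows, cols))
    q_star[int(tip_xy[1] / h * rows), int(tip_xy[0] / w * cols)] = 1.0
    return image, q_star

# Example with random stand-in arrays (real inputs would come from Blender
# renders and instrument-free clinical frames)
bg = np.random.rand(1080, 1920, 3)
fg = np.zeros((1080, 1920, 4))
fg[500:560, 900:1000, :] = 0.8                 # crude rectangular "instrument"
img, q_star = composite_hybrid_sample(bg, fg, tip_xy=(1000, 530))
```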
2.3 Student-Teacher-Learning

Our goal is to train a model θ for the clinical domain D_clinical. Building upon [6], we formulate a student-teacher task for the regression problem of instrument activity localization. Our approach combines supervised training on the labelled hybrid-synthetic domain D_hybrid with domain adaptation to the unlabelled clinical domain D_clinical through consistency learning. In this way, two networks - a student network θ_student and a teacher network θ_teacher with identical architecture - interact in a two-step cycle:

Step 1. The teacher network receives an input image I sampled from {I}_clinical and predicts Q_pred-teacher. As the true label Q is unknown, Q_pred-teacher serves as a pseudo-label for the student. The student is trained on two tasks simultaneously:

(a) Consistency learning: The goal of the consistency loss is to make the student familiar with variations in the clinical domain D_clinical, which are simulated by data augmentations. The same image I is pixel-wise perturbed (see Fig. 2 (c)) to obtain Ĩ, which is given as input to the student. Since the pixel-wise perturbations have no effect on Q, the student's output Q_pred-student should match Q_pred-teacher. This is enforced by a consistency loss:

L_consistency = ‖ Q_pred-student − Q_pred-teacher ‖    (1)

(b) Supervised task learning: To focus the student on the instrument activity localization task, the student is trained on the labelled hybrid-synthetic data {I, Q*}_hybrid with:

L_supervised = ‖ Q* − Q_pred-student ‖    (2)

where Q_pred-student is the student's prediction. The student's weights θ_student are updated based on the combined loss as in [6],

L_student = α(t) · L_consistency + L_supervised    (3)

with a weighting factor α(t). The weighting factor is increased throughout the training to shift the student's focus from D_hybrid to D_clinical. We use the same loss function for the supervised training as for the consistency training to allow a smooth transition between the tasks with increasing α(t).

Step 2. We update the teacher by an exponential moving-average filter as in [6]:

θ_teacher = 0.95 · θ_teacher + 0.05 · θ_student    (4)

By repeating this two-step cycle, the student benefits from an improved teacher and thus from better pseudo-labels.
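A minimal PyTorch-style sketch of one iteration of this two-step cycle is given below. The function name, the L1 norm for Eqs. (1)-(2), the perturbation function, and the way α(t) is passed in are placeholder assumptions; the actual architecture, perturbations, and α(t) schedule used in the paper are listed in Section 3.

```python
import copy
import torch
import torch.nn.functional as F

def train_step(student, teacher, optimizer, clinical_img, hybrid_img, q_star,
               alpha_t, perturb):
    """One sketched iteration of the student-teacher cycle (Eqs. 1-4).

    clinical_img: unlabelled clinical frame(s); hybrid_img, q_star: labelled
    hybrid-synthetic batch; alpha_t: current weighting factor alpha(t);
    perturb: pixel-wise augmentation (e.g. colour jitter or noise).
    """
    # Step 1a: the teacher predicts a pseudo-label on the clean clinical image
    with torch.no_grad():
        q_pred_teacher = teacher(clinical_img)
    # The student sees a pixel-wise perturbed version of the same image
    q_pred_student = student(perturb(clinical_img))
    loss_consistency = F.l1_loss(q_pred_student, q_pred_teacher)   # Eq. (1)

    # Step 1b: supervised task learning on hybrid-synthetic data
    loss_supervised = F.l1_loss(student(hybrid_img), q_star)       # Eq. (2)

    # Combined loss with ramped weighting factor alpha(t), Eq. (3)
    loss = alpha_t * loss_consistency + loss_supervised
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Step 2: exponential moving-average update of the teacher, Eq. (4)
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(0.95).add_(0.05 * p_s)
    return loss.item()

# The teacher starts as a copy of the student and receives no gradient updates:
# teacher = copy.deepcopy(student)
# for p in teacher.parameters(): p.requires_grad_(False)
```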
3 Experiments

We first describe our experimental setup and then explain the baselines that serve as a comparison to our method.

Data. We use the NeuroSurg dataset introduced in [2], for which both instrument presence/absence labels and instrument activity labels are available. We ignore the instrument activity labels for training our method and rely only on the instrument presence/absence labels. We consider six neurosurgical cases for training purposes and test on six independent cases [2].

Evaluation. We use the SIM metric, a standard metric in the saliency literature: SIM = ∑ min(Q_pred, Q_ref), where ∑ Q_pred = ∑ Q_ref = 1.
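As a small illustration of this metric (not the authors' evaluation code), SIM can be computed as a histogram intersection of the two maps; the helper name and the explicit normalization with a small eps are assumptions, while the formula itself follows the definition above.

```python
import numpy as np

def sim_metric(q_pred, q_ref, eps=1e-12):
    """Similarity (SIM) between a predicted and a reference saliency map.

    Both maps are normalized to sum to 1, then SIM is the sum of the
    element-wise minima; SIM = 1 for identical maps, 0 for no overlap.
    """
    p = q_pred / (q_pred.sum() + eps)
    r = q_ref / (q_ref.sum() + eps)
    return float(np.minimum(p, r).sum())

# Example on random 9 x 16 maps
q_a = np.random.rand(9, 16)
q_b = np.random.rand(9, 16)
print(sim_metric(q_a, q_a))  # 1.0 for identical maps
print(sim_metric(q_a, q_b))  # < 1.0 otherwise
```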
Hybrid-synthetic data. Based on the image-level annotations for the six training surgeries, we generate a dataset {I, Q*}_hybrid with 20,000 training images.

Student-Teacher-Learning implementation. We use the CNN architecture from [2] and re-implement the perturbations as in [6]. Initial learning rate lr = 0.01 (reduced by 0.5 every 50 epochs), number of epochs = 300, batch size = 25, α(t) = {0 for t ≤ 10, linear increase to 1 for t ∈ [11, 50], 1 for t ≥ 50}.

Baselines. We investigate the benefit of annotating the training data with bounding boxes as in Fig. 1. We also explore the advantages of the student-teacher approach in contrast to mere supervised training on the hybrid-synthetic data {I, Q*}_hybrid. We compare our method with several baselines: (1) We use the fully supervised model from [2], which we refer to as Clinical6-FS. (2) To investigate the effect of training dataset size, we train a supervised baseline, Clinical2-FS, on only two training cases of NeuroSurg (~31% of the training data of Clinical6-FS). (3) We train another baseline, Hybrid-FS, on {I, Q*}_hybrid using the same conditions as in [2].

Table 1: We test our method and the baselines on the six test cases (Case No. 1 - 6) from NeuroSurg. We report mean SIM values and standard deviations for the six test cases and mark the best mean SIM values per case in bold.

              Case No. 1   Case No. 2   Case No. 3   Case No. 4   Case No. 5   Case No. 6
Our method    0.76±0.14    0.75±0.17    0.75±0.16    0.67±0.20    0.77±0.12    0.67±0.18
Clinical6-FS  0.83±0.11    0.81±0.15    0.78±0.12    0.72±0.18    0.78±0.12    0.72±0.13
Clinical2-FS  0.72±0.17    0.67±0.19    0.68±0.16    0.63±0.20    0.71±0.15    0.68±0.16
Hybrid-FS     0.67±0.22    0.69±0.25    0.71±0.21    0.59±0.26    0.73±0.17    0.58±0.24

4 Results

We compare the performance of our annotation-efficient learning method with the baseline methods in Tab. 1.

Our annotation-efficient learning method achieves a competitive performance compared to the baseline Clinical6-FS. It performs close to on-par on some test cases and slightly worse on the remaining cases. Our method even outperforms the supervised baseline Clinical2-FS on five of six test cases.

Next, we compare Hybrid-FS and Clinical6-FS. Although Hybrid-FS never saw real instruments (only real clinical backgrounds) during training, it consistently achieves more than 80% of the performance of Clinical6-FS. This supports our claim that our hybrid-synthetic data are highly realistic.

Finally, we compare our method with Hybrid-FS. Our method outperforms Hybrid-FS on all test cases. Despite the already good performance of Hybrid-FS on the test data, we still gain benefits from the student-teacher approach.

5 Conclusions

We leverage hybrid-synthetic data and a student-teacher learning approach for annotation-efficient learning of surgical instrument activity. Our approach replaces effort- and cost-intensive bounding box annotations with simpler and cheaper image-level annotations. We demonstrate how to generate a realistic-looking large-scale synthetic dataset for training by successfully combining human-made and machine-made annotations. While we achieve a competitive performance compared to state-of-the-art supervised learning based on bounding box annotations, we save up to 75% of the annotation effort.

Author Statement
Research funding: The authors state no funding is involved. Conflict of interest: The authors state no conflict of interest. Informed consent: Informed consent has been obtained from all individuals included in this study. Ethical approval: The research related to human use complies with all relevant national regulations and institutional policies and was performed in accordance with the tenets of the Helsinki Declaration.

References
[1] Tajbakhsh, N. et al. (2021). Guest Editorial Annotation-Efficient Deep Learning: The Holy Grail of Medical Imaging. IEEE T-MI, 40(10), 2526-2533.
[2] Philipp, M. et al. (2021). Localizing Neurosurgical Instruments Across Domains and in the Wild. MIDL 2021.
[3] Ross, T. et al. (2018). Exploiting the potential of unlabeled endoscopic video data with self-supervised learning. Int J CARS, 13, 925-933.
[4] Fuentes-Hurtado, F. et al. (2019). EasyLabels: weak labels for scene segmentation in laparoscopic videos. Int J CARS, 14, 1247-1257.
[5] Kalia, M. et al. (2021). Co-Generation and Segmentation for Generalized Surgical Instrument Segmentation on Unlabelled Data. MICCAI 2021.
[6] Sahu, M. et al. (2021). Simulation-to-real domain adaptation with teacher-student learning for endoscopic instrument segmentation. Int J CARS, 16, 849-859.
[7] Pfeiffer, M. et al. (2019). Generating Large Labeled Data Sets for Laparoscopic Image Processing Tasks Using Unpaired Image-to-Image Translation. MICCAI 2019.
[8] Philipp, M. et al. (2021). Synthetic data generation for optical flow evaluation in the neurosurgical domain. Curr. Dir. Biomed. Eng., 7(1), 67-71.