Get 20M+ Full-Text Papers For Less Than $1.50/day. Subscribe now for You or Your Team.

Learn More →

Segmentation-based registration of ultrasound volumes for glioma resection in image-guided neurosurgery

Segmentation-based registration of ultrasound volumes for glioma resection in image-guided... Purpose In image-guided surgery for glioma removal, neurosurgeons usually plan the resection on images acquired before surgery and use them for guidance during the subsequent intervention. However, after the surgical procedure has begun, the preplanning images become unreliable due to the brain shift phenomenon, caused by modifications of anatomical structures and imprecisions in the neuronavigation system. To obtain an updated view of the resection cavity, a solution is to collect intraoperative data, which can be additionally acquired at different stages of the procedure in order to provide a better under - standing of the resection. A spatial mapping between structures identified in subsequent acquisitions would be beneficial. We propose here a fully automated segmentation-based registration method to register ultrasound (US) volumes acquired at multiple stages of neurosurgery. Methods We chose to segment sulci and falx cerebri in US volumes, which remain visible during resection. To automati- cally segment these elements, first we trained a convolutional neural network on manually annotated structures in volumes acquired before the opening of the dura mater and then we applied it to segment corresponding structures in different surgical phases. Finally, the obtained masks are used to register US volumes acquired at multiple resection stages. Results Our method reduces the mean target registration error (mTRE) between volumes acquired before the opening of the dura mater and during resection from 3.49 mm (± 1.55 mm) to 1.36 mm (± 0.61 mm). Moreover, the mTRE between volumes acquired before opening the dura mater and at the end of the resection is reduced from 3.54 mm (± 1.75 mm) to 2.05 mm (± 1.12 mm). Conclusion The segmented structures demonstrated to be good candidates to register US volumes acquired at different neurosurgical phases. Therefore, our solution can compensate brain shift in neurosurgical procedures involving intraopera- tive US data. Keywords Ultrasound · Image registration · Image segmentation · Convolutional neural network · Image-guided surgery Introduction In brain surgery for tumor removal, neurosurgeons usu- ally plan the intervention on pre-surgical images. The most widely used modality for neurosurgery planning is mag- netic resonance imaging [1, 2, 3]. To help physicians with the resection, neuronavigation systems can be used to link * Luca Canalini luca.canalini@mevis.fraunhofer.de preplanning data positions to patient’s head locations. By tracking fiducial markers placed on the patient’s skull and Fraunhofer MEVIS, Institute for Digital Medicine, Bremen, surgical tools, an optical system computes an image-to- Germany patient transformation. Consequently, by pin-pointing an University of Bremen, Bremen, Germany intracranial location, neurosurgeons can obtain the same Department of Neurosurgery, University Hospital position in the preplanning images. However, initialization Knappschaftskrankenhaus, Bochum, Germany inaccuracies of the neuronavigation system may invalidate Surgical Planning Laboratory, Brigham and Women’s the image-to-patient transformation, affecting the quality of Hospital, Harvard Medical School, Boston, USA Vol.:(0123456789) 1 3 1698 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 these images since the beginning of the resection [4]. 
Addi- degraded US data with preplanning imaging, it would be tionally, after resection starts, the preplanning data become useful to register first the pre-surgical MRI data with US even more unreliable due to the brain shift phenomenon: volumes acquired before resection, in which few anatomical Structures observed in preplanning images don’t remain in modifications occurred. Afterward, intraoperative US data the same conformation and position during tumor removal acquired at the first stage of the surgery (which therefore has [4]. As a consequence, the probability that pathological ele- a higher quality) may be registered to subsequent US acqui- ments are missed increases, reducing the survival rates of the sitions, and then the preplanning data could be registered to operated patients [5, 6]. To overcome this problem, intraop- those by utilizing a two-step registration [19]. In this con- erative images can be acquired [7]: They provide an updated text, neuronavigation systems could be used to co-register view of the ongoing procedure and hence compensate the intraoperative images acquired at different surgical phases. brain shift effects. A solution is represented by intraoperative However, these devices are prone to technical inaccuracies, magnetic resonance imaging (iMRI) [8]. It is demonstrated which affect the registration procedure from the beginning to be a good option [9] since its high image quality provides of the resection [4]. Moreover, the available neuronaviga- good contrast in anatomical tissue even during the resection tion systems usually offer only a rigid registration, which [10]. However, the high costs of iMRI and the architectural is not sufficient to address anatomical changes caused by adaptations required in the operating room seem to prevent brain shift. In our work, we propose a deformable method to this modality from being deployed more widely. A valid improve the registration of US volumes acquired at different alternative is given by intraoperative ultrasound (iUS) [11, stages in brain surgery. 12, 13]. Some authors reported that for certain grades of Few solutions have been proposed to improve the US–US glioma, iUS is equal or even superior to iMRI in providing registration during tumor resection in neurosurgery. In [20], good contrast between tumor and adjacent tissues [14, 15]. the authors studied the performance of the entropy-based Moreover, US represents a lower-cost solution compared to similarity measures joint entropy (JE), mutual information MRI. In our work, we focus on intraoperative 3D ultrasound (MI) and normalized mutual information (NMI) to register used in neurosurgical procedures. ultrasound volumes. They conducted their experiments with The more the resection advances, the more the initial two volumes of an US calibration phantom and two volumes acquisition of iUS becomes unreliable due to increased of real patients, acquired before the opening the dura mater. brain shift effects. Therefore, an update of the intraopera- Different rigid transformations were applied on each volume, tive imaging may be required. In [16], the authors acquired and the target registration error (TRE) was used as evalua- US volumetric data in subsequent phases of glioblastoma tion metric. 
The accuracy of the registration was examined resections in 19 patients and compared the ability to dis- by comparing the induced transformation to move the origi- tinguish tumor from adjacent tissues at three different steps nal images to the deformed ones, with the transformation of the procedure. According to their observations, the 3D defined by the entropy-based registration method. In both of images acquired after opening the dura, immediately before the datasets, NMI and MI outperformed JE. In another work starting the resection (we indicate this phase as before resec- [21], the same authors developed a non-rigid registration tion), are highly accurate for delineating tumor tissue. This based on free-form deformations using B-splines and using ability reduces during resection, i.e., after that most of the normalized mutual information as a similarity measure. Two resection has been performed but with residual tumor, and datasets of patients were used, where for each case a US after resection, i.e., when all the detected residual tumor volume was acquired before the opening of the dura, and has been removed. In fact, the resection procedure itself is one after (but prior to start of tumor resection). To assess responsible for creating small air bubbles, debris and blood. the quality of the registration, the correlation coefficient was Besides this, a blood clotting inducing material commonly computed within the overlap of both volumes and before and used during neurosurgical procedures causes several image after registration. Furthermore, these authors segmented the artefacts [14, 17]. Successive studies regarding other types volumetric extension of the tumor with an interactive multi- of tumor resection confirmed the degradation of image scale watershed method and measured the overlap before and quality in US during resection [18]. Therefore, it would be after the registration. One limitation of the aforementioned helpful to combine US images acquired during and after two studies is that no experiment is conducted on volumes resection with higher-quality data obtained before resec- acquired at different stages of the surgical procedure, but tion. Such a solution may also be beneficial to improve the only before the resection actually begins. In a real scenario, registration of intraoperative data with higher-quality pre- neurosurgeons use intraoperative data to find residual tumor planning MRI images. In fact, instead of combining directly after a first resection, which is conducted after the opening of the dura mater. One of the first solutions to register US data obtained Surgicel (Ethicon, Somerville, NJ). at subsequent surgical phases utilized an intensity-based 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1699 registration method to improve the visualization of volu- volumes: For this set, they were able to reduce the mTRE metric US images acquired before and after resection [22]. from 3.25 mm to 1.54 mm. Then, they applied the same The results are computed for 16 patients with different method on the BITE dataset and reduced the initial mean grades of brain supratentorial tumor and located in various error to 1.52 mm. Moreover, they tested their approach on lobes. Half of the cases were first operations, and half were the more recent RESECT dataset [14]. By using the same re-operations. 
Pre-resection volumes were acquired on the method on the pre- and post-resection volumes, the mTRE dura mater, or either directly on the cortex (or tumor) or was reduced from 3.55 to 1.49 mm. on a dura repair patch. The post-resection ultrasound was Our solution proposes a segmentation-based registration used to find any residual tumor. The authors used mutual approach to register US volumes acquired at different stages information as similarity measure for a rigid registration. In of neurosurgical procedures and compensate brain shift. A the further non-rigid transformation, the correlation coef- few approaches already applied segmentation methods on ficient objective function was used. To correctly evaluate US data to then register MRI and iUS [27, 28]. Our solu- their findings, for each of the 16 cases, a neuroradiologist tion represents the first segmentation-based method aimed at chose 10 corresponding anatomic features across US vol- US–US volumes registration. Our approach includes a deep- umes. The initial mean Euclidean distance of 3.3 mm was learning-based method, which automatically segments ana- reduced to 2.7 mm with a rigid registration, and to 1.7 mm tomical structures in subsequent US acquisitions. We chose with the non-rigid registration. The quality of the alignment to segment the hyperechogenic structures of the sulci and of the pre- and post-resection ultrasound image data was falx cerebri, which remain visible during the resection and also visually assessed by a neurosurgeon. Afterward, an thus represent good corresponding elements for further reg- important contribution to neurosurgical US–US registra- istration. In the following step, parametric and nonparamet- tion came by the release of the BITE dataset [23], in which ric methods use the generated masks to register US volumes pre- and post-resection US data are publicly available with acquired at different surgical stages. Our solution reduces relative landmarks to test registration methods. One of the the initial mTRE for US volumes acquired at subsequent first studies involving BITE dataset came from [17]. The acquisitions in both RESECT and BITE datasets. authors proposed an algorithm for non-rigid REgistration of ultraSOUND images (RESOUND) that models the deforma- tion with free-form cubic B-splines. Normalized cross-corre- Materials and methods lation was chosen as similarity metric, and for optimization, a stochastic descendent method was applied on its derivative. Datasets Furthermore, they proposed a method to discard non-corre- sponding regions between the pre- and post-resection ultra- We used two different public datasets to validate our seg- sound volumes. They were able to reduce the initial mTRE mentation-based registration method. Most of our experi- from 3.7 to 1.5 mm with a registration average time of 5 s. ments are conducted on the RESECT dataset [14], including The same method has been then used in [19]. In a composi- clinical cases of low-grade gliomas (Grade II) acquired on tional method to register preoperative MRI to post-resection adult patients between 2011 and 2016 at St. Olavs University US data, they applied the RESOUND method to register Hospital, Norway. There is no selection bias, and the dataset first pre- and post-resection US images. In another solution includes tumors at various locations within the brain. For [24], the authors aimed to improve the RESOUND algo- 17 patients, B-mode US-reconstructed volumes with good rithm. 
They proposed a symmetric deformation field and an coverage of the resection site have been acquired. No blood efficient second-order minimization for a better convergence clotting agent, which causes well-known artefacts, is used. of the method. Moreover, outlier detection to discard non- US acquisitions are performed at three different phases of corresponding regions between volumes is proposed. The the procedure (before resection, during and after resection), BITE mean distance is reduced to 1.5 mm by this method. and different US probes have been utilized. This dataset is Recently, another method to register pre- and post-resection designed to test intra-modality registration of US volumes US volumes was proposed by [25]. The authors presented a and two sets of landmarks are provided: one to validate the landmark-based registration method for US–US registration registration of volumes acquired before, during and after in neurosurgery. Based on the results of 3D SIFT algorithm resection, and another set that increases the number of land- [26], images features were found in image pairs and then marks between volumes obtained before and during resec- used to estimate dense mapping through the images. The tion. Regarding both sets, the reference landmarks are taken authors utilized several datasets to test the validity of this in the volumes acquired before resection and then are uti- method. A private dataset of nine patients with different lized as references to select the corresponding landmarks types of tumor was acquired, in which 10 anatomical land- in US volumes acquired during and after tumor removal. marks were selected per case, in both pre- and post-resection In the RESECT dataset, landmarks have been taken in the 1 3 1700 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 Fig. 1 Web-based annotation tool. While contouring the structures of US volumes. The annotation tool is accessible by common web of interest on the axial view (yellow line in the left frame), the seg- browsers, and it has been used to obtain and then review the manual mentation process can be followed in real time on the other two views annotation proximity of deep grooves and corners of sulci, convex in the US volumes acquired before resection of RESECT points of gyri and vanishing points of sulci. The number of dataset. Pathological tissue was excluded from the manual landmarks of the first and second sets can be, respectively, annotation since it is progressively removed during resection found in the second column of Tables 4 and 5. and correspondences could not be found in volumes acquired In addition to RESECT volumes, BITE dataset is also at subsequent stages. On the contrary, we focused on other utilized to test our registration framework [23]. It contains hyperechogenic (with an increased response—echo—during 14 US-reconstructed volumes of 14 different patients with ultrasound examination) elements such as the sulci and falx an average age of 52 years old. The study includes four low- cerebri. We consider these elements valid correspondences grade and ten high-grade gliomas, all supratentorial, with the because the majority of them has a high chance to remain majority in the frontal lobe (9/14). For 13 cases, acquisitions visible in different stages of the procedure. are obtained before and after tumor resection. Ten homolo- The manual segmentations were performed on a web- gous landmarks are obtained per volume, and initial mTRE based annotation tool. As shown in Fig. 
1, each RESECT are provided. The quality of BITE acquisitions is lower with volume can be simultaneously visualized on three different respect to RESECT dataset, mainly because blood clotting projections planes (axial, sagittal and coronal). The segmen- agent is used, creating large artefacts [14]. tation task is accomplished by contouring each structure (yellow contour in the first frame of Fig.  1) of interest on the Methods axial view. The drawn contours are then projected onto the other two views (blue overlay in the second frames of Fig. 1) We used MeVisLab for implementing (a) an annotation tool so that a better understanding of the segmentation process is for medical images, (b) a 3D segmentation method based on possible by observing the structures in different projections. a CNN and (c) registration framework for three-dimensional The annotation process can be accomplished very easily and data. smoothly, and 3D interpolated volumes can be then obtained by rasterizing the drawn contours. As shown in Fig. 1, the Manual segmentation of anatomical structures contours are well defined in the axial view but several ele- ments are not correctly included if considering the other The first step of our method consists of the 3D segmenta- two views. This is a common issue that we found in our tion of anatomical structures in different stages of US acqui- annotation, which would require much time and effort to be sitions. Both RESECT and BITE datasets are used to test corrected. However, we decided to have a maximum annota- registration algorithms and no ground truth is provided for tion time of 2 h per volume. The obtained masks correctly validating segmentation methods. Therefore, we decided to include the major structures of interest, but some elements conduct a manual annotation of the structures of interest such as minor sulci are missing. Despite the sparseness of our dataset, we expect our training set to be good enough to train our model to segment more refined structures of interest [29, 30]. https ://www.mevis lab.de/mevis lab/. 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1701 Table 1 Rating of the manual Volumes 1 2 3 4 6 7 12 14 15 16 17 18 19 21 24 25 27 annotations Ranking 2 2 3 2 2 3 2 3 2 3 2 2 2 2 2 2 2 After the contours of the main structures of interest were manually drawn, the neurosurgeons rated them according to criterion defined in the session “Manual segmentation of anatomical structures”. The criterion is defined taking into account the sparseness of the manual annotations. A point equal to 4 is given to the annotations where many of the main structures of interest are missing. On the contrary, if minor structures of interest (i.e., minor sulci) are missing but the major ones are correctly included, the best point of 1 is given The manual annotation was performed by the main 1 to 15, the validation one the volumes from 16 to 21 and author of this work (L.C.), who has two years of experi- the test one the volumes 24, 25, 27. ence in medical imaging and almost one year in US imag- After having found the best model to segment anatomi- ing for neurosurgery. Then, a neurosurgeon with many years cal structures in pre-resection US volumes, we applied it to of experience in the use of US modality for tumor resec- segment ultrasound volumes acquired at different surgical tion reviewed and rated the manual annotations, by taking phases. into account the sparseness of the dataset. 
According to the defined criteria, each volume could be rated with a point Registration between 4 and 1. More precisely, a point equal to 1 means that the main structures (falx cerebri and major sulci) are The masks automatically segmented by our trained model correctly segmented, and only minor changes should be done are used to register US volumes. The proposed method is to exclude parts of no interest (i.e., slightly over-segmented a variational image registration approach based on [31]: elements). A point equal to 2 indicates that the main struc- The registration process can be seen as an iterative opti- tures are correctly segmented, but major corrections should mization algorithm where the search of the correct regis- be done to exclude structures of no interest. A point equal to tration between two images corresponds to an optimization 3 indicates that main structures were missed in the manual process aimed at finding a global minimum of an objec - annotations, which, however, are still acceptable. A score of tive function. The minimization of the objective function is 4 means that a lot of major structures are missing; therefore, performed according to the discretize-then-optimize para- that annotation for the volume of interest cannot be accepted. digm [31]: The discretization of the various parameters is The neurosurgeon evaluated the annotations by looking at followed by their optimization. The objective function to be the projected structures on the sagittal and coronal views of minimized is composed of a distance measure, which quan- the drawn contours. Table 1 shows the results of the rating tifies the similarity between the deformed template image process for the volumes of interest. and the reference one, and a regularizer, which penalizes undesired transformations. In our approach, the binary 3D Segmentation masks generated by the previous step are used as input for the registration task, which can be seen as mono-modality A convolutional neural network aimed for a volumetric seg- intensity-based problem. Therefore, we chose the sum of mentation is trained on the manual annotations. We utilized squared differences (SSD) as a similarity measure, which is the original 3D U-net [29] architecture, in which few modi- usually suggested to register images with similar intensity fications were made with respect to the original implementa- values. Moreover, to limit the possible transformations in the tion: (a) The analysis and synthesis paths have two resolution deformable step, we utilized the elastic regularizer, which steps and (b) before each convolution layer of the upscaling is one of the most commonly used [31]. In our method, the path a dropout with a value of 0.4 is used in order to prevent choice of the optimal transformation parameters has been the network from overfitting. The training is conducted with conducted by using the quasi-Newton l-BGFS [32], due a patch size of (30,30,30), padding of (8,8,8) and a batch size to its speed and memory efficiency. The stopping criteria of 15 samples. The learning rate was set to 0.001, and the for the optimization process were empirically defined: the best model saved according to best Jaccard index computed minimal progress, the minimal gradient and the relative one, on 75 samples every 100 iterations. The architecture modi- the minimum step length were set equal to 0.001, and the fications, as well as the training parameters, were chosen by maximum number of iterations equal to 100. 
conducting several experiments and selecting those provid- Our registration method aims to provide a deformable ing the best results. As training, validation and test sets, we solution to compensate for anatomical changes happening split the seventeen volumes acquired before resection, which during tumor resection. As commonly suggested for meth- we annotated in the manual annotation. The split has been ods involving non-rigid registration tasks [31], the proposed done as follows: The training set includes the volumes from solution includes an initial parametric registration used then 1 3 1702 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 to initialize the nonparametric one. First of all, the para- assessment of the generated masks is performed. Moreover, metric approach utilizes the information provided by the the over-segmented elements are expected to have a mean optical tracking systems as an initial guess. Based on this intensity value as close as possible to the one of the manu- pre-registration, a two-step approach is conducted, includ- ally annotated structures. To verify this, we compared the ing a translation followed then by a rigid transformation. In mean intensity values of the manual annotations and the this stage, to speed the optimization process, the images are automatically generated masks. registered at a resolution one-level coarser compared to the Regarding US volumes acquired during and after resec- original one. Then, the information computed during the tion, no manual annotation was obtained, so no DICE index parametric registration is utilized as the initial condition for could be computed. Therefore, to be sure that structures of the nonparametric step. In this stage, to reduce the chance to interest are correctly segmented, we show that the masks reach a local minimum, a multilevel technique is introduced: of the three stages of US data segmented by our trained the images are registered at three different scales, from a model (a) are strongly correlated in terms of volume exten- third-level to one-level coarser. As output of the registration sion by computing the Pearson correlation coefficient and step, the deformed template image is provided. (b) include structures with a mean intensity value similar to the manual annotations. Secondly, we conduct a visual inspection of the results, which is helpful to verify whether Evaluation or not corresponding anatomical structures are segmented in these stages. Segmentation Given the fact that our annotations are not publicly avail- able, only a qualitative comparison is made with respect to In Table 1, we can see that no annotation received the best other methods which also proposed a US segmentation solu- score of 1, but all of them have some imperfections. How- tion in the context of neurosurgery [27, 28, 30, 33]. ever, none of the manually annotated masks was scored with 4. Consequently, we can consider our annotations as a sparse Registration ground truth in which only the main hyperechogenic struc- tures of interest are included. Regarding this, CNNs trained The transformations and deformation fields computed in on a sparse dataset already proved to be able to segment the parametric and nonparametric step are then applied to more refined and numerous structures respect to the sparse the landmarks contained in the datasets. The TRE values training set [29, 30]. 
Therefore, we expect our annotations before and after registration are provided per each patient, to be good enough to train the CNN model in order to gen- with the measure of the closest and farthest couple of points, erate meaningful structures for guiding the further registra- and mean and standard deviation values are also given per tion step. In fact, the registration step will give an important each set of landmarks. A visual inspection of the registration feedback about the quality of the generated masks: For our results is also shown, in which the initial registration based purposes, the segmented structures are meaningful if they on the information of the optical tracker can be compared correctly guide the registration method. In addition to this, with the results obtained by our method. Moreover, a com- an analysis of the segmentation results will be provided, as parison with previous solutions is provided. Regarding this, described in the following section. some methods have been proposed to register BITE volumes Regarding US volumes acquired before resection, no [17, 19, 24, 25], but none of them except one [25] provided a ground truth is available for the structures not contained generalized solution able to register volumes of both datasets in the manual annotations. Consequently, the DICE coef- (BITE and RESECT). On the contrary, our method provides ficients are computed by including only the automatically an approach valid for both two datasets. For the RESECT segmented elements with correspondences to manual dataset, the authors of [25] proposed a solution only for vol- annotations and by discarding elements having no counter- umes acquired before and after resection. Our approach is part in manually annotated data. This measure is useful to the first one to be applied to the volumes acquired before and verify whether the main structures of interest are correctly during resection of RESECT dataset; therefore, no compari- segmented by the trained model. As further information, son is available for this specific set. we also provide the DICE coefficients computed without The capture range of our method is also computed. We excluding any structure. These values would be useful for define the capture range as the largest initial misalignment a deeper analysis of our algorithm but, as aforementioned, within which our algorithm still converges to a solution for they may not be so indicative for our purposes due to the 80% of the cases. To evaluate it, we started the registra- sparseness of our dataset. Furthermore, the automatically tion from multiple starting misalignments and we checked generated masks should also include more refined elements whether or not the method converged to a solution. Then, we than the original ground truth. To verify this, a first visual 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1703 Fig. 2 Segmentation and landmarks. Original intensity volumes points of gyri, and vanishing points of sulci. We chose to segment where the generated masks (in green) and RESECT landmarks (pur- sulci falx cerebri, and therefore, we can see how the landmarks are ple squares) are overlaid. 
In RESECT dataset, landmarks have been closely located to the segmented structures taken in proximity of deep grooves and corners of sulci, convex Table 2 DICE coefficients Volumes 1 2 3 4 6 7 12 14 15 16 17 18 19 21 24 25 27 for volumes acquired before surgery (a)  Dice % 68 62 57 76 71 56 78 76 78 61 62 70 70 63 74 68 69 (b)  Dice % 62 46 28 59 50 46 67 67 63 53 45 35 61 42 58 44 51 (a) Refers to the DICE coefficient computed by considering only the structures with a counterpart in the manual annotations. The method shows evidence of being able to properly segment the anatomical struc- tures considered in the manual annotations. (b) Refers to the DICE values for the whole set of the automati- cally segmented structures computed the value of the capture range by using as distance Regarding US data acquired before resection, Table 2a measure the mTRE computed on the available landmarks. provides the DICE coefficients computed between the manually segmented structures and the corresponding masks generated by our trained model. In Table  2b, the DICE coefficients for the whole set of generated masks Results (without excluding the elements not included in the man- ual annotation) are given. Furthermore, the first and third Segmentation bars in Fig. 3 show that the structures automatically seg- mented in pre-resection volumes have a mean intensity Figure 2 shows an example of a segmented structure in a value very similar to those chosen in the manual annota- volume acquired before resection. It can be seen that the tions. A similar consideration can be made for the ele- generated masks cover the locations where landmarks were ments considered as background (second and fourth bars acquired. In fact, we decided to segment sulci and falx cer- in Fig. 3). Qualitative results also confirm this evidence. ebri, which are the anatomical elements taken into account Figure  4 shows four examples of automatically gener- to acquire the majority of the landmarks in the RESECT ated masks in comparison with the corresponding manual dataset. annotations. In most of the cases, our method correctly 1 3 1704 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 Fig. 3 Intensity values of the masked ultrasound volumes. This graph masked volumes have in all the cases a similar mean value, higher presents the mean intensity values of the masked ultrasound vol- than the excluded areas. This is meaningful since our elements of umes (first, third, fifth, seventh bars) acquired at the three stages, and interest are the bright (hyperechogenic) structures in the US. On the the mean intensity values of the area excluded by the segmentation contrary, the even numbered bars have a similar mean intensity value, (second, fourth, sixth, eighth bars). For the volumes acquired before lower than the chosen structures. We are not interested in hypoecho- resection, volumes masked with manual annotation and elements genic structures, with look darker in the US acquisitions segmented by the neural network are compared (first four bars). The segments refined elements which were not included in the Registration manual annotation due to timing restriction see “Manual segmentation of anatomical structures”. Violet squares The mean time required by the registration tasks is given in highlight some examples of these structures. Though, in Table 3, together with the mean time required by each vol- several cases, the neural network wrongly segments patho- ume to be segmented by the trained model. 
All experiments logical tissue which we excluded from the manual annota- are made on a computer equipped with an Intel Core i7 and tions (see blue squares in Fig. 4d). a GeForce GTX 1080 (8 GB). For the volumes acquired during and after resec- By relying on the automatically generated masks in the tion, a strong correlation exists between the extension segmentation step, we registered the US volumes acquired at of their masks segmented by the neural network and of different surgical stages. First, the volumes acquired before the volumes before resection. In fact, the Pearson coef- and during resection are registered. Then, our algorithm is ficient between the masks of US data acquired before applied to volumes acquired during and after resection. The and during resection has a value of 0.90, and a value of computed deformation fields are applied to the landmarks 0.91 for those of pre- and post-removal. As for the US provided in the RESECT dataset, and the results after reg- data acquired before resection, Fig. 3 shows that the ana- istration are shown in Table 4 (for volumes acquired before tomical structures segmented at the different stages have and during resection) and in Table  5 (volumes acquired a mean intensity similar to the manual annotation (last before and after resection). Regarding the results in Table 5, four bars). Therefore, we can state that our segmentation the registration of the landmarks is performed by concat- method, applied to volumes acquired at different stages, enating two different transformations: the one computed segment structures related to each other in terms of vol- before–during US volumes together with the one for vol- umes extension and mean intensity values. Then, visual umes acquired during and after resection (see Fig. 7 for a results in Figs. 5 and 6 confirm the evidence of the quan- more detailed description). titative results, showing that our model trained on a stage As it can be seen in both tables, both parametric and of US correctly segments analogous elements in volumes nonparametric methods reduce the initial mean registra- acquired at different stages. However, qualitative results tion errors provided in the RESECT dataset. In Table  4, in Fig. 6 also show that our method often detects resection it can be noticed that the proposed methodology improves cavities, which have no corresponding structures in the the initial mTRE more than 2 mm, by decreasing the mean pre-resection volumes. errors for each patient. For the second registration tasks, our 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1705 Fig. 4 Segmentation of ultrasound volumes acquired before resec- displayed. In each example, a pointer (intersection of yellow cross- tion. In each example, the axial, sagittal and coronal views are shown ing lines) highlights the same volume position in the three views. Our in the first, second and third row, respectively. In the first column, method correctly segments the main structures. Moreover, structures the original ultrasound volume is exhibited, in the second column, wrongly not included in the manual annotations are correctly detected the manual annotation performed on the axial view and projected in by the trained neural network (purple squares). 
However, in image the other two views is shown, in the third column, the segmentation d, pathological tissue correctly excluded in the original masks is result obtained by the 3D U-net for the same volume of interest is wrongly segmented by our method (blue squares in axial view) method reduces the mean registration error by nearly 1.5 mm in Table 6, with a comparison to previously proposed solu- (Table 5). Visual examples provided in Figs. 8 and 9 also tions (last section of Table 6). As it can be seen, also for this confirm the numerical results. The images show the fixed dataset the initial mTRE is reduced by both parametric and volumes with the related segmentation (in red), together nonparametric registration approaches. with the mask of the moving volumes (in green). By com- The value of the capture range of our method is equal to paring the overlay before and after registration, we highlight 6.25 mm. the registration improvements by coloring in yellow the cor- rect overlay of the two masks. Regarding the results on the RESECT dataset, only those obtained for volumes acquired Discussion before and after resection can be compared with another solution [25] (see Table 5). The manual annotations, even if sparse, are good enough to Our segmentation-based registration method is then train the CNN model to segment the anatomical structures applied on BITE dataset, directly registering volumes of interest, as shown by the DICE coefficients in Table  2. acquired before and after resection. The results are available Moreover, Fig.  4 shows that automatically generated 1 3 1706 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 Fig. 5 Segmentation of ultrasound volumes acquired during resec- containing one intensity volume together with the generated mask. It tion. After having trained the neural network on the stage before appears clear how the main hyperechogenic structures are correctly resection, we applied it to ultrasound volumes acquired during resec- included in the segmentation tion. This figure shows four examples of segmentation results, each segmentations are more precise than the manual annota- consider to separately segment pathological tissue and then tions, with a better contours refinement and larger number exclude it during registration. A similar consideration can be of identified structures. However, some pathological tissues made for the resection cavities in volumes acquired during are wrongly segmented by our method (see Fig. 4d). This and after resection, which appear as bright as sulci and are may be due to the fact that in US data the glioma of grade wrongly segmented by the proposed method (Fig. 6). Fur- II appears as hyperechogenic structures, with an intensity thermore, from a qualitative comparison with other segmen- similar to the elements of interest. In future work, we could tation methods involving US data, we can highlight some 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1707 Fig. 6 Segmentation of ultrasound volumes acquired after resection. how the main hyperechogenic structures are correctly included in After having trained the neural network on the stage before resection, the segmentation. In the last two examples (second row), we see how we applied it to ultrasound volumes acquired after resection. 
This fig- resection cavities (appearing hyperechogenic on US) are segmented ure shows four examples of segmentation results, each containing one by the 3D U-net, even they have no counterparts in the pre-resection intensity volume together with the generated mask. It appears clear stage advances of our approach. First of all, with respect to [27, The second important contribution of this work is the reg- 33], a higher number of anatomical structures are included istration of US volumes acquired at die ff rent surgical stages. in our manual annotations. Therefore, the potential range First of all, the segmentation method gives evidence of being of clinical scenarios in which our method could be applied able to generate meaningful masks to guide the registra- might be wider. Secondly, a trained neurosurgeon has clini- tion task. In fact, the proposed registration method is able to cally validated the manual annotations (Table 1). This is not reduce the mTREs of three sets of volumes from two differ - the case for other segmentation-based methods [30, 28], in ent datasets (Table 4, 5, 6) by using the corresponding ana- which no precise rating of the manual masks is provided. tomical structures previously segmented. From numerical 1 3 1708 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 Table 3 Mean time in seconds per task resection of RESECT dataset (Table 5). The mTRE obtained by the aforementioned approach is better than our method, Mean time (in s) per each task which, however, is the first one to provide results for the vol- Segmentation (infer- Total registration Total registration time umes obtained before and during resection of the RESECT ence) time (US before–dur- (US during–after) dataset. In this set of volumes, our registration performs ing) quite well, reducing the initial mTRE to 1.36 mm. 1.28 28.55 29.40 Regarding the BITE dataset, our algorithm improves the initial registration (see Table 6), proving not to be over-tuned With segmentation, we indicate the inference process in which the on RESECT dataset. Note that in contrast to our approach, trained model generates the mask of a volume given in input. The other two values are related to the registration tasks, including the all other methods compared in Table 6 have only been tested time of both the parametric and nonparametric approaches on the BITE dataset. Thus, the results may be over-tuned on this limited set of volumes and the approaches could lack and visual results, we can notice that even if minor corre- generalization. On the contrary, our solution is the second sponding segmented elements are missing in volume pairs, one after [25] to propose a more generalized method, which our method is able to reduce the initial registration errors. has been tested on registering the volumes of both RESECT However, in the case of volumes acquired after removal, and BITE datasets. Therefore, our method is validated on a resection cavities may be segmented by our method due to larger number of US acquisitions, providing a more gener- their intensity similar to the sulci. Consequently, the mTRE alized solution. Nevertheless, there might be some reasons in Table  5 is reduced less with respect to Table  4, since why a few other approaches have smaller average mTREs these structures have no or few corresponding elements in for the BITE dataset (last section of Table 6). First of all, a volumes acquired in previous steps. 
This is a limiting factor numerical impacting factor for our results comes from case of our registration method, which is completely based on 12, where the TRE increases from 10.54 up to 11.08 mm, the masks generated by our trained model. In future work, affecting the overall result. The capture range of our method we could try to segment such structures and exclude them is too low to register this volumes pair, which has a very during the registration. Only another work [25] focused on large initial misalignment. In future work, we could improve the registration of US volumes acquired before and after the results by performing an initial registration which could Table 4 Registration errors on Mean distance (range) in mm before versus during (RESECT dataset) RESECT dataset Patient Number of Mean initial distance Mean distance after para- Mean distance after landmarks metric registration nonparametric registra- tion 1 34 2.32 (1.49–3.29) 0.93 (0.31–1.73) 0.89 (0.22–1.57) 2 16 3.10 (1.79–5.19) 1.54 (0.37–3.58) 1.69 (0.71–4.19) 3 17 1.93 (0.67–3.02) 1.20 (0.36–2.47) 1.14 (0.24–2.45) 4 19 4.00 (3.03–5.22) 0.89 (0.31–1.86) 0.83 (0.24–1.65) 6 21 5.19 (2.60–7.18) 1.95 (0.63–3.71) 1.80 (0.58–3.61) 7 22 4.69 (0.94–8.16) 2.50 (1.24–5.78) 2.39 (1.15–5.86) 12 24 3.39 (1.74–4.81) 1.57 (0.43–3.20) 1.58 (0.44–3.36) 14 22 0.71 (0.42–1.59) 0.52 (0.09–1.15) 0.52 (0.12–0.93) 15 21 2.04 (0.85–2.84) 0.80 (0.28–1.44) 0.73 (0.18–1.31) 16 19 3.19 (1.22–4.53) 1.52 (0.95–2.21) 1.40 (0.75–2.43) 17 17 6.32 (4.65–8.07) 2.93 (1.67–4.46) 2.51 (1.14–4.03) 18 23 5.06 (1.55–7.44) 1.75 (0.70–3.04) 1.29 (0.46–2.81) 19 21 2.06 (0.42–3.40) 1.93 (0.20–3.19) 1.33 (0.48–2.67) 21 18 5.10 (3.37–5.94) 1.27 (0.19–3.53) 1.22 (0.19–3.46) 24 21 1.76 (1.16–2.65) 0.89 (0.18–2.17) 0.81 (0.08–2.07) 25 20 3.60 (2.19–5.02) 3.56 (2.09–5.14) 2.27 (1.04–3.92) 27 16 4.93 (3.61–7.01) 0.77 (0.24–1.35) 0.71 (0.19–1.26) Mean value Mean value Mean value 3.49 ± 1.55 1.56 ± 0.82 1.36 ± 0.61 Mean registration errors between ultrasound volumes acquired before and during resection. Original dis- tances are compared to the results obtained with our segmentation-based registration 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1709 Table 5 Registration errors on RESECT dataset Mean distance (range) in mm before vs. 
after (RESECT Dataset) Patient Number of landmarks Mean initial distance Mean distance after parametric Mean distance after registration nonparametric registra- tion 1 13 5.80 (3.62–7.22) 2.69 (0.93–4.08) 2.67 (0.75–4.18) 2 10 3.65 (1.71–6.72) 2.32 (0.90–4.25) 2.18 (0.55–3.93) 3 11 2.91 (1.53–4.30) 1.63 (0.82–2.48) 1.53 (0.82–2.25) 4 12 2.22 (1.25–2.94) 1.05 (0.46–1.95) 1.06 (0.30–2.05) 6 11 2.12 (0.75–3.82) 1.91 (0.47–3.06) 1.88 (0.24–2.93) 7 18 3.62 (1.19–5.93) 2.29 (0.92–4.13) 2.08 (0.70–3.93) 12 11 3.97 (2.58–6.35) 1.60 (0.54–4.73) 1.44 (0.61–4.51) 14 17 0.63 (0.17–1.76) 0.63 (0.11–1.84) 0.57 (0.09–1.52) 15 15 1.63 (0.62–2.69) 0.85 (0.12–2.13) 0.88 (0.23–2.38) 16 17 3.13 (0.82–5.41) 2.40 (0.61–4.70) 2.14 (0.79–4.35) 17 11 5.71 (4.25–8.03) 3.82 (2.36–6.68) 3.40 (1.91–6.28) 18 13 5.29 (2.94–9.26) 2.19 (1.14–4.32) 1.51 (0.65–2.92) 19 13 2.05 (0.43–3.24) 4.00 (1.42–14.27) 3.97 (0.91–15.29) 21 9 3.35 (2.34–5.64) 1.23 (0.29–3.20) 1.18 (0.28–3.16) 24 14 2.61 (1.96–3.41) 0.86 (0.18–2.26) 0.79 (0.13–2.02) 25 12 7.61 (6.40–10.25) 5.75 (4.39–8.34) 3.88 (2.74–6.07) 27 12 3.98 (3.09–4.82) 3.77 (2.22–5.10) 3.76 (2.24–5.30) Mean value Mean value Mean value 3.54 ± 1.75 2.29 ± 1.37 2.05 ± 1.12 Other methods mTRE after registration [25] 1.49 mm Mean registration errors between ultrasound volumes acquired before and after resection. Original distances are compared to the results obtained with our segmentation-based registration. Moreover, a comparison is made with a previous method proposed to solve this task Fig. 7 Registration of different US volume pairs. Instead of register - from before to after resection volumes is obtained by concatenating ing directly pre-resection US data with those after resection (continu- two different registrations results (US before resection to US during ous line), a two-step method (dotted arrows) is proposed by including resection + US during resection to US after resection) the US volumes acquired during resection. The final transformation increase the capture range of our method. Moreover, the The total time required by each task of our method is limited improvement obtained by our method might be due visible in Table 3: The segmentation step requires 1.28 s to the lower quality of the BITE dataset with respect to the and 28.55 s (before/during) and 29.40 s (during/after) that RESECT volumes, which is used for training the segmenta- are needed to register the generated 3D masks. In addi- tion approach. Since our registration method is based on the tion to this, we should also take into account the time to generated masks, it is almost impossible for the registration reconstruct a 3D US volumes from 2D images, which is of method to converge to the right solution if the segmented a few seconds [14]. Considering the increase in the brain masks are not accurate enough. shift over the time and the average duration of a neuro- surgical procedure [34], our algorithm is fast enough to 1 3 1710 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 Fig. 8 Registration results for before and during resection volumes. mentation overlay according to the original information. The second The images show four examples of registration by combining fixed column displays the overlay of the segmented structures after regis- volumes (during resection) with its segmented structures (in red) and tration. 
By highlighting in yellow the correct overlap of segmented the segmented elements of moving volumes acquired before resec- structures, we can see how the structures are more aligned after the tion (in green). In the first column of each example, we show the seg- performed registration register US volumes and therefore provides a meaningful Conclusion solution for brain shift. Nevertheless, in future work we could optimize our algorithm in order to speed up the To the best of our knowledge, our solution is the first registration step. one to propose a segmentation-based registration method 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1711 Fig. 9 Registration results for before and after resection volumes. mentation overlay according to the original information. The second These images show four examples of registration by combining fixed column displays the overlay of the segmented structures after regis- volumes (after resection) with its segmented structures (in red) and tration. By highlighting in yellow the correct overlap of segmented the segmented elements of moving volumes acquired before resection structures, we can see how the structures are more aligned after the (in green). In the first columns of each example, we show the seg- performed registration which registers US volumes acquired at different surgical anatomical structures prove to be meaningful elements stages. Our approach provides some important contribu- which can guide the registration of US volumes acquired tions. Regarding the segmentation step, a model based on in the neurosurgical context. In fact, for two different data - a 3D U-Net has been trained on a large number of anatomi- sets of US volumes acquired at different surgical stages, cal structures, whose manual annotations have been vali- the initial mTREs are correctly reduced, demonstrating dated by an experienced neurosurgeon. Even if the training that our solution is not over-tuned for a specific dataset. is performed on a sparse set of annotations, the proposed Moreover, our work is the first one to be applied also on solution is able to automatically segment hyperecho- the US volumes of RESECT dataset acquired during resec- genic elements in US volumes. Moreover, the segmented tion, for which no previous work has been published. 1 3 1712 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 Table 6 Registration errors on BITE dataset Mean distance (range) in mm before vs. 
Additionally, after resection starts, the preplanning data become even more unreliable due to the brain shift phenomenon: structures observed in preplanning images do not remain in the same conformation and position during tumor removal [4]. As a consequence, the probability that pathological elements are missed increases, reducing the survival rates of the operated patients [5, 6]. To overcome this problem, intraoperative images can be acquired [7]: they provide an updated view of the ongoing procedure and hence compensate for the effects of brain shift. One solution is intraoperative magnetic resonance imaging (iMRI) [8]. It has been demonstrated to be a good option [9], since its high image quality provides good contrast in anatomical tissue even during the resection [10]. However, the high costs of iMRI and the architectural adaptations required in the operating room seem to prevent this modality from being deployed more widely. A valid alternative is intraoperative ultrasound (iUS) [11, 12, 13]. Some authors have reported that, for certain grades of glioma, iUS is equal or even superior to iMRI in providing good contrast between tumor and adjacent tissue [14, 15]. Moreover, US is a lower-cost solution compared to MRI. In our work, we focus on intraoperative 3D ultrasound used in neurosurgical procedures.

The more the resection advances, the more the initial iUS acquisition becomes unreliable due to increasing brain shift. Therefore, an update of the intraoperative imaging may be required. In [16], the authors acquired volumetric US data in subsequent phases of glioblastoma resection in 19 patients and compared the ability to distinguish tumor from adjacent tissue at three different steps of the procedure. According to their observations, the 3D images acquired after opening the dura, immediately before starting the resection (we refer to this phase as before resection), are highly accurate for delineating tumor tissue. This ability decreases during resection, i.e., after most of the resection has been performed but residual tumor remains, and after resection, i.e., when all the detected residual tumor has been removed. In fact, the resection procedure itself creates small air bubbles, debris and blood. Besides this, a blood-clotting-inducing material commonly used during neurosurgical procedures (Surgicel, Ethicon, Somerville, NJ) causes several image artefacts [14, 17]. Later studies on other types of tumor resection confirmed the degradation of US image quality during resection [18]. Therefore, it would be helpful to combine US images acquired during and after resection with the higher-quality data obtained before resection. Such a solution may also be beneficial for registering intraoperative data with higher-quality preplanning MRI images. In fact, instead of directly combining degraded US data with preplanning imaging, it would be useful to first register the pre-surgical MRI data with US volumes acquired before resection, in which few anatomical modifications have occurred. Afterward, the intraoperative US data acquired at the first stage of the surgery (which therefore have a higher quality) may be registered to subsequent US acquisitions, and the preplanning data could then be registered to those via a two-step registration [19]. In this context, neuronavigation systems could be used to co-register intraoperative images acquired at different surgical phases. However, these devices are prone to technical inaccuracies, which affect the registration procedure from the beginning of the resection [4]. Moreover, available neuronavigation systems usually offer only a rigid registration, which is not sufficient to address the anatomical changes caused by brain shift. In our work, we propose a deformable method to improve the registration of US volumes acquired at different stages of brain surgery.

Few solutions have been proposed to improve US–US registration during tumor resection in neurosurgery. In [20], the authors studied the performance of the entropy-based similarity measures joint entropy (JE), mutual information (MI) and normalized mutual information (NMI) for registering ultrasound volumes. They conducted their experiments on two volumes of a US calibration phantom and two volumes of real patients, acquired before the opening of the dura mater. Different rigid transformations were applied to each volume, and the target registration error (TRE) was used as evaluation metric. The accuracy of the registration was examined by comparing the transformation induced to move the original images to the deformed ones with the transformation found by the entropy-based registration method. In both datasets, NMI and MI outperformed JE. In another work [21], the same authors developed a non-rigid registration based on free-form deformations using B-splines, with normalized mutual information as similarity measure. Two patient datasets were used, in which for each case one US volume was acquired before the opening of the dura and one after (but prior to the start of tumor resection). To assess the quality of the registration, the correlation coefficient was computed within the overlap of both volumes before and after registration. Furthermore, these authors segmented the volumetric extension of the tumor with an interactive multiscale watershed method and measured the overlap before and after registration. One limitation of the two aforementioned studies is that no experiment was conducted on volumes acquired at different stages of the surgical procedure, but only before the resection actually begins. In a real scenario, neurosurgeons use intraoperative data to find residual tumor after a first resection, which is conducted after the opening of the dura mater.
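For readers unfamiliar with these measures, all three can be computed from the joint gray-value histogram of the two volumes. The sketch below is a minimal illustration of the standard definitions, not the implementation used in [20]; the bin count and the NumPy-based interface are assumptions.

```python
import numpy as np

def entropy_measures(fixed, moving, bins=64):
    """Joint entropy (JE), mutual information (MI) and normalized
    mutual information (NMI) from the joint histogram of two volumes.

    fixed, moving: intensity volumes of identical shape (already
    resampled onto a common grid by the current transformation).
    """
    hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_joint = hist / hist.sum()                # joint probability
    p_f = p_joint.sum(axis=1)                  # marginal of fixed image
    p_m = p_joint.sum(axis=0)                  # marginal of moving image

    nz = p_joint > 0                           # avoid log(0)
    h_joint = -np.sum(p_joint[nz] * np.log2(p_joint[nz]))
    h_f = -np.sum(p_f[p_f > 0] * np.log2(p_f[p_f > 0]))
    h_m = -np.sum(p_m[p_m > 0] * np.log2(p_m[p_m > 0]))

    je = h_joint                               # lower is better
    mi = h_f + h_m - h_joint                   # higher is better
    nmi = (h_f + h_m) / h_joint                # higher is better
    return je, mi, nmi
```

In [20], NMI and MI computed in this spirit outperformed JE on both the phantom and the patient volumes.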
One of the first solutions to register US data acquired at subsequent surgical phases utilized an intensity-based registration method to improve the visualization of volumetric US images acquired before and after resection [22]. The results were computed for 16 patients with supratentorial brain tumors of different grades located in various lobes. Half of the cases were first operations, and half were re-operations. Pre-resection volumes were acquired on the dura mater, directly on the cortex (or tumor), or on a dura repair patch. The post-resection ultrasound was used to find any residual tumor. The authors used mutual information as similarity measure for a rigid registration; in the subsequent non-rigid transformation, the correlation coefficient was used as objective function. To evaluate their findings, for each of the 16 cases a neuroradiologist chose 10 corresponding anatomic features across the US volumes. The initial mean Euclidean distance of 3.3 mm was reduced to 2.7 mm with the rigid registration, and to 1.7 mm with the non-rigid registration. The quality of the alignment of the pre- and post-resection ultrasound data was also visually assessed by a neurosurgeon. Afterward, an important contribution to neurosurgical US–US registration came with the release of the BITE dataset [23], in which pre- and post-resection US data are publicly available together with landmarks to test registration methods. One of the first studies involving the BITE dataset came from [17]. The authors proposed an algorithm for non-rigid REgistration of ultraSOUND images (RESOUND) that models the deformation with free-form cubic B-splines. Normalized cross-correlation was chosen as similarity metric, and for optimization a stochastic gradient descent method was applied to its derivative. Furthermore, they proposed a method to discard non-corresponding regions between the pre- and post-resection ultrasound volumes. They were able to reduce the initial mTRE from 3.7 to 1.5 mm with an average registration time of 5 s. The same method was then used in [19]: in a compositional method to register preoperative MRI to post-resection US data, RESOUND was applied to first register pre- and post-resection US images.
In another solution [24], the authors aimed to improve the RESOUND algorithm. They proposed a symmetric deformation field and an efficient second-order minimization for a better convergence of the method. Moreover, outlier detection to discard non-corresponding regions between volumes was proposed. The BITE mean distance is reduced to 1.5 mm by this method. Recently, another method to register pre- and post-resection US volumes was proposed in [25]. The authors presented a landmark-based method for US–US registration in neurosurgery. Based on the results of the 3D SIFT algorithm [26], image features were found in image pairs and then used to estimate a dense mapping between the images. The authors utilized several datasets to test the validity of this method. A private dataset of nine patients with different types of tumor was acquired, in which 10 anatomical landmarks were selected per case in both pre- and post-resection volumes: for this set, they were able to reduce the mTRE from 3.25 to 1.54 mm. They then applied the same method to the BITE dataset and reduced the initial mean error to 1.52 mm. Moreover, they tested their approach on the more recent RESECT dataset [14]: using the same method on the pre- and post-resection volumes, the mTRE was reduced from 3.55 to 1.49 mm.

Our solution proposes a segmentation-based registration approach to register US volumes acquired at different stages of neurosurgical procedures and thereby compensate brain shift. A few approaches have already applied segmentation methods to US data in order to then register MRI and iUS [27, 28]; our solution represents the first segmentation-based method aimed at US–US volume registration. Our approach includes a deep-learning-based method which automatically segments anatomical structures in subsequent US acquisitions. We chose to segment the hyperechogenic structures of the sulci and falx cerebri, which remain visible during the resection and thus represent good corresponding elements for the subsequent registration. In the following step, parametric and nonparametric methods use the generated masks to register US volumes acquired at different surgical stages. Our solution reduces the initial mTRE for US volumes acquired at subsequent stages in both the RESECT and BITE datasets.
Materials and methods

Datasets

We used two public datasets to validate our segmentation-based registration method. Most of our experiments are conducted on the RESECT dataset [14], which includes clinical cases of low-grade gliomas (Grade II) acquired in adult patients between 2011 and 2016 at St. Olavs University Hospital, Norway. There is no selection bias, and the dataset includes tumors at various locations within the brain. For 17 patients, B-mode US-reconstructed volumes with good coverage of the resection site were acquired. No blood clotting agent, which causes well-known artefacts, was used. US acquisitions were performed at three different phases of the procedure (before, during and after resection), and different US probes were utilized. This dataset is designed to test intra-modality registration of US volumes, and two sets of landmarks are provided: one to validate the registration of volumes acquired before, during and after resection, and another set that increases the number of landmarks between volumes obtained before and during resection. In both sets, the reference landmarks are taken in the volumes acquired before resection and then utilized as references to select the corresponding landmarks in the US volumes acquired during and after tumor removal. In the RESECT dataset, landmarks have been placed in the proximity of deep grooves and corners of sulci, convex points of gyri and vanishing points of sulci. The number of landmarks of the first and second sets can be found in the second column of Tables 4 and 5, respectively.

In addition to the RESECT volumes, the BITE dataset is also utilized to test our registration framework [23]. It contains 14 US-reconstructed volumes of 14 different patients with an average age of 52 years. The study includes four low-grade and ten high-grade gliomas, all supratentorial, with the majority in the frontal lobe (9/14). For 13 cases, acquisitions were obtained before and after tumor resection. Ten homologous landmarks are provided per volume, and initial mTREs are given. The quality of the BITE acquisitions is lower than that of the RESECT dataset, mainly because a blood clotting agent was used, creating large artefacts [14].
Methods

We used MeVisLab (https://www.mevislab.de/mevislab/) to implement (a) an annotation tool for medical images, (b) a 3D segmentation method based on a CNN and (c) a registration framework for three-dimensional data.

Manual segmentation of anatomical structures

The first step of our method consists of the 3D segmentation of anatomical structures in different stages of US acquisitions. Both the RESECT and BITE datasets are intended to test registration algorithms, and no ground truth is provided for validating segmentation methods. Therefore, we decided to conduct a manual annotation of the structures of interest in the US volumes acquired before resection in the RESECT dataset. Pathological tissue was excluded from the manual annotation, since it is progressively removed during resection and correspondences could not be found in volumes acquired at subsequent stages. Instead, we focused on other hyperechogenic (showing an increased response, or echo, during ultrasound examination) elements such as the sulci and falx cerebri. We consider these elements valid correspondences because the majority of them has a high chance of remaining visible in the different stages of the procedure.

The manual segmentations were performed on a web-based annotation tool. As shown in Fig. 1, each RESECT volume can be visualized simultaneously on three different projection planes (axial, sagittal and coronal). The segmentation task is accomplished by contouring each structure of interest on the axial view (yellow contour in the first frame of Fig. 1). The drawn contours are then projected onto the other two views (blue overlay in the second frames of Fig. 1), so that a better understanding of the segmentation process is possible by observing the structures in different projections. The annotation process can be accomplished easily and smoothly, and 3D interpolated volumes can then be obtained by rasterizing the drawn contours. As shown in Fig. 1, the contours are well defined in the axial view, but several elements are not correctly included when considering the other two views. This is a common issue that we found in our annotations, and correcting it would require much time and effort. We therefore decided on a maximum annotation time of 2 h per volume. The obtained masks correctly include the major structures of interest, but some elements such as minor sulci are missing. Despite the sparseness of our annotations, we expect our training set to be good enough to train our model to segment more refined structures of interest [29, 30].

Fig. 1 Web-based annotation tool. While contouring the structures of interest on the axial view (yellow line in the left frame), the segmentation process can be followed in real time on the other two views of the US volumes. The annotation tool is accessible by common web browsers, and it has been used to obtain and then review the manual annotations.

The manual annotation was performed by the main author of this work (L.C.), who has two years of experience in medical imaging and almost one year in US imaging for neurosurgery. A neurosurgeon with many years of experience in the use of US for tumor resection then reviewed and rated the manual annotations, taking the sparseness of the dataset into account. According to the defined criteria, each volume could be rated with a score between 1 and 4. More precisely, a score of 1 means that the main structures (falx cerebri and major sulci) are correctly segmented, and only minor changes would be needed to exclude parts of no interest (i.e., slightly over-segmented elements). A score of 2 indicates that the main structures are correctly segmented, but major corrections would be needed to exclude structures of no interest. A score of 3 indicates that main structures were missed in the manual annotations, which, however, are still acceptable. A score of 4 means that many major structures are missing and the annotation of the volume of interest cannot be accepted. The neurosurgeon evaluated the annotations by looking at the projections of the drawn contours on the sagittal and coronal views. Table 1 shows the results of the rating process for the volumes of interest.

Table 1 Rating of the manual annotations
Volumes: 1  2  3  4  6  7  12  14  15  16  17  18  19  21  24  25  27
Ranking: 2  2  3  2  2  3  2   3   2   3   2   2   2   2   2   2   2
After the contours of the main structures of interest were manually drawn, the neurosurgeon rated them according to the criterion defined in the section "Manual segmentation of anatomical structures". The criterion takes the sparseness of the manual annotations into account: a score of 4 is given to annotations in which many of the main structures of interest are missing; conversely, if minor structures of interest (i.e., minor sulci) are missing but the major ones are correctly included, the best score of 1 is given.
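The rasterization step mentioned above (turning the drawn axial contours into a 3D mask) can be illustrated with a short sketch. The contour data structure and the use of scikit-image are assumptions made for illustration; the actual annotation tool was built in MeVisLab.

```python
import numpy as np
from skimage.draw import polygon

def rasterize_contours(contours_per_slice, volume_shape):
    """Convert manually drawn axial contours into a binary 3D mask.

    contours_per_slice: dict mapping slice index z -> list of contours,
    each contour being an (N, 2) array of (row, col) vertices
    (hypothetical format).
    volume_shape: (depth, height, width) of the US volume.
    """
    mask = np.zeros(volume_shape, dtype=np.uint8)
    for z, contours in contours_per_slice.items():
        for contour in contours:
            # Fill the interior of the polygon on this axial slice.
            rr, cc = polygon(contour[:, 0], contour[:, 1],
                             shape=volume_shape[1:])
            mask[z, rr, cc] = 1
    return mask
```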
Segmentation

A convolutional neural network for volumetric segmentation is trained on the manual annotations. We utilized the original 3D U-Net architecture [29], with a few modifications with respect to the original implementation: (a) the analysis and synthesis paths have two resolution steps, and (b) before each convolution layer of the upscaling path, dropout with a value of 0.4 is used in order to prevent the network from overfitting. The training is conducted with a patch size of (30, 30, 30), padding of (8, 8, 8) and a batch size of 15 samples. The learning rate was set to 0.001, and the best model was saved according to the best Jaccard index computed on 75 samples every 100 iterations. The architecture modifications, as well as the training parameters, were chosen by conducting several experiments and selecting those providing the best results. As training, validation and test sets, we split the seventeen manually annotated volumes acquired before resection as follows: the training set includes the volumes from 1 to 15, the validation set the volumes from 16 to 21, and the test set the volumes 24, 25 and 27.

After having found the best model to segment anatomical structures in pre-resection US volumes, we applied it to segment the ultrasound volumes acquired at the other surgical phases.
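As an illustration of the described architecture, the following sketch shows a 3D U-Net reduced to two resolution steps per path, with dropout of 0.4 before the convolutions of the upscaling path. PyTorch, the channel counts and the batch-norm/ReLU blocks are assumptions (the authors' pipeline was implemented in MeVisLab); only the depth, the dropout placement and the training parameters are taken from the text.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, dropout=0.0):
    """Two 3x3x3 convolutions; optional dropout before each convolution
    (used only in the upscaling path, as described above)."""
    layers = []
    for c_in in (in_ch, out_ch):
        if dropout > 0:
            layers.append(nn.Dropout3d(dropout))
        layers.append(nn.Conv3d(c_in, out_ch, kernel_size=3, padding=1))
        layers.append(nn.BatchNorm3d(out_ch))
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class SmallUNet3D(nn.Module):
    """3D U-Net with two resolution steps in each path."""
    def __init__(self, in_ch=1, out_ch=2, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottom = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2, dropout=0.4)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base, dropout=0.4)
        self.head = nn.Conv3d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                      # analysis path
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # synthesis
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Training parameters reported in the text: learning rate 0.001,
# batches of 15 patches, model selection by Jaccard index.
model = SmallUNet3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
out = model(torch.randn(1, 1, 32, 32, 32))    # demo forward pass
```

Note that this particular sketch requires patch dimensions divisible by 4 (hence the 32-voxel demo input); the authors' framework handles the 30x30x30 patches with a padding of 8 voxels per side.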
Registration

The masks automatically segmented by our trained model are used to register the US volumes. The proposed method is a variational image registration approach based on [31]: the registration process can be seen as an iterative optimization in which the search for the correct alignment of two images corresponds to finding a global minimum of an objective function. The minimization is performed according to the discretize-then-optimize paradigm [31]: the discretization of the various parameters is followed by their optimization. The objective function to be minimized is composed of a distance measure, which quantifies the similarity between the deformed template image and the reference image, and a regularizer, which penalizes undesired transformations. In our approach, the binary 3D masks generated in the previous step are used as input for the registration task, which can therefore be treated as a mono-modality intensity-based problem. Consequently, we chose the sum of squared differences (SSD) as similarity measure, which is usually suggested for registering images with similar intensity values. Moreover, to limit the possible transformations in the deformable step, we utilized the elastic regularizer, one of the most commonly used [31]. The optimal transformation parameters are found with the quasi-Newton L-BFGS optimizer [32], chosen for its speed and memory efficiency. The stopping criteria for the optimization were defined empirically: the minimal progress, the minimal gradient, the relative gradient and the minimum step length were set to 0.001, and the maximum number of iterations to 100.

Our registration method aims to provide a deformable solution to compensate for the anatomical changes happening during tumor resection. As commonly suggested for methods involving non-rigid registration tasks [31], the proposed solution includes an initial parametric registration, which is then used to initialize the nonparametric one. First, the parametric approach utilizes the information provided by the optical tracking system as an initial guess. Based on this pre-registration, a two-step approach is conducted, consisting of a translation followed by a rigid transformation. In this stage, to speed up the optimization process, the images are registered at a resolution one level coarser than the original one. The result of the parametric registration is then utilized as the initial condition for the nonparametric step. In this stage, to reduce the chance of reaching a local minimum, a multilevel technique is introduced: the images are registered at three different scales, from three levels to one level coarser than the original resolution. The output of the registration step is the deformed template image.
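In formula form, the variational problem described above amounts to minimizing an objective of the following shape, where T denotes the template mask, R the reference mask, y the transformation and alpha the regularization weight (our notation; the value of the weight is not reported in the text):

```latex
% Variational objective: SSD distance plus elastic regularization
\mathcal{J}(y) \;=\; \mathcal{D}^{\mathrm{SSD}}\!\left(T(y),\, R\right)
\;+\; \alpha\, \mathcal{S}^{\mathrm{elastic}}(y),
\qquad
\mathcal{D}^{\mathrm{SSD}}\!\left(T(y), R\right) \;=\;
\tfrac{1}{2} \int_{\Omega} \bigl( T(y(x)) - R(x) \bigr)^{2} \, dx
```

Discretizing this objective and minimizing it with L-BFGS, subject to the stopping tolerances (0.001) and the iteration cap (100) given above, yields the deformation used in the remainder of the pipeline.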
Evaluation

Segmentation

In Table 1, we can see that no annotation received the best score of 1; all of them have some imperfections. However, none of the manually annotated masks was scored 4. Consequently, we can consider our annotations a sparse ground truth in which only the main hyperechogenic structures of interest are included. In this regard, CNNs trained on sparse datasets have already proven able to segment more refined and more numerous structures than those contained in the sparse training set [29, 30]. Therefore, we expect our annotations to be good enough to train the CNN model to generate meaningful structures for guiding the subsequent registration step. In fact, the registration step gives important feedback about the quality of the generated masks: for our purposes, the segmented structures are meaningful if they correctly guide the registration method. In addition, an analysis of the segmentation results is provided, as described in the following.

Regarding the US volumes acquired before resection, no ground truth is available for the structures not contained in the manual annotations. Consequently, DICE coefficients are computed by including only the automatically segmented elements with correspondences in the manual annotations and discarding elements having no counterpart in the manually annotated data. This measure is useful to verify whether the main structures of interest are correctly segmented by the trained model. As further information, we also provide the DICE coefficients computed without excluding any structure. These values are useful for a deeper analysis of our algorithm but, as mentioned above, they may not be very indicative for our purposes due to the sparseness of our annotations. Furthermore, the automatically generated masks should also include more refined elements than the original ground truth. To verify this, a first visual assessment of the generated masks is performed. Moreover, the over-segmented elements are expected to have a mean intensity value as close as possible to that of the manually annotated structures. To verify this, we compared the mean intensity values of the manual annotations and of the automatically generated masks.

Regarding the US volumes acquired during and after resection, no manual annotation was obtained, so no DICE index could be computed. Therefore, to make sure that the structures of interest are correctly segmented, we show that the masks of the three stages of US data segmented by our trained model (a) are strongly correlated in terms of volume extension, by computing the Pearson correlation coefficient, and (b) include structures with a mean intensity value similar to the manual annotations. Secondly, we conduct a visual inspection of the results, which is helpful to verify whether corresponding anatomical structures are segmented in these stages. Given that our annotations are not publicly available, only a qualitative comparison is made with other methods that also proposed a US segmentation solution in the context of neurosurgery [27, 28, 30, 33].

Registration

The transformations and deformation fields computed in the parametric and nonparametric steps are applied to the landmarks contained in the datasets. The TRE values before and after registration are provided per patient, with the distances of the closest and farthest landmark pairs, and mean and standard deviation values are also given per set of landmarks. A visual inspection of the registration results is also shown, in which the initial registration based on the information of the optical tracker can be compared with the results obtained by our method. Moreover, a comparison with previous solutions is provided. In this regard, several methods have been proposed to register the BITE volumes [17, 19, 24, 25], but only one of them [25] provided a generalized solution able to register the volumes of both datasets (BITE and RESECT). In contrast, our method provides an approach valid for both datasets. For the RESECT dataset, the authors of [25] proposed a solution only for volumes acquired before and after resection. Our approach is the first to be applied to the RESECT volumes acquired before and during resection; therefore, no comparison is available for this specific set.

The capture range of our method is also computed. We define the capture range as the largest initial misalignment within which our algorithm still converges to a solution in 80% of the cases. To evaluate it, we started the registration from multiple initial misalignments and checked whether the method converged to a solution. We then computed the value of the capture range using as distance measure the mTRE computed on the available landmarks.
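Both evaluation measures can be summarized in a short sketch: the mTRE over corresponding landmarks, and the capture-range estimate obtained by re-running the registration from perturbed initializations. The convergence criterion (`success_tre`) is a hypothetical placeholder, since the text does not state how convergence was decided.

```python
import numpy as np

def mtre(landmarks_fixed, landmarks_moving, transform):
    """Mean target registration error between corresponding landmarks.

    landmarks_*: (N, 3) arrays of corresponding points in mm;
    transform: callable mapping (N, 3) moving points into the
    fixed volume's coordinate system.
    """
    errors = np.linalg.norm(transform(landmarks_moving) - landmarks_fixed,
                            axis=1)
    return errors.mean(), errors.min(), errors.max()

def capture_range(register, perturbations, landmarks_f, landmarks_m,
                  success_tre=3.0):
    """Largest initial misalignment (in mm) for which at least 80% of
    the perturbed registrations still converge; convergence is judged
    here by the final mTRE falling below `success_tre` (assumption)."""
    largest = 0.0
    for magnitude, starts in perturbations.items():  # mm -> initializations
        outcomes = []
        for initial_transform in starts:
            final_transform = register(initial_transform)
            mean_err, _, _ = mtre(landmarks_f, landmarks_m, final_transform)
            outcomes.append(mean_err < success_tre)
        if np.mean(outcomes) >= 0.8:
            largest = max(largest, magnitude)
    return largest
```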
Results

Segmentation

Figure 2 shows an example of a segmented structure in a volume acquired before resection. It can be seen that the generated masks cover the locations where the landmarks were acquired. In fact, we decided to segment sulci and falx cerebri, which are the anatomical elements on which the majority of the landmarks in the RESECT dataset were placed.

Fig. 2 Segmentation and landmarks. Original intensity volumes with the generated masks (in green) and the RESECT landmarks (purple squares) overlaid. In the RESECT dataset, landmarks have been taken in proximity of deep grooves and corners of sulci, convex points of gyri and vanishing points of sulci. We chose to segment sulci and falx cerebri, and therefore the landmarks are closely located to the segmented structures.

Regarding the US data acquired before resection, Table 2a provides the DICE coefficients computed between the manually segmented structures and the corresponding masks generated by our trained model. In Table 2b, the DICE coefficients for the whole set of generated masks (without excluding the elements not included in the manual annotation) are given. Furthermore, the first and third bars in Fig. 3 show that the structures automatically segmented in pre-resection volumes have a mean intensity value very similar to those chosen in the manual annotations. A similar consideration can be made for the elements considered as background (second and fourth bars in Fig. 3).

Table 2 DICE coefficients for volumes acquired before surgery
Volumes:    1  2  3  4  6  7  12  14  15  16  17  18  19  21  24  25  27
(a) Dice %: 68 62 57 76 71 56 78  76  78  61  62  70  70  63  74  68  69
(b) Dice %: 62 46 28 59 50 46 67  67  63  53  45  35  61  42  58  44  51
(a) refers to the DICE coefficient computed by considering only the structures with a counterpart in the manual annotations; the method shows evidence of being able to properly segment the anatomical structures considered in the manual annotations. (b) refers to the DICE values for the whole set of automatically segmented structures.
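The two DICE variants of Table 2 can be reproduced along the following lines. Keeping only the predicted connected components that overlap the manual annotation is our reading of "structures with a counterpart"; the scipy-based implementation is an assumption.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Standard DICE overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def dice_corresponding_only(pred, manual):
    """DICE of Table 2a: discard predicted structures that have no
    counterpart in the sparse manual annotation."""
    labels, n = ndimage.label(pred)            # connected components
    kept = np.zeros(pred.shape, dtype=bool)
    for i in range(1, n + 1):
        component = labels == i
        if np.logical_and(component, manual).any():
            kept |= component                  # component overlaps GT
    return dice(kept, manual)

# Table 2b corresponds to the unfiltered overlap: dice(pred, manual)
```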
Qualitative results confirm this evidence. Figure 4 shows four examples of automatically generated masks in comparison with the corresponding manual annotations. In most of the cases, our method correctly segments refined elements that were not included in the manual annotation due to the time restriction (see "Manual segmentation of anatomical structures"); violet squares highlight some examples of these structures. However, in several cases the neural network wrongly segments pathological tissue that we had excluded from the manual annotations (see the blue squares in Fig. 4d).

Fig. 3 Intensity values of the masked ultrasound volumes. This graph presents the mean intensity values of the masked ultrasound volumes (first, third, fifth and seventh bars) acquired at the three stages, and the mean intensity values of the areas excluded by the segmentation (second, fourth, sixth and eighth bars). For the volumes acquired before resection, volumes masked with the manual annotation and with the elements segmented by the neural network are compared (first four bars). The masked volumes have in all cases a similar mean value, higher than the excluded areas. This is meaningful, since our elements of interest are the bright (hyperechogenic) structures in the US. In contrast, the even-numbered bars have a similar mean intensity value, lower than the chosen structures: we are not interested in hypoechogenic structures, which look darker in the US acquisitions.

Fig. 4 Segmentation of ultrasound volumes acquired before resection. In each example, the axial, sagittal and coronal views are shown in the first, second and third row, respectively. In the first column, the original ultrasound volume is exhibited; in the second column, the manual annotation performed on the axial view and projected onto the other two views is shown; in the third column, the segmentation result obtained by the 3D U-Net for the same volume of interest is displayed. In each example, a pointer (intersection of the yellow crossing lines) highlights the same volume position in the three views. Our method correctly segments the main structures. Moreover, structures wrongly not included in the manual annotations are correctly detected by the trained neural network (purple squares). However, in image d, pathological tissue correctly excluded in the original masks is wrongly segmented by our method (blue squares in the axial view).

For the volumes acquired during and after resection, a strong correlation exists between the extension of their masks segmented by the neural network and that of the volumes before resection: the Pearson coefficient between the masks of US data acquired before and during resection has a value of 0.90, and a value of 0.91 for those of pre- and post-removal. As for the US data acquired before resection, Fig. 3 shows that the anatomical structures segmented at the different stages have a mean intensity similar to the manual annotation (last four bars). Therefore, we can state that our segmentation method, applied to volumes acquired at different stages, segments structures related to each other in terms of volume extension and mean intensity values. The visual results in Figs. 5 and 6 confirm the quantitative results, showing that our model, trained on one stage of US, correctly segments analogous elements in volumes acquired at the other stages. However, the qualitative results in Fig. 6 also show that our method often detects resection cavities, which have no corresponding structures in the pre-resection volumes.

Fig. 5 Segmentation of ultrasound volumes acquired during resection. After having trained the neural network on the stage before resection, we applied it to ultrasound volumes acquired during resection. This figure shows four examples of segmentation results, each containing one intensity volume together with the generated mask. It appears clear how the main hyperechogenic structures are correctly included in the segmentation.

Fig. 6 Segmentation of ultrasound volumes acquired after resection. After having trained the neural network on the stage before resection, we applied it to ultrasound volumes acquired after resection. This figure shows four examples of segmentation results, each containing one intensity volume together with the generated mask. It appears clear how the main hyperechogenic structures are correctly included in the segmentation. In the last two examples (second row), we see how resection cavities (appearing hyperechogenic on US) are segmented by the 3D U-Net, even though they have no counterparts in the pre-resection stage.

Registration

The mean time required by the registration tasks is given in Table 3, together with the mean time required to segment each volume with the trained model. All experiments were run on a computer equipped with an Intel Core i7 and a GeForce GTX 1080 (8 GB).

Table 3 Mean time in seconds per task
Segmentation (inference): 1.28
Total registration (US before–during): 28.55
Total registration (US during–after): 29.40
By segmentation, we indicate the inference process in which the trained model generates the mask of an input volume. The other two values are related to the registration tasks, including the time of both the parametric and nonparametric approaches.

Relying on the masks automatically generated in the segmentation step, we registered the US volumes acquired at different surgical stages. First, the volumes acquired before and during resection are registered. Then, our algorithm is applied to the volumes acquired during and after resection. The computed deformation fields are applied to the landmarks provided in the RESECT dataset, and the results after registration are shown in Table 4 (volumes acquired before and during resection) and Table 5 (volumes acquired before and after resection). For the results in Table 5, the registration of the landmarks is performed by concatenating two different transformations: the one computed for the before–during US volumes together with the one for the volumes acquired during and after resection (see Fig. 7 for a more detailed description).

Fig. 7 Registration of different US volume pairs. Instead of directly registering pre-resection US data with those after resection (continuous line), a two-step method (dotted arrows) is proposed by including the US volumes acquired during resection. The final transformation from before- to after-resection volumes is obtained by concatenating two different registration results (US before resection to US during resection + US during resection to US after resection).
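The concatenation of Fig. 7 can be expressed as a simple composition of the two estimated point mappings. The sketch below assumes that each registration step is available as a callable acting on point coordinates, and it glosses over the fixed/moving direction conventions of the underlying deformation fields, which the text does not spell out.

```python
def compose(t_before_during, t_during_after):
    """Concatenate the two registration results of Fig. 7: points from
    the before-resection volume are first mapped into the
    during-resection volume and from there into the after-resection
    volume."""
    return lambda points: t_during_after(t_before_during(points))

# Landmarks annotated before resection, propagated to after resection:
# t_total = compose(t_before_during, t_during_after)
# landmarks_after = t_total(landmarks_before)
```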
As can be seen in both tables, both the parametric and nonparametric methods reduce the initial mean registration errors provided in the RESECT dataset. In Table 4, it can be noticed that the proposed methodology improves the initial mTRE by more than 2 mm, decreasing the mean errors for each patient. For the second registration task, our method reduces the mean registration error by nearly 1.5 mm (Table 5). The visual examples provided in Figs. 8 and 9 confirm the numerical results. The images show the fixed volumes with the related segmentation (in red), together with the mask of the moving volumes (in green). By comparing the overlays before and after registration, we highlight the registration improvements by coloring the correct overlap of the two masks in yellow. Regarding the results on the RESECT dataset, only those obtained for volumes acquired before and after resection can be compared with another solution [25] (see Table 5).

Our segmentation-based registration method was then applied to the BITE dataset, directly registering volumes acquired before and after resection. The results are available in Table 6, with a comparison to previously proposed solutions (last section of Table 6). As can be seen, also for this dataset the initial mTRE is reduced by both the parametric and nonparametric registration approaches.

The capture range of our method is 6.25 mm.

Table 4 Registration errors on the RESECT dataset: mean distance (range) in mm, before versus during resection
Patient | Number of landmarks | Mean initial distance | After parametric registration | After nonparametric registration
1  | 34 | 2.32 (1.49–3.29) | 0.93 (0.31–1.73) | 0.89 (0.22–1.57)
2  | 16 | 3.10 (1.79–5.19) | 1.54 (0.37–3.58) | 1.69 (0.71–4.19)
3  | 17 | 1.93 (0.67–3.02) | 1.20 (0.36–2.47) | 1.14 (0.24–2.45)
4  | 19 | 4.00 (3.03–5.22) | 0.89 (0.31–1.86) | 0.83 (0.24–1.65)
6  | 21 | 5.19 (2.60–7.18) | 1.95 (0.63–3.71) | 1.80 (0.58–3.61)
7  | 22 | 4.69 (0.94–8.16) | 2.50 (1.24–5.78) | 2.39 (1.15–5.86)
12 | 24 | 3.39 (1.74–4.81) | 1.57 (0.43–3.20) | 1.58 (0.44–3.36)
14 | 22 | 0.71 (0.42–1.59) | 0.52 (0.09–1.15) | 0.52 (0.12–0.93)
15 | 21 | 2.04 (0.85–2.84) | 0.80 (0.28–1.44) | 0.73 (0.18–1.31)
16 | 19 | 3.19 (1.22–4.53) | 1.52 (0.95–2.21) | 1.40 (0.75–2.43)
17 | 17 | 6.32 (4.65–8.07) | 2.93 (1.67–4.46) | 2.51 (1.14–4.03)
18 | 23 | 5.06 (1.55–7.44) | 1.75 (0.70–3.04) | 1.29 (0.46–2.81)
19 | 21 | 2.06 (0.42–3.40) | 1.93 (0.20–3.19) | 1.33 (0.48–2.67)
21 | 18 | 5.10 (3.37–5.94) | 1.27 (0.19–3.53) | 1.22 (0.19–3.46)
24 | 21 | 1.76 (1.16–2.65) | 0.89 (0.18–2.17) | 0.81 (0.08–2.07)
25 | 20 | 3.60 (2.19–5.02) | 3.56 (2.09–5.14) | 2.27 (1.04–3.92)
27 | 16 | 4.93 (3.61–7.01) | 0.77 (0.24–1.35) | 0.71 (0.19–1.26)
Mean |  | 3.49 ± 1.55 | 1.56 ± 0.82 | 1.36 ± 0.61
Mean registration errors between ultrasound volumes acquired before and during resection. The original distances are compared to the results obtained with our segmentation-based registration.
Table 5 Registration errors on the RESECT dataset: mean distance (range) in mm, before versus after resection
Patient | Number of landmarks | Mean initial distance | After parametric registration | After nonparametric registration
1  | 13 | 5.80 (3.62–7.22)  | 2.69 (0.93–4.08)  | 2.67 (0.75–4.18)
2  | 10 | 3.65 (1.71–6.72)  | 2.32 (0.90–4.25)  | 2.18 (0.55–3.93)
3  | 11 | 2.91 (1.53–4.30)  | 1.63 (0.82–2.48)  | 1.53 (0.82–2.25)
4  | 12 | 2.22 (1.25–2.94)  | 1.05 (0.46–1.95)  | 1.06 (0.30–2.05)
6  | 11 | 2.12 (0.75–3.82)  | 1.91 (0.47–3.06)  | 1.88 (0.24–2.93)
7  | 18 | 3.62 (1.19–5.93)  | 2.29 (0.92–4.13)  | 2.08 (0.70–3.93)
12 | 11 | 3.97 (2.58–6.35)  | 1.60 (0.54–4.73)  | 1.44 (0.61–4.51)
14 | 17 | 0.63 (0.17–1.76)  | 0.63 (0.11–1.84)  | 0.57 (0.09–1.52)
15 | 15 | 1.63 (0.62–2.69)  | 0.85 (0.12–2.13)  | 0.88 (0.23–2.38)
16 | 17 | 3.13 (0.82–5.41)  | 2.40 (0.61–4.70)  | 2.14 (0.79–4.35)
17 | 11 | 5.71 (4.25–8.03)  | 3.82 (2.36–6.68)  | 3.40 (1.91–6.28)
18 | 13 | 5.29 (2.94–9.26)  | 2.19 (1.14–4.32)  | 1.51 (0.65–2.92)
19 | 13 | 2.05 (0.43–3.24)  | 4.00 (1.42–14.27) | 3.97 (0.91–15.29)
21 | 9  | 3.35 (2.34–5.64)  | 1.23 (0.29–3.20)  | 1.18 (0.28–3.16)
24 | 14 | 2.61 (1.96–3.41)  | 0.86 (0.18–2.26)  | 0.79 (0.13–2.02)
25 | 12 | 7.61 (6.40–10.25) | 5.75 (4.39–8.34)  | 3.88 (2.74–6.07)
27 | 12 | 3.98 (3.09–4.82)  | 3.77 (2.22–5.10)  | 3.76 (2.24–5.30)
Mean |  | 3.54 ± 1.75 | 2.29 ± 1.37 | 2.05 ± 1.12
Other methods, mTRE after registration: [25] 1.49 mm
Mean registration errors between ultrasound volumes acquired before and after resection. The original distances are compared to the results obtained with our segmentation-based registration. Moreover, a comparison is made with a previous method proposed for this task.

Fig. 8 Registration results for before- and during-resection volumes. The images show four examples of registration, combining the fixed volumes (during resection) with their segmented structures (in red) and the segmented elements of the moving volumes acquired before resection (in green). In the first column of each example, we show the segmentation overlay according to the original information. The second column displays the overlay of the segmented structures after registration. By highlighting in yellow the correct overlap of segmented structures, we can see how the structures are more aligned after the performed registration.

Fig. 9 Registration results for before- and after-resection volumes. These images show four examples of registration, combining the fixed volumes (after resection) with their segmented structures (in red) and the segmented elements of the moving volumes acquired before resection (in green). In the first column of each example, we show the segmentation overlay according to the original information. The second column displays the overlay of the segmented structures after registration. By highlighting in yellow the correct overlap of segmented structures, we can see how the structures are more aligned after the performed registration.
Discussion

The manual annotations, even if sparse, are good enough to train the CNN model to segment the anatomical structures of interest, as shown by the DICE coefficients in Table 2. Moreover, Fig. 4 shows that the automatically generated segmentations are more precise than the manual annotations, with better contour refinement and a larger number of identified structures. However, some pathological tissue is wrongly segmented by our method (see Fig. 4d). This may be due to the fact that, in US data, grade II gliomas appear as hyperechogenic structures, with an intensity similar to the elements of interest. In future work, we could consider segmenting pathological tissue separately and then excluding it during registration. A similar consideration holds for the resection cavities in volumes acquired during and after resection, which appear as bright as sulci and are wrongly segmented by the proposed method (Fig. 6). Furthermore, a qualitative comparison with other segmentation methods involving US data highlights some advantages of our approach. First, with respect to [27, 33], a higher number of anatomical structures is included in our manual annotations; therefore, the potential range of clinical scenarios in which our method could be applied might be wider. Secondly, a trained neurosurgeon has clinically validated the manual annotations (Table 1). This is not the case for other segmentation-based methods [28, 30], in which no precise rating of the manual masks is provided.
The second important contribution of this work is the registration of US volumes acquired at different surgical stages. First of all, the segmentation method gives evidence of being able to generate meaningful masks to guide the registration task. In fact, the proposed registration method is able to reduce the mTREs of three sets of volumes from two different datasets (Tables 4, 5, 6) by using the previously segmented corresponding anatomical structures. From the numerical and visual results, we can notice that even if minor corresponding segmented elements are missing in a volume pair, our method is still able to reduce the initial registration errors. However, in the case of volumes acquired after removal, resection cavities may be segmented by our method due to their intensity being similar to the sulci. Consequently, the mTRE in Table 5 is reduced less than in Table 4, since these structures have no or few corresponding elements in volumes acquired at previous steps. This is a limiting factor of our registration method, which is completely based on the masks generated by our trained model. In future work, we could try to segment such structures and exclude them during the registration. Only one other work [25] focused on the registration of the RESECT US volumes acquired before and after resection (Table 5). The mTRE obtained by that approach is better than ours, which, however, is the first to provide results for the volumes obtained before and during resection of the RESECT dataset. On this set of volumes, our registration performs quite well, reducing the initial mTRE to 1.36 mm.

Regarding the BITE dataset, our algorithm improves the initial registration (see Table 6), proving not to be over-tuned to the RESECT dataset. Note that, in contrast to our approach, all the other methods compared in Table 6 have only been tested on the BITE dataset. Thus, their results may be over-tuned to this limited set of volumes, and the approaches could lack generalization. In contrast, our solution is the second one, after [25], to propose a more generalized method tested on registering the volumes of both the RESECT and BITE datasets. Therefore, our method is validated on a larger number of US acquisitions, providing a more generalized solution. Nevertheless, there might be some reasons why a few other approaches achieve smaller average mTREs on the BITE dataset (last section of Table 6). First of all, a numerically impactful factor for our results comes from case 12, where the TRE increases from 10.54 to 11.08 mm, affecting the overall result. The capture range of our method is too low to register this volume pair, which has a very large initial misalignment. In future work, we could improve the results by performing an initial registration that increases the capture range of our method. Moreover, the limited improvement obtained by our method might be due to the lower quality of the BITE dataset with respect to the RESECT volumes, which were used for training the segmentation approach. Since our registration method is based on the generated masks, it is almost impossible for the registration to converge to the right solution if the segmented masks are not accurate enough.

The total time required by each task of our method is visible in Table 3: the segmentation step requires 1.28 s, while 28.55 s (before/during) and 29.40 s (during/after) are needed to register the generated 3D masks. In addition, we should also take into account the time needed to reconstruct a 3D US volume from 2D images, which is a few seconds [14]. Considering the increase in brain shift over time and the average duration of a neurosurgical procedure [34], our algorithm is fast enough to register US volumes and therefore provides a meaningful solution for brain shift compensation. Nevertheless, in future work we could optimize our algorithm in order to speed up the registration step.
Conclusion

To the best of our knowledge, our solution is the first to propose a segmentation-based registration method that registers US volumes acquired at different surgical stages. Our approach provides several important contributions. Regarding the segmentation step, a model based on a 3D U-Net has been trained on a large number of anatomical structures, whose manual annotations have been validated by an experienced neurosurgeon. Even though the training is performed on a sparse set of annotations, the proposed solution is able to automatically segment hyperechogenic elements in US volumes. Moreover, the segmented anatomical structures prove to be meaningful elements that can guide the registration of US volumes acquired in the neurosurgical context. In fact, for two different datasets of US volumes acquired at different surgical stages, the initial mTREs are correctly reduced, demonstrating that our solution is not over-tuned to a specific dataset. Moreover, our work is the first to be applied to the US volumes of the RESECT dataset acquired during resection, for which no previous results have been published.
Table 6 Registration errors on the BITE dataset: mean distance (range) in mm, before versus after resection
Patient | Mean initial distance | After parametric registration | After nonparametric registration
2  | 2.30 (0.57–5.42)   | 1.97 (0.69–4.84)   | 1.70 (0.51–4.70)
3  | 3.40 (0.0–5.09)    | 3.12 (0.43–4.73)   | 1.45 (0.25–3.48)
4  | 4.60 (2.96–5.88)   | 3.62 (2.43–4.81)   | 2.20 (1.02–3.62)
5  | 4.11 (2.58–5.52)   | 3.68 (2.39–5.04)   | 1.32 (0.46–2.96)
6  | 2.26 (1.36–3.10)   | 1.33 (0.56–2.00)   | 1.17 (0.30–1.64)
7  | 3.87 (2.60–5.07)   | 1.27 (0.83–2.36)   | 1.17 (0.75–1.75)
8  | 2.51 (0.67–3.93)   | 2.11 (0.54–3.35)   | 2.18 (1.05–3.43)
9  | 2.21 (1.00–4.59)   | 1.99 (0.64–4.52)   | 1.95 (0.58–4.54)
10 | 3.86 (0.98–6.68)   | 3.81 (1.91–6.22)   | 3.43 (1.53–5.69)
11 | 2.74 (0.44–8.22)   | 2.74 (0.51–7.55)   | 2.39 (0.35–7.42)
12 | 10.54 (7.85–13.04) | 10.88 (8.28–13.34) | 11.08 (8.36–13.72)
13 | 1.62 (1.33–2.21)   | 0.73 (0.39–1.43)   | 0.75 (0.30–1.67)
14 | 2.19 (0.59–3.99)   | 1.60 (0.60–2.81)   | 1.43 (0.41–2.24)
Mean | 3.55 ± 2.28 | 2.98 ± 1.80 | 2.48 ± 2.67
Other methods, mTRE after registration: [17] 1.50 mm; [19] 1.50 mm; [24] 1.50 mm; [25] 1.54 mm
Mean registration errors between ultrasound volumes acquired before and after resection. In this dataset, a fixed number of landmarks (10 per case) is provided. The original distances are compared to the results obtained with our segmentation-based registration. Moreover, a comparison with previously proposed solutions (with corresponding mTREs) is provided.

Acknowledgements This work was funded by the H2020 Marie-Curie ITN TRABIT (765148) project.

Compliance with ethical standards

Ethical approval Two publicly available datasets are used. This article does not contain any studies with human participants performed by any of the authors.

Conflict of interest The authors declare that they have no conflict of interest.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References
Tronnier VM, Bonsanto MM, Staubert A, Knauth M, Kunze S, 8. Kubben PL, ter Meulen KJ, Schijns OE, ter Laak-Poort MP, van Wirtz CR (2001) Comparison of intraoperative MR imaging and Overbeeke JJ, van Santbrink H (2011) Intraoperative MRI-guided 1 3 International Journal of Computer Assisted Radiology and Surgery (2019) 14:1697–1713 1713 resection of glioblastoma multiforme: a systematic review. Lancet postresection 3-dimensional ultrasound for improved visualiza- Oncol 12(11):1062–1070 tion of residual brain tumor. Ultrasound Med Biol 39(1):16–29 9. Nimsky C, Ganslandt O, Von Keller B, Romstöck J, Fahlbusch R 23. Mercier L, Del Maestro RF, Petrecca K, Araujo D, Haegelen C, (2004) Intraoperative high-field-strength MR imaging: implemen- Collins DL (2012) Online database of clinical MR and ultrasound tation and experience in 200 patients. Radiology 233(1):67–78 images of brain tumors. Med Phys 39(6):3253–3261 10. Mittal S, Black PM (2006) Intraoperative magnetic resonance 24. Zhou Hang, Rivaz H (2016) Registration of pre- and postresection imaging in neurosurgery: the Brigham concept. Acta Neurochir ultrasound volumes with noncorresponding regions in neurosur- Suppl 98:77–86 gery. IEEE J Biomed Health Inform 20:1240–1249 11. Unsgaard G, Rygh OM, Selbekk T, Müller TB, Kolstad F, Lind- 25. Machado I, Toews M, Luo J, Unadkat P, Essayed W, George E, seth F, Hernes TA (2006) Intra-operative 3D ultrasound in neu- Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken rosurgery. Acta Neurochir 148(3):235–253 S, Golby A, Wells W 3rd (2018) Non-rigid registration of 3D 12. Unsgaard G, Gronningsaeter A, Ommedal S, Nagelhus Hernes ultrasound for neurosurgery using automatic feature detection and TA (2002) Brain operations guided by real-time two-dimensional matching. Int J Comput Assist Radiol Surg 13(10):1525–1538 ultrasound: new possibilities as a result of improved image quality. 26. Toews M, Wells WM III (2013) Efficient and robust model-to- Neurosurgery 51(2):402–411 image registration using 3D scale-invariant features. Med Image 13. Unsgaard G, Ommedal S, Muller T, Gronningsaeter A, Nagelhus Anal 17(3):271–282 Hernes TA (2002) Neuronavigation by intraoperative three-dimen- 27. Nitsch J, Klein J, Dammann P, Wrede K, Gembruch O, Moltz sional ultrasound: initial experience during brain tumor resection. JH, Meine H, Sure U, Kikinis R, Millerce D (2019) Automatic Neurosurgery 50(4):804–812 and ec ffi ient MRI-US segmentations for improving intraoperative 14. Xiao Y, Fortin M, Unsgård G, Rivaz H, Reinertsen I (2017) REt- image fusion in image-guided neurosurgery. NeuroImage: Clin roSpective Evaluation of Cerebral Tumors (RESECT): a clinical 22:101766 database of pre-operative MRI and intra-operative ultrasound in 28. Rackerseder J, Baust M, Göbl R, Navab N, Hennersperger C low-grade glioma surgeries. Med Phys 44(7):3875–3882 (2018) Initialize globally before acting locally: enabling land- 15. LeRoux PD, Winter TC, Berger MS, Mack LA, Wang K, Elliott JP mark-free 3D US to MRI registration. In: MICCAI (1994) A comparison between preoperative magnetic resonance 29. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O and intraoperative ultrasound tumor volumes and margins. J Clin (2016) 3D U-Net: learning dense volumetric segmentation from Ultrasound 22(1):29–36 sparse annotation. MICCAI 16. Rygh OM, Selbekk T, Torp SH, Lydersen S, Hernes TA, Unsgaard 30. 
Göbl R, Rackerseder J, Navab N, Hennersperger C (2019) Fully G (2008) Comparison of navigated 3D ultrasound findings with automatic segmentation of 3D brain ultrasound: learning from histopathology in subsequent phases of glioblastoma resection. coarse annotations. arXiv preprint arXiv1904.08655 Acta Neurochir 150:1033–1042 31. Modersitzki J (2009) Flexible algorithms for image registration. 17. Rivaz H, Collins DL (2015) Near real-time robust nonrigid regis- SIAM, Philadelphia tration of volumetric ultrasound images for neurosurgery. Ultra- 32. Liu DC, Nocedal J (1989) On the limited memory BFGS method sound Med Biol 41(2):574–587 for large scale optimization. Math Program 45:503–528 18. Selbekk T, Jakola AS, Solheim O, Johansen TF, Lindseth F, Rein- 33. Nitsch J, Klein J, H. Moltz J, Miller D, Sure U, Kikinis R, Meine ertsen I, Unsgård G (2013) Ultrasound imaging in neurosurgery: H (2019) Neural-network-based automatic segmentation of cere- approaches to minimize surgically induced image artefacts for bral ultrasound images for improving image-guided neurosurgery. improved resection control. Acta Neurochir 155:973–980 In: SPIE medical imaging 19. Rivaz H, Collins DL (2015) Deformable registration of preop- 34. Trantakis C, Tittgemeyer M, Schneider JP, Lindner D, Winkler D, erative MR, pre-resection ultrasound, and post-resection ultra- Strauss G, Meixensberger J (2003) Investigation of time-depend- sound images of neurosurgery. Int J Comput Assist Radiol Surg ency of intracranial brain shift and its relation to the extent of 10(7):1017–1028 tumor removal using intra-operative MRI. Neurol Res 25:9–12 20. Letteboer MML, Viergever MA, Niessen WJ (2003) Rigid registration of 3D ultrasound data of brain tumours. Elsevier, Publisher’s Note Springer Nature remains neutral with regard to Amsterdam jurisdictional claims in published maps and institutional affiliations. 21. Letteboer MMJ, Willems PWA, Viergever MA, Niessen WJ (2013) Non-rigid registration of 3D ultrasound images of brain tumours acquired during neurosurgery. In: MICCAI 22. Mercier L, Araujo D, Haegelen C, Del Maestro RF, Petrecca K, Collins DL (2013) Registering pre- and 1 3

Journal

International Journal of Computer Assisted Radiology and SurgerySpringer Journals

Published: Aug 7, 2019

There are no references for this article.