Building generic anatomical models using virtual model cutting and iterative registration

Abstract

Background: Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms.

Methods: The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models.

Results: After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and surface model for the generic 3D model are created at the final step.

Conclusions: Our method is very flexible and easy to use such that anyone can use image stacks to create models and retrieve a sub-region from it at their ease. Java-based implementation allows our method to be used on various visualization systems including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately.

Background

Spatial information of biological structures has been used to analyze their functions and to relate their shape changes to various genetic parameters [1-4]. In particular, using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research [1,2,4-10]. In order to be suitable for statistical analysis, a generic 3D model must be a single averaged model representing all individual 3D models in the same population of a study [5,11]. An averaged 3D model is a commonly used form of a generic 3D model. The creation of an averaged model captures information that can be exploited in statistical analysis of real populations. By comparing averaged models and dispersion around them, anatomical differences can be quantified across groups that differ in some underlying causal or exploratory factors, such as genetics, gender, and drug treatment [3].
The comparisons can be made between 'static' morphological states, where the subjects for comparison are at the same developmental state, or they can be made between 'dynamic' states, where comparisons are made between various stages of the subject's growth. Therefore, a technique for creating high throughput 3D generic models is needed to collect and manage large numbers of subjects quickly and efficiently. Such a technique will enable researchers to discover a wide range of traits of interest to them in both natural and clinical settings. Generic 3D models can also be used in automatic segmentation [1], medical education, virtual crash testing, therapy planning and customizing replacement body parts [11,12]. Hence, in medical and biological studies, 3D generic models built for a range of populations are in high demand.

In order to create valid 3D generic models from 2D image stacks, more attention should be paid to two essential steps: image segmentation and image registration. Image registration is the process of finding a 3D transformation that can map the same anatomical region from one subject onto another. This process is essential in clinical and research applications because researchers often need to compare the same anatomical region scanned using different modalities or at different time points [13]. Image segmentation is needed when we try to retrieve the spatial information of certain biological structures after applying in vivo imaging technologies such as MRI. This step is generally indispensable because 3D image stacks generated from in vivo scanners usually contain a large amount of superfluous information that is irrelevant to immediate diagnostic or therapeutic needs.

With the tremendous advancements in medical imaging technologies such as CT, PET, MRI, and fMRI, we are now able to capture images of biological structures and their functions more clearly than ever before. Additionally, advanced technologies from other fields such as computer vision, computer graphics, image processing and artificial intelligence have been used to analyze 2D medical images of various modalities [1]. However, due to the complexity of biological structures and the way their shape information overlays on medical images, it is still an exceptionally difficult task to quickly and accurately create 3D generic models for a population of a study.

Due to the difficulties with automating the segmentation task, enhanced manual segmentation software is still widely used. Various image processing algorithms have been produced to minimize user interactions and increase segmentation accuracy [14]. However, the current enhanced manual segmentation approaches are still quite laborious; they often require a well-trained user to interact with every 2D image slice. Therefore, in order to achieve accurate 3D reconstruction of a region, structure, or tissue of interest [6], it is often necessary to employ specifically tailored solutions that combine and integrate different 3D segmentation algorithms [15], which may still necessitate manual segmentation on each 2D image slice. To redress such persistent drawbacks, we have developed a generalized virtual dissection-based method for creating generic models. In comparison to our previous virtual dissection technique [16], the method now allows user-defined curves for indicating cutting surfaces and employs enhanced iterative registration to better handle shape variations. In addition, the resulting software is now publicly available. We show that the creation of an averaged model that captures spatial information exploitable in statistical analyses of organ shape is facilitated by coupling our generalized segmentation method with existing automatic image registration algorithms [13].

Methods

Materials
2D image stacks from whole-body micro-computed tomography (μ-CT) scans of mice were provided by the Morphometrics Laboratory at the University of Calgary. Eight male and eight female laboratory mice from the same strain (AWS) were scanned. The female mice were 54 to 61 days old and weighed 16 to 21 grams; the male mice were 61 days old and weighed 20 to 25 grams. All individuals were scanned at a resolution of 35 μm.
Each slice of the volumetric dataset is 1024 × 1024 pixels and the intensity of each pixel ranges from 0 to 255 (Figure 1). The total number of images in a stack ranges from 2100 to 2400. The process of creating generic 3D models is illustrated by describing the creation of the 3D generic left mandible model using our method. It should be noted, however, that the left mandible was picked solely for the purpose of illustration; our method can be used for creating a 3D generic model of other anatomical structures as well.

Figure 1. A slice of a 2D image stack obtained from a whole body scan.

Overview of the method
The method pipeline contains the following major steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) cutting each model to generate a sub-model of the user's interest; (iv) making image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks from the previous step; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. All the algorithms are implemented using Java and C++ based on functionalities from the open source toolkits VTK (Visualization Toolkit [17]), ITK (Insight Segmentation and Registration Toolkit [13]) and ImageJ [18]. Both volumetric data and a surface model for the generic 3D model are created at the final step.

3D model reconstruction
Since the imaging data we have are whole-body scans of mice, the information for all the biological structures is contained in the image stacks. The sub-model of interest here is the left mandible. Instead of separating the data for the left mandible from each image, we reconstruct the skull (Figure 2) of each mouse using the Marching Cubes algorithm in VTK, based on the pixel intensity of the bone structure.
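As a concrete illustration of this reconstruction step, the sketch below extracts a bone iso-surface from a loaded stack with VTK's marching cubes filter. It is a minimal example written against current VTK class names (a 2010-era build would use the older SetInput() call); the function name, the assumption that the stack is already held in a vtkImageData, and the iso-value are illustrative choices, not taken from the authors' package.

```cpp
#include <vtkSmartPointer.h>
#include <vtkImageData.h>
#include <vtkMarchingCubes.h>
#include <vtkPolyData.h>

// Reconstruct a polygonal bone surface from a CT stack already loaded into
// a vtkImageData (reading the 2D image stack from disk is omitted here).
vtkSmartPointer<vtkPolyData> reconstructBoneSurface(vtkImageData* volume,
                                                    double boneIsoValue)
{
  vtkSmartPointer<vtkMarchingCubes> marching = vtkSmartPointer<vtkMarchingCubes>::New();
  marching->SetInputData(volume);      // VTK 6+ API; older releases use SetInput()
  marching->SetValue(0, boneIsoValue); // iso-surface at the chosen bone intensity
  marching->ComputeNormalsOn();        // normals improve shading of the skull model
  marching->Update();
  return marching->GetOutput();
}
```

The resulting vtkPolyData is the whole-skull model on which the cutting tools described next operate.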
Figure 2. Reconstructed 3D mouse skull model.

Sub-model of interest creation
Our reconstructed 3D model is a representation of the whole mouse skull. In order to retrieve the sub-model, our custom-developed cutting tools are used to cut the 3D skull model until the desired separation of the sub-model is achieved. Our cutting instruments can be a plane, ball, box, or user-defined curve. The planes, balls and boxes are all virtual models that can be manipulated interactively by using the computer mouse. As illustrated in Figure 3, the plane can be rotated, zoomed in and out, and translated, while the arrow shows the normal of the plane. Therefore users can decide where to set the plane to remove any portion that is of no interest to them. The ball and the box can also be rotated, scaled and translated using the computer mouse to remove the parts that are of no interest to the users.

Users can also simulate a cutting curve by putting a series of dots on the model through computer mouse double clicks, as Figure 4 shows. Users can manipulate the model by rotating, translating, or zooming in or out to observe the area that they are interested in. The order in which the dots are placed is significant, as they are used as the data points for interpolating a best-fitting curve. If the dots are put in counterclockwise order, the part of the model that is above or to the left of the simulated curve is removed; otherwise the part below or to the right is removed. If a closed curve is simulated, the portion enclosed by the closed curve is removed. The cutting tools are implemented using functionalities from VTK.

Figure 3. Using various cutting tools to produce a desired sub-model (left mandible).

Figure 4. User defined cutting curve. Users can choose to remove irregular sections from the model by using a series of dots to indicate the intended cutting curve.
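The simplest of these cutting instruments, the plane, can be expressed with standard VTK filters. The sketch below clips a reconstructed skull mesh against a plane placed at a given center and normal; it is a bare-bones illustration of the idea only (the interactive widget handling, and the ball, box and curve cutters, are not shown), and the function and variable names are ours rather than the authors'.

```cpp
#include <vtkSmartPointer.h>
#include <vtkPlane.h>
#include <vtkClipPolyData.h>
#include <vtkPolyData.h>

// Keep only the half of the skull mesh lying on the normal side of a cutting
// plane; the discarded half corresponds to the portion the user cuts away.
vtkSmartPointer<vtkPolyData> cutWithPlane(vtkPolyData* skull,
                                          const double center[3],
                                          const double normal[3])
{
  vtkSmartPointer<vtkPlane> plane = vtkSmartPointer<vtkPlane>::New();
  plane->SetOrigin(center[0], center[1], center[2]);
  plane->SetNormal(normal[0], normal[1], normal[2]);

  vtkSmartPointer<vtkClipPolyData> clipper = vtkSmartPointer<vtkClipPolyData>::New();
  clipper->SetInputData(skull);     // VTK 6+ API; older releases use SetInput()
  clipper->SetClipFunction(plane);  // implicit plane defines the cut
  clipper->InsideOutOff();          // keep geometry on the positive (normal) side
  clipper->Update();
  return clipper->GetOutput();
}
```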
Creating corresponding 2D image portions of the sub-model
While the users are cutting the model, all the cuts are recorded: the coordinates used by the cutting tools, such as the plane's center and normal, the sphere's center and radius, the planes that compose the box, and the dots in the user-defined cutting curve, are written to a text file. After the cutting process is finished, the intensities of the pixels in the image stack are updated according to the cutting information. Intensities of pixels that correspond to the model stay the same and the rest are set to 0. After this process is finished, we obtain a new image stack that contains only the data for the sub-model. The above steps are repeated to process all the mice image stacks to create the sub-models and the new 2D image stacks. The resulting 2D image stacks that contain only the sub-model information (see Figure 5) are registered and the generic model for the sub-model (the left mandible) is created. The production and averaging of 2D image portions are performed using functionalities in ImageJ.

Figure 5. Updated 2D image stack. Part of an updated 2D image stack showing slices 160, 170, 180, and 190 (from left to right). After the cutting process, 2D image stacks are updated using the information on the cutting tools used. 2D image stacks that contain only information about the sub-model of interest are created automatically.
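The following sketch shows how a recorded plane cut might be replayed on the raw stack to zero out the removed side, which is the essence of the intensity update just described. It treats voxel indices directly as coordinates and handles only the plane tool; image spacing, origin and the other cutting shapes are ignored, and all names are illustrative assumptions rather than the authors' implementation.

```cpp
#include <vector>
#include <cstddef>

// Replay a recorded plane cut on the image stack: voxels on the discarded side
// of the plane are set to 0, so the new stack contains only the sub-model data.
// The stack is a flat 8-bit buffer of dimX x dimY x dimZ voxels, slice by slice.
void maskStackWithPlane(std::vector<unsigned char>& stack,
                        int dimX, int dimY, int dimZ,
                        const double center[3], const double normal[3])
{
    for (int z = 0; z < dimZ; ++z)
        for (int y = 0; y < dimY; ++y)
            for (int x = 0; x < dimX; ++x)
            {
                // Signed distance of the voxel from the recorded cutting plane.
                const double d = normal[0] * (x - center[0])
                               + normal[1] * (y - center[1])
                               + normal[2] * (z - center[2]);
                if (d < 0.0)  // voxel lies on the removed side of the plane
                {
                    const std::size_t index =
                        (static_cast<std::size_t>(z) * dimY + y) * dimX + x;
                    stack[index] = 0;  // background; model voxels keep their intensity
                }
            }
}
```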
Iterative image registration
The following registration algorithms are used.

1. Rigid 3D image registration. In order to align the entire set of sub-models into the same space automatically, an intensity-based rigid 3D registration algorithm which uses a mean square metric, a linear interpolator, a versor rigid 3D transform and a versor rigid 3D transform optimizer inside ITK is used to register the images.

2. Affine 3D image registration. Due to the variations of each individual sub-model, rigid 3D image registration leaves local misalignments, and the averaged model created based on rigid image registration alone might not be an average representative. Therefore, affine 3D image registration is also available in our package to further align the models. An intensity-based affine 3D registration algorithm which uses a mean square metric, a linear interpolator, an affine transform and a regular step gradient descent optimizer inside ITK is applied for affine registration.

3. Non-rigid (deformable) image registration. The global affine transformation from the previous step might leave some remaining local shape variations. Therefore, in order to sharpen the blurry average images, a non-rigid image registration can also be used after step 2. An intensity-based deformable 3D registration algorithm which uses a mean square metric, a linear interpolator, a B-spline based transform and an LBFGS (limited-memory Broyden-Fletcher-Goldfarb-Shanno) optimizer inside ITK is applied for further deformable image registration.
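To make the first of these stages concrete, the sketch below wires together the ITK components named in item 1 above (mean-squares metric, linear interpolator, versor rigid 3D transform and its optimizer), using ITK 3.x-style class names from the toolkit cited as [13]. The file arguments, the missing optimizer scale and step tuning, and the overall program structure are our own simplifications and assumptions; the authors' actual pipeline is not reproduced here.

```cpp
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegistrationMethod.h"
#include "itkMeanSquaresImageToImageMetric.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkVersorRigid3DTransform.h"
#include "itkVersorRigid3DTransformOptimizer.h"
#include <iostream>

int main(int argc, char* argv[])
{
  typedef itk::Image<unsigned char, 3>                              ImageType;
  typedef itk::ImageFileReader<ImageType>                           ReaderType;
  typedef itk::VersorRigid3DTransform<double>                       TransformType;
  typedef itk::VersorRigid3DTransformOptimizer                      OptimizerType;
  typedef itk::MeanSquaresImageToImageMetric<ImageType, ImageType>  MetricType;
  typedef itk::LinearInterpolateImageFunction<ImageType, double>    InterpolatorType;
  typedef itk::ImageRegistrationMethod<ImageType, ImageType>        RegistrationType;

  if (argc < 3) { std::cerr << "usage: rigidReg fixedStack movingStack\n"; return 1; }

  // Fixed image: the chosen reference sub-model stack; moving image: another subject.
  ReaderType::Pointer fixedReader  = ReaderType::New();
  ReaderType::Pointer movingReader = ReaderType::New();
  fixedReader->SetFileName(argv[1]);
  movingReader->SetFileName(argv[2]);
  fixedReader->Update();
  movingReader->Update();

  RegistrationType::Pointer registration = RegistrationType::New();
  registration->SetMetric(MetricType::New());
  registration->SetOptimizer(OptimizerType::New());   // scales/step lengths omitted; tune in practice
  registration->SetInterpolator(InterpolatorType::New());

  TransformType::Pointer transform = TransformType::New();
  transform->SetIdentity();
  registration->SetTransform(transform);

  registration->SetFixedImage(fixedReader->GetOutput());
  registration->SetMovingImage(movingReader->GetOutput());
  registration->SetFixedImageRegion(fixedReader->GetOutput()->GetBufferedRegion());
  registration->SetInitialTransformParameters(transform->GetParameters());

  try
  {
    registration->Update();  // runs the optimization
  }
  catch (itk::ExceptionObject& err)
  {
    std::cerr << "Registration failed: " << err << std::endl;
    return 1;
  }

  // Six parameters: three versor components (rotation) and three translations.
  std::cout << registration->GetLastTransformParameters() << std::endl;
  return 0;
}
```

The affine and deformable stages can, roughly, follow the same pattern with itk::AffineTransform plus itk::RegularStepGradientDescentOptimizer, and itk::BSplineDeformableTransform plus itk::LBFGSOptimizer, substituted for the transform and optimizer.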
We use an iterative image registration protocol similar to the one described in [6] (see Figure 6 for a flow chart of the process).

1. We randomly pick a subject from the female group as a reference and register every image stack to this reference stack using 3D rigid registration. After each registration step, the intensities of the images are turned into binary form such that pixels with intensity 255 belong to the model and pixels with intensity 0 belong to the background. Then we average corresponding pixel intensities from all the stacks to create the averaged image stack [19]. The same registration process is applied to the male group.

2. Averaged models are created from the previous step by using the global median of the pixel intensities as the threshold value for binarizing the averaged image stack. An affine transformation based image registration is applied again to all the images that have been processed by the rigid transformation from the previous step, in the same way as described in the previous step, and new averaged image stacks are created.

3. The previous step is repeated, but this time B-Spline based deformable image registration is applied to all the images that have been processed by the affine transformation from the previous step.

4. The previous step can be applied repeatedly to all the images that have been processed by the deformable transformation from the previous step in order to achieve more accurate registrations.

Figure 6. Iterative image registration. The reference stack is iteratively refined by performing a series of 3D registration algorithms on each stack: rigid 3D image registration, affine 3D image registration, and non-rigid deformable 3D image registration. The non-rigid registration step can be repeated to achieve more accurate registration.

Intensity based image averaging
After the iterative image registration step, all image stacks of the sub-models (the left mandibles) are registered. At this point, we can use the intensity based image averaging technique described in [19]:

    I_average = (1/n) Σ_{i=1}^{n} I_i

where n is the number of registered image stacks in a group and I_i is the i-th stack. The global median of the averaged image intensities is used to apply the marching cube algorithm to the averaged image stacks [19] to extract the generic left mandible model that represents the average shape of all the left mandibles across all the subjects in the same population.
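A minimal sketch of the averaging and median-threshold binarization just described, operating on registered stacks held in memory as flat 8-bit buffers; the loading/saving and the ImageJ/ITK plumbing the authors actually use are omitted, and all names are illustrative.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Average corresponding voxels of n registered, binarized (0/255) stacks:
// I_average = (1/n) * sum(I_i). Assumes at least one stack, all equally sized.
std::vector<unsigned char> averageStacks(const std::vector<std::vector<unsigned char> >& stacks)
{
    const std::size_t voxels = stacks.front().size();
    std::vector<unsigned char> average(voxels, 0);
    for (std::size_t v = 0; v < voxels; ++v)
    {
        unsigned long sum = 0;
        for (std::size_t s = 0; s < stacks.size(); ++s)
            sum += stacks[s][v];
        average[v] = static_cast<unsigned char>(sum / stacks.size());
    }
    return average;
}

// Global median of the averaged intensities, used here as the threshold for
// re-binarizing the averaged stack before the next registration pass.
unsigned char globalMedian(std::vector<unsigned char> intensities)
{
    std::sort(intensities.begin(), intensities.end());
    return intensities[intensities.size() / 2];
}

// Binarize: 255 = model, 0 = background.
void binarize(std::vector<unsigned char>& stack, unsigned char threshold)
{
    for (std::size_t v = 0; v < stack.size(); ++v)
        stack[v] = (stack[v] >= threshold) ? 255 : 0;
}
```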
Results

Generic model building
We have developed a generalized virtual dissection-based method for the creation of generic models from 2D image stacks of a group of individuals. To illustrate our novel generic model creation technique, whole body scans of eight female mice and eight male mice are used to create averaged 3D models of the left mandible. For each subject, the left mandible 3D model is created using our cutting tools and the corresponding 2D image stack that contains only information on the left mandible is also generated.

Validation of the iterative registration
Once we have 16 left mandible models, we register the image stacks for both male and female mice. Corresponding pixels in the images of the female/male group are averaged to create an averaged image stack. Within the averaged image stack, the blurry image areas result from the misaligned sections. Therefore, the sharper the averaged images are, the better the registration process is. We use the ratio of the number of pixels with intensity 255 to the number of pixels with non-zero intensity to measure the performance of the registration process (see Table 1). The bigger the ratio is, the better the models are aligned.

Table 1. Comparison of image registration accuracy: number of pixels with intensity 255 / number of pixels with non-zero intensity after registration.

Female group
  Averaged model     Versor based 3D rigid   Affine 3D   B-Spline deformable 3D
  F2 as reference    0.4774                  0.5732      0.6241
  F3 as reference    0.4819                  0.5761      0.6470
  F4 as reference    0.4986                  0.5842      0.6658
  F5 as reference    0.4836                  0.5723      0.6458
  F6 as reference    0.4478                  0.5499      0.6598
  F7 as reference    0.4791                  0.5570      0.6307
  F8 as reference    0.4781                  0.5618      0.6406
  F9 as reference    0.4861                  0.5988      0.6546

Male group
  Averaged model     Versor based 3D rigid   Affine 3D   B-Spline deformable 3D
  M2 as reference    0.5534                  0.5954      0.6219
  M3 as reference    0.5300                  0.5904      0.6218
  M4 as reference    0.5350                  0.5871      0.6593
  M5 as reference    0.5400                  0.5939      0.6452
  M6 as reference    0.5286                  0.5844      0.6326
  M7 as reference    0.5380                  0.5899      0.6347
  M8 as reference    0.5285                  0.5960      0.6323
  M9 as reference    0.5332                  0.5912      0.6365

As illustrated in Figure 7, if only 3D rigid registration is applied, we can clearly observe misaligned areas. Once an affine transformation based registration is applied, fewer misaligned areas can be identified. From the ratios listed in Table 1, we can conclude that, after several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries.

Figure 7. Misalignments after 3D rigid registration and affine registration. Two models shown in different colors (gray and cyan) are superimposed. On the top, after 3D rigid registration, there are obvious misalignments at the front of the mandible and towards the back of the mandible. On the bottom, after the affine registration, there are fewer misaligned areas.

If we choose different initial reference subjects, will the averaged models be very different? We test this effect by choosing different subjects as the initial reference subjects to create the averaged models. We generate multiple averaged models, each using a different initial stack as the reference stack. For example, in Table 2, "Average F2" means the averaged female model using female number 2 (F2) as the reference. After producing the different averaged models, we register all of them with respect to a neutral averaged model to make their comparison meaningful and to avoid any potential bias. We used one male averaged model to register all the female group averaged models. Similarly, we registered all the male group averaged models with one female averaged model.

The Dice index [20] is used to evaluate the similarities between averaged models starting from different reference subjects, after the additional registration procedure to facilitate direct comparison. As shown in Table 2, the similarity measures range from 0.97 to 0.98 among the different averaged models. We believe that the remaining 0.02 to 0.03 differences are due to the systematic error caused by the registration process. For the female mice group, the mean Dice index is 0.976464, the standard deviation is 0.001489 and the coefficient of variation is 0.001524. For the male mice group, the mean Dice index is 0.9789, the standard deviation is 0.000698 and the coefficient of variation is 0.000713. Therefore, we can see that in this case, starting from different reference subjects does not affect the averaged models.

Table 2. Dice index evaluating the similarities between averaged models created from different initial references.

Female group
              Avg F2  Avg F3  Avg F4  Avg F5  Avg F6  Avg F7  Avg F8  Avg F9
  Average F2  1       0.9768  0.9745  0.9795  0.9767  0.9786  0.9775  0.9753
  Average F3          1       0.9768  0.9776  0.9760  0.9762  0.9770  0.9757
  Average F4                  1       0.9747  0.9745  0.9744  0.9757  0.9742
  Average F5                          1       0.9770  0.9789  0.9779  0.9759
  Average F6                                  1       0.9782  0.9774  0.9748
  Average F7                                          1       0.9779  0.9748
  Average F8                                                  1       0.9765
  Average F9                                                          1

Male group
              Avg M2  Avg M3  Avg M4  Avg M5  Avg M6  Avg M7  Avg M8  Avg M9
  Average M2  1       0.9802  0.9776  0.9780  0.9784  0.9785  0.9800  0.9787
  Average M3          1       0.9785  0.9792  0.9791  0.9787  0.9796  0.9796
  Average M4                  1       0.9794  0.9791  0.9785  0.9789  0.9781
  Average M5                          1       0.9790  0.9787  0.9793  0.9789
  Average M6                                  1       0.9792  0.9796  0.9776
  Average M7                                          1       0.9800  0.9778
  Average M8                                                  1       0.9790
  Average M9                                                          1
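For reference, the Dice index used in Table 2 reduces to a simple voxel-overlap count on the binarized models. The sketch below shows that computation on two equally sized 0/255 buffers; it is an illustrative stand-alone function, not the authors' code.

```cpp
#include <vector>
#include <cstddef>

// Dice index between two binarized averaged models (voxel value 255 = model).
double diceIndex(const std::vector<unsigned char>& a, const std::vector<unsigned char>& b)
{
    std::size_t sizeA = 0, sizeB = 0, overlap = 0;
    for (std::size_t v = 0; v < a.size() && v < b.size(); ++v)
    {
        const bool inA = (a[v] == 255);
        const bool inB = (b[v] == 255);
        if (inA) ++sizeA;
        if (inB) ++sizeB;
        if (inA && inB) ++overlap;
    }
    // Dice (1945): 2|A ∩ B| / (|A| + |B|); 1.0 means identical models.
    return (sizeA + sizeB == 0) ? 0.0 : (2.0 * overlap) / (sizeA + sizeB);
}
```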
Brandt et al. [6] tested the average shape property of the honeybee brain. They used the residual non-rigid deformation necessary to map the subjects' coordinates onto one another after they had been normalized with respect to position and size. They found that the averaged honeybee brain model produced with the iterative registration method is indeed a reasonable approximation of a shape centroid of the population. We measure the RMSE (root mean square error) of voxels between every two models and between every model and the averaged model. As shown in Table 3, the RMSE between every model and the averaged model is smaller than the RMSE between that model and every other model. Combining our RMSE computation and the test from Brandt et al. [6], we believe that using the iterative registration algorithm [6] gives us a practical average model that captures the spatial information of the population.

Table 3. Root mean square error (RMSE) between models.

Female group
       F2   F3     F4     F5     F6     F7     F8     F9     Averaged model
  F2   0    16.93  17.82  18.21  19.08  18.75  19.52  19.70  14.99
  F3        0      17.54  17.52  17.74  18.29  18.58  18.78  15.05
  F4               0      18.01  19.44  17.97  19.06  18.84  16.32
  F5                      0      17.87  16.58  18.05  15.92  15.51
  F6                             0      18.20  17.83  19.32  16.10
  F7                                    0      18.18  16.99  15.47
  F8                                           0      18.76  16.29
  F9                                                  0      17.02

Male group
       M2   M3     M4     M5     M6     M7     M8     M9     Averaged model
  M2   0    16.62  17.00  17.20  17.68  16.68  16.97  16.80  13.89
  M3        0      16.10  15.62  16.33  16.01  16.36  15.42  12.96
  M4               0      17.59  16.40  17.17  16.39  17.36  14.56
  M5                      0      18.34  17.08  16.35  15.78  14.50
  M6                             0      16.45  17.73  17.56  14.85
  M7                                    0      17.37  15.95  13.41
  M8                                           0      17.11  14.21
  M9                                                  0      13.71
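The RMSE values in Table 3 can be computed voxel-wise on the registered stacks. A minimal sketch, assuming two equally sized, already-registered 8-bit buffers (a stand-alone illustration only):

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Voxel-wise root mean square error between two registered intensity stacks.
double voxelRMSE(const std::vector<unsigned char>& a, const std::vector<unsigned char>& b)
{
    double sumSq = 0.0;
    const std::size_t n = a.size();
    for (std::size_t v = 0; v < n; ++v)
    {
        const double d = static_cast<double>(a[v]) - static_cast<double>(b[v]);
        sumSq += d * d;
    }
    return std::sqrt(sumSq / static_cast<double>(n));
}
```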
Our method is very flexible and easy to use, such that anyone can use image stacks to create models and retrieve a sub-region from them at their ease. The image registration time varies depending on the image size and the desired accuracy of the registration.

Binarization problem
Many studies have considered complicated organs such as the brain [4,9,10,21,22]. Inside the brain, different sub-regions need to be considered during the registration process. Therefore, if one uniform intensity value is used to represent the organ, homogeneous tissue mapping might not be available. However, in our study we consider organs with homogeneous intensities and structures. Therefore, we can use a single intensity value to represent the model and use it for registration and model averaging. This reduces the registration time and increases the registration accuracy.

We used a Windows PC with dual CPUs to create all the left mandible models. The machine has two 2 GHz CPUs and 2 GB of memory. In order to retrieve one left mandible model, we need to process an image stack of size 1024 × 1024 × 500. The current machine setup cannot process this image stack at one time; therefore, we process the image stack in three consecutive parts. On average we need 16.28 minutes and 14.75 cuts to retrieve a complete left mandible for the female mice group, and 16.2 minutes and 19.25 cuts for the male mice group (see Table 4). These times include both the rendering wait time and the cutting manipulation time. On average, it takes 3.31 minutes to initially render the female mouse model and 3.98 minutes to render the male mouse model.

Table 4. Processing time for model making.
                                                    Female mice                Male mice
  Stack size                                        1024 × 1024, 500 images    1024 × 1024, 500 images
  Average time to create a sub-model from a stack   16.28 minutes              16.2 minutes
  Average number of cuts performed                  14.75 cuts                 19.25 cuts

Discussion

Flexible module-based implementation
Our method is composed of five modules: 3D model reconstruction, sub-model of interest creation, production of 2D image stacks corresponding to the sub-models, image registration, and generic 3D model creation. Each module in this framework has various algorithms that can be applied according to the requirements of a specific scientific study.

For 3D model reconstruction from 2D image stacks, the marching cubes algorithm is the most popular one. Moreover, other reconstruction algorithms have been developed to improve the quality of the contour geometry [23,24]. Therefore, depending on the application requirements, different reconstruction algorithms can be used in our method to create polygonal models. Our cutting tools can be used to process polygonal models created from any reconstruction algorithm.

Efficiency of the cutting approach
In order to automatically or semi-automatically create generic 3D models, different approaches have been proposed. However, those generic model building tools either need perfect individual models [5] or require costly human-computer interactions to retrieve 3D models. In [6], a brain atlas of the honeybee was constructed. The brain structures of the honeybee, such as neuropils and neurons, were manually segmented and labeled. Even with sophisticated algorithms [13] to help users trace regions slice-by-slice quickly and accurately, manually processing thousands of images is still very labor intensive. Therefore, we focused on processing more slices with fewer human-computer interactions. Using a plane to separate a 3D polygon mesh has been used to refine a model created from CT or MRI image stacks [14]. Our approach can use not only a plane but also a box, a sphere, or even a user-defined curve to cut 3D models. More cutting algorithms can be added as well to quickly remove the portion that is of no interest to the users. Hence, with the cutting information, corresponding 2D image stacks can be updated automatically. Our approach can be used to create the desired models very quickly and to automatically register the images. Therefore, our method significantly shortens the generic model building time.

Image registration
Since image registration is an essential step towards creating generic models, numerous techniques have been developed to register corresponding 2D image stacks or 3D models. For some applications, averaged models created from the rigid registration step satisfy the requirements. For example, in [19], an intensity-based rigid image registration algorithm is applied to create a generalized shape image (GSI) which represents average values of the corresponding pixel intensities across all the image stacks. Even though this method yields some shape variations, and poorly registered images create local differences from averages produced with the gold standard (e.g. landmark based Procrustes averaging), it can still be used as a screening tool for initial shape analysis. In [6], iterative averaging is used to register all the original images to the same reference to create an average, and then iteratively re-register the original images to the new average. Affine and non-rigid image registrations are applied in the honeybee brain atlas creation. A subsequent affine registration step removes more misaligned shape differences than applying only the rigid registration and creates a sharper averaged image, but relative shape differences might still remain. Nevertheless, compared with automatic deformable registration, affine registration requires fewer parameters and the computation time is relatively short. Therefore, depending on the requirements of the application, deformable registration can be used repeatedly to further remove the misalignments and create still sharper averaged images.

If the user wants to create an averaged surface model that is closer to the gold standard Procrustes averaged model, a method for jointly registering and averaging 3D surface models, such as the one described in [5], can be used. Anatomical structures are modeled using a quadrangular mesh. The contour in each image slice is detected and then re-sampled using the same number of points. Then a permutation of points on each contour is performed to guarantee that every point in each model corresponds to the same anatomical region as the point with the same index in all other models. The points are finally averaged to create the generic model. The points are indexed on two integer coordinates, one of which represents the ordering of the initial image stacks. However, in order to use this approach, we have to pay attention to the alignment in the direction of slice ordering, since that method assumes that the anatomical structures along this direction are aligned automatically by the scanner. Therefore, rigid, affine or deformable registrations should still be used first to ensure that the anatomical structures along that direction are aligned. Subsequently, the multiple 3D anatomical surface model averaging algorithm [5] can be used to create an averaged surface model. Our package does not provide the quadrangular mesh building algorithm described in [5]; however, our registration programs can still be used to align the anatomical structures along the slice ordering direction.
Information on shape variation
The rigid, affine and non-rigid registration algorithms that we employ allow us to align all the subjects virtually and create the averaged models. Besides the final averaged 3D models, all the transformations applied during the registration step are also available for visualizing shape changes and for numerical morphometrical analysis such as global and local shape comparisons, strain tensor analysis, and modes of variation analysis [3,6,25]. The transformations are all available through ITK [13].

Versor based 3D rigid transformation has six parameters that represent a 3D rotation and a 3D translation. The rotation is specified by a versor quaternion and the translation is represented by a vector. The first three parameters define the versor and the last three parameters represent the translation in each dimension. Those parameters are available for further image analysis. A versor is defined as the quotient between two non-parallel vectors of equal length. Versors represent an orientation change of a vector, and they are a natural representation for rotations in 3D space [13]:

    X' = V * (X - C) + C

In the above equation, V is a versor, X is a point in 3D space, and C is a vector that represents the rigid transformation center. The application of the versor onto the vector (X - C) is different from the regular vector product. However, in ITK, we can convert the versor product into the Euclidean matrix format. The 3D rotation matrix and the translation vector can be calculated from the versor product and can be saved for further analysis.

The 3D affine transformation can be represented as:

    X' = A(X - C) + (T + C)

where X is a vector that represents a point in 3D space, A is a 3 × 3 matrix that represents the affine transformation matrix, C is a vector that represents the transformation center, and T is a vector that represents the 3D translation. X' is the new position of X after the affine transformation. The affine registration from ITK that we utilized consists of rotation, scaling, shearing and translation in three dimensions. There are (3+1) × 3 parameters in this transformation. The first 3 × 3 parameters define A, and the last 3 parameters define the translations for each dimension. The center of the transformation is automatically calculated by the programs and is also available.

B-Spline based non-rigid transformation [3,6,9,13] will generate a dense deformation field where a deformation vector is assigned to every point in the 3D space. The deformation field is available and can be saved in the form of a vector image from ITK. The deformation vectors can be used to further analyze the local shape variations.
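As an illustration of how these transform parameters can be pulled back out of ITK for further analysis, the short sketch below instantiates the rigid and affine transform classes and prints their matrix and parameter forms. It only demonstrates the accessors (no registration is run), and the program itself is our illustrative addition, not part of the authors' package.

```cpp
#include "itkVersorRigid3DTransform.h"
#include "itkAffineTransform.h"
#include <iostream>

int main()
{
  // Versor rigid 3D transform: X' = V*(X - C) + C, six parameters
  // (three versor components, three translations).
  typedef itk::VersorRigid3DTransform<double> RigidTransformType;
  RigidTransformType::Pointer rigid = RigidTransformType::New();
  rigid->SetIdentity();
  // The versor product can be written in Euclidean matrix form:
  std::cout << "Rotation matrix:\n" << rigid->GetMatrix() << std::endl;
  std::cout << "Translation: " << rigid->GetTranslation() << std::endl;

  // Affine transform: X' = A(X - C) + (T + C), (3+1) x 3 = 12 parameters.
  typedef itk::AffineTransform<double, 3> AffineTransformType;
  AffineTransformType::Pointer affine = AffineTransformType::New();
  affine->SetIdentity();
  std::cout << "Affine parameters (A row-major, then translation): "
            << affine->GetParameters() << std::endl;
  return 0;
}
```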
Applicability of the method
Using our cutting tools to build models from 2D image stacks allows beginners in medical fields to learn anatomy intuitively and to enjoy the process of separating the biological structures from the virtual body model before dealing with real subjects. Quickly and accurately creating various 3D averaged models can satisfy the requirements for a large number of models in virtual crash testing, therapy planning, and customizing replacement body parts. Large scale morphological studies that require quantification of anatomical features can be really tedious and might be very detailed and only focused on a few important measurements. Our method facilitates morphological studies by allowing anatomical structures to be measured and compared rapidly and in more detail. These tools help put morphological analysis at a similar level to other studies, such as genetic and molecular studies, where a large amount of data and measurements can be dealt with relatively quickly.

The issue of homology, which refers to biological structures that have the same function, is also addressed through our method. If we measure an average and do quantitative comparisons, we would want to compare the same anatomical region. This requires the two models being compared to be first registered correctly with each other, such that if one area of interest is picked in one model, it refers to the same region in the other model. The iterative registration employed in our approach can, to a large extent, reduce the misalignments. The method we developed leverages the functionalities and technologies of existing toolkits, and the resulting software package allows biologists to build their generic models more quickly and accurately.

As our virtual dissection tools are implemented in Java, they can run both on regular display systems and on the state-of-the-art CAVE Automated Virtual Environment [26], which is a 3D stereo-based 4-wall display system installed at the University of Calgary to provide users with a virtual immersive environment. One of the advantages of using this virtual reality system as a platform for our cutting tools is that users can treat real world objects and virtual world objects in much the same way, which is not possible in a desktop computing environment or even in a single-wall stereo display environment. For example, users can move around in the display environment and view virtual objects from the "inside" such that detailed operations can be easily understood. By harnessing the power of the CAVE and our cutting tools, users have more flexibility, including a wide variety of viewing perspectives and a high degree of freedom to set the locations and orientations of the cutting tools. This is a definite advantage over ordinary desktop computing environments, where objects need to be frequently rotated to perceive their 3D structures.

Conclusions
We have developed a new technique that uses virtual model cutting and iterative image registration to create generic models from 2D image stacks of a group of individuals. Our system allows biologists to build generic 3D models quickly and accurately. However, particularly complicated morphological structures, such as the highly branched and convoluted designs that typify vascular or nervous networks, still pose a challenge to our generalized and enhanced method for generic model creation. It is difficult to use the current manual virtual dissection tools to remove such sub-models from initial, unprocessed scans. More convenient and intuitive manual virtual dissection methods will be developed in our future research. Producing deformable models based on the current tools will also be an area of further development. Those deformable averaged models can then be used for automatically segmenting anatomical structures. More advanced automated segmentation algorithms that utilize generic models will be studied to enable higher throughput analyses of anatomical structures in both medical and more general biological contexts. Quantification of 3D shape variations will also be studied based on our generic model building technique.

Software availability and requirements
The implementation of our method is available for free download at http://www.visualgenomics.ca/~mxiao/research.html. The current version of the software has been tested on Unix Solaris 10 and Windows XP with .NET Framework 3.5. In order to run the program from our jar files, at least Java 1.6 needs to be installed. ImageJ as well as shared (dynamically linked) libraries of VTK and ITK should also be installed. Detailed installation instructions and the user's guide are also available on the project website. VTK, ITK and ImageJ are all open source and freely available software toolkits.
Acknowledgements
This work has been supported by Genome Canada through Genome Alberta; the Alberta Science and Research Authority; Western Economic Diversification; the Governments of Canada and of Alberta through the Western Economic Partnership Agreement; the iCORE/Sun Microsystems Industrial Research Chair program; the Alberta Network for Proteomics Innovation; and the Canada Foundation for Innovation. We thank Dr. Fred Bookstein for his comments on this project. We thank Wei Liu for scanning the mice. We thank Megan Smith for her comments and her help with editing the paper. We also thank the reviewers for their comments.

Author details
Correspondence: mxiao@ucalgary.ca. Sun Center of Excellence for Visual Genomics, Department of Biochemistry and Molecular Biology, Faculty of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, T2N 4N1, Canada. Morphometrics Laboratory, Department of Cell Biology and Anatomy, Faculty of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, T2N 4N1, Canada.

Authors' contributions
MX, JS, OEMP, EJS, BH and CWS participated in writing the manuscript and designing the technique. MX developed the computing framework. BH and CWS directed the research. All authors read and approved the final version of the manuscript.

Competing interests
The authors declare that they have no competing interests.

Received: 5 August 2009. Accepted: 8 February 2010. Published: 8 February 2010.

References
1. Thompson PM, Mega MS, Narr KL, Sowell ER, Blanton RE, Toga AW: Brain image analysis and atlas construction. In Handbook of Medical Imaging: Medical Image Processing and Analysis. SPIE Press; Sonka M, Fitzpatrick JM (Eds) 2000, 2:1063-1119.
2. Small CG: The Statistical Theory of Shape. New York: Springer 1996.
3. Olafsdottir H, Darvann TA, Hermann NV, Oubel E, Ersboll BK, Fangi AF, Larsen P, Perlyn CA, Morriss-Key GM, Kreiborg S: Computational mouse atlases and their application to automatic assessment of craniofacial dysmorphology caused by the Crouzon mutation Fgfr2C342Y. Journal of Anatomy 2007, 211:37-52.
4. Barratt DC, Chan CSK, Edwards PJ, Penney GP, Slomczykowski M, Carer TJ, Hawkes DJ: Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Medical Image Analysis 2008, 12:258-374.
5. Maschino E, Maurin Y, Andrey P: Joint registration and averaging of multiple 3D anatomical surface models. Computer Vision and Image Understanding 2006, 1:16-30.
6. Brandt R, Rohlfing T, Rybak J, Krofczik S, Maye A, Westerhoff M, Hege HC, Menzel R: Three-dimensional average-shape atlas of the honeybee brain and its applications. The Journal of Comparative Neurology 2005, 492:1-19.
7. Avants B, Gee JC: Shape averaging with diffeomorphic flows for atlas creation. Proceedings of the IEEE International Symposium on Biomedical Imaging, April 2004, Arlington, VA 2004, 1:595-598.
8. Argall BD, Saad ZS, Beauchamp MS: Simplified intersubject averaging on the cortical surface using SUMA. Human Brain Mapping 2006, 27:14-27.
9. Rueckert D, Frangi AF, Schnabel JA: Automatic construction of 3D statistical deformation models using non-rigid registration. Lecture Notes in Computer Science: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2001. Berlin Heidelberg: Springer; Niessen WJ, Viergever MA (Eds) 2001, 2208:77-84.
10. Rajamani KT, Styner MA, Talib H, Zheng G, Nolte LP, Ballester MAG: Statistical deformable bone models for robust 3D surface extrapolation from sparse data. Medical Image Analysis 2007, 11:99-109.
11. Schmutz B, Reynolds KJ, Slavotinek JP: Development and validation of a generic 3D model of the distal femur. Computer Methods in Biomechanics and Biomedical Engineering 2006, 5:305-312.
12. Zachow S, Zilske M, Hege HC: 3D reconstruction of individual anatomy from medical image data: segmentation and geometry processing. Proceedings of the CADFEM Users Meeting, Dresden, Germany 2007.
13. Yoo T (Ed): Insight into Images. AK Peters 2004.
14. Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G: User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 2006, 3:1116-1128.
15. Chen T, Metaxas D: A hybrid framework for 3D medical image segmentation. Medical Image Analysis 2005, 6:547-565.
16. Xiao M, Soh J, Meruvia-Pastor O, Osborn D, Lam N, Hallgrímsson B, Sensen CW: An efficient virtual dissection tool to create generic models for anatomical atlases. Studies in Health Technology and Informatics 2009, 142:426-428.
17. Schroeder W, Martin K, Lorensen B: The Visualization Toolkit. Prentice-Hall.
18. Rasband WS: ImageJ. U.S. National Institutes of Health, Bethesda, Maryland, USA 1997. http://rsb.info.nih.gov/ij/.
19. Kristensen E, Parsons TE, Hallgrímsson B, Boyd SK: A novel 3D image-based morphological method for phenotypic analysis. IEEE Transactions on Biomedical Engineering 2008, 12:2826-2831.
20. Dice LR: Measures of the amount of ecologic association between species. Ecology 1945, 26:297-302.
21. Guimond A, Meunier J, Thirion JP: Average brain models: a convergence study. Computer Vision and Image Understanding 2000, 2:192-210.
22. Guimond A, Meunier J, Thirion JP: Automatic computation of average brain models. Lecture Notes in Computer Science: Medical Image Computing and Computer-Assisted Intervention - MICCAI'98. Berlin Heidelberg: Springer 1998, 1496:631-640.
23. Schaefer S, Warren J: Dual marching cubes: primal contouring of dual grids. Proceedings of the 12th Pacific Conference on Computer Graphics and Applications, October 2004, Seoul, Korea 2004, 70-76.
24. Schaefer S, Ju T, Warren J: Manifold dual contouring. IEEE Transactions on Visualization and Computer Graphics 2007, 3:610-619.
25. Bayly PV, Black EE, Pedersen RC, Leister EP, Genin GM: In vivo imaging of rapid deformation and strain in an animal model of traumatic brain injury. Journal of Biomechanics 2006, 6:1086-1095.
26. Sensen CW: Using CAVE® technology for functional genomics studies. Diabetes Technology & Therapeutics 2002, 4:867-871.

Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2342/10/5/prepub

doi:10.1186/1471-2342-10-5
Cite this article as: Xiao et al.: Building generic anatomical models using virtual model cutting and iterative registration. BMC Medical Imaging 2010, 10:5.

Publisher site
See Article on Publisher Site

Abstract

Background: Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. Methods: The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub- volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. Results: After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and surface model for the generic 3D model are created at the final step. Conclusions: Our method is very flexible and easy to use such that anyone can use image stacks to create models and retrieve a sub-region from it at their ease. Java-based implementation allows our method to be used on various visualization systems including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately. Background must be a single averaged model representing all indivi- Spatial information of biological structures has been dual 3D models in the same population of a study used to analyze their functions and to relate their shape [5,11]. An averaged 3D model is a commonly used form changes to various genetic parameters [1-4]. In particu- of a generic 3D model. The creation of an averaged lar, using 3D generic models to statistically analyze model captures information that can be exploited in sta- trends in biological structure changes is an important tistical analysis of real populations. By comparing aver- tool in morphometrics research [1,2,4-10]. In order to aged models and dispersion around them, anatomical be suitable for statistical analysis, a generic 3D model differences can be quantified across groups that differ in some underlying causal or exploratory factors, such as genetics, gender, and drug treatment [3]. 
The compari- * Correspondence: mxiao@ucalgary.ca Sun Center of Excellence for Visual Genomics, Department of Biochemistry sons can be made between ‘static’ morphological states, and Molecular Biology, Faculty of Medicine, University of Calgary, 3330 where the subjects for comparison are at the same Hospital Drive NW, Calgary, T2N 4N1, Canada © 2010 Xiao et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Xiao et al. BMC Medical Imaging 2010, 10:5 Page 2 of 15 http://www.biomedcentral.com/1471-2342/10/5 developmental state or they can be between ‘dynamic’ entail specifically tailored solutions that combine and states, where comparisons are made between various integrate different 3D segmentation algorithms [15] that stages of the subject’s growth. Therefore, a technique may still necessitate manual segmentation on each 2D for creating high throughput 3D generic models is image slice. To redress such persistent drawbacks, we needed to collect and manage large numbers of subjects have developed a generalized virtual dissection-based quickly and efficiently. Such a technique will enable method for creating generic models. In comparison to researchers to discover a wide range of traits to their our previous virtual dissection technique [16], the interest in both natural and clinical settings. Generic 3D method now allows user-define curves for indicating models can also be used in automatic segmentation [1], cutting surfaces and employs enhanced iterative registra- medical education, virtual crash testing, therapy plan- tion to better handle shape variations. In addition, the ning and customizing replacement body parts [11,12]. resulting software is now publicly available. We show Hence, in medical and biological studies, 3D generic that the creation of an averaged model that captures models built for a range of populations are in high spatial information exploitable in statistical analyses of demand. organ shape is facilitated by coupling our generalized In order to create valid 3D generic models from 2D segmentation method with existing automatic image image stacks, more attention should be paid to two registration algorithms [13]. essential steps - image segmentation and image registra- tion. Image registration is the process to find a 3D Methods transformation that can map the same anatomical Materials region from one subject into another one. This process 2D image stacks of mice whole-body micro-computed is essential in clinical and research applications because tomography (μ-CT) scanswereprovidedbythe Mor- researchers often need to compare the same anatomical phometrics Laboratory at theUniversityofCalgary. region scanned using different modalities or at different Eight male and eight female laboratory mice from the time points [13]. Image segmentation is needed when same strain (AWS) were scanned. The female mice we try to retrieve the spatial information of certain bio- were 54 to 61 days old and weighed 16 to 21 grams; logical structures after applying in vivo imaging technol- the male mice were 61 days old and weighed 20 to 25 ogiessuch asMRI.Thisstepisgenerally indispensable grams. All individuals were scanned at a resolution of because 3D image stacks generated from in-vivo scan- 35 μm. 
Each slice of the volumetric dataset is 1024 × ners usually contain a large amount of superfluous 1024 pixels and the intensity of each pixel ranges from information that is irrelevant to immediate diagnostic or 0 to 255 (Figure 1). The total number of images in a therapeutic needs. stack ranges from 2100 to 2400. The process of creat- With the tremendous advancements in medical ima- ing generic 3D models is illustrated by describing the ging technologies such as CT, PET, MRI, and fMRI, we process of creating the 3D generic left mandible model are now able to capture images of biological structures using our method. It should be noted, however, that and their functions more clearly than ever before. Addi- the left mandible was picked solely for the purpose tionally, advanced technologies from other fields such as of illustration and our method can be used for creating computer vision, computer graphics, image processing a 3D generic model of other anatomical structures and artificial intelligence have been used to analyze 2D as well. medical images of various modalities [1]. However, due to the complexity of biological structures and their Overview of the method shape information overlaying on medical images, it is The method pipeline contains the following major steps: still an exceptionally difficult task to quickly and accu- (i) scanning subjects to obtain image stacks; (ii) creating rately create 3D generic models for a population of a individual 3D models from the stacks; (iii) cutting each study. model to generate a sub-model of the user’sinterest; Due to the difficulties with automating the segmenta- (iv) making image stacks that contain only the informa- tion task, enhanced manual segmentation software is tion pertaining to the sub-models; (v) iteratively register- still widely used. Various image processing algorithms ing the corresponding new 2D image stacks from the have been produced to minimize user interactions and previous step; (vi) averaging the newly created sub-mod- increase segmentation accuracy [14]. However, the cur- els based on intensity to produce the generic model rent enhanced manual segmentation approaches are still from all the individual sub-models. All the algorithms quitelaborious;manytimes it requires a well-trained are implemented using Java and C++ based on function- user to interact with every 2D image slice. Therefore, in alities from open source toolkits VTK (Visualization order to achieve accurate 3D reconstruction of a region, Toolkit [17]), ITK (Insight Segmentation and Registra- structure, or tissue of interest [6], it is necessary to tion Toolkit [13]) and ImageJ [18]. Both volumetric data Xiao et al. BMC Medical Imaging 2010, 10:5 Page 3 of 15 http://www.biomedcentral.com/1471-2342/10/5 Figure 1 A slice of a 2D image stack obtained from a whole body scan. and surface model for the generic 3D model are created 3D skull model until the desired separation of the sub- at the final step. model is achieved. Our cutting instruments can be a plane, ball, box, or 3D model reconstruction user-defined curve. The planes, balls and boxes are all Sincethe imagingdatawehaveare mice whole-body virtual models that can be manipulated interactively by scans, the information of all the biological structures are using the computer mouse. As illustrated in Figure 3, contained in the image stacks. The sub-model of our the plane can be rotated, zoomed in and out, and trans- interest here is the left mandible. 
Sub-model of interest creation
Our reconstructed 3D model is a representation of the whole mouse skull. In order to retrieve the sub-model, our custom-developed cutting tools are used to cut the 3D skull model until the desired separation of the sub-model is achieved.

Our cutting instruments can be a plane, ball, box, or user-defined curve. The planes, balls and boxes are all virtual models that can be manipulated interactively using the computer mouse. As illustrated in Figure 3, the plane can be rotated, zoomed in and out, and translated, while the arrow shows the normal of the plane. Therefore users can decide where to set the plane to remove any portion that is of no interest to them. The ball and the box can also be rotated, scaled and translated using the computer mouse to remove the parts that are of no interest to the users.

Figure 3. Using various cutting tools to produce a desired sub-model (left mandible).

Users can also simulate a cutting curve by putting a series of dots on the model through computer mouse double clicks, as Figure 4 shows. Users can manipulate the model by rotating, translating, or zooming in or out to observe the area that they are interested in. The order in which the dots are placed is significant, as they are used as the data points for interpolating a best-fitting curve. If the dots are put in counterclockwise order, the part of the model that is above or to the left of the simulated curve is removed; otherwise the part below or to the right is removed. If a closed curve is simulated, the portion enclosed by the closed curve is removed. The cutting tools are implemented using functionalities from VTK.

Figure 4. User defined cutting curve. Users can choose to remove irregular sections from the model by using a series of dots to indicate the intended cutting curve.
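As an illustration of the simplest of these tools, the sketch below clips a polygonal model against a plane using VTK. In the interactive tool the plane origin and normal come from mouse manipulation of the plane widget; here they are hard-coded placeholder values.

```cpp
#include <vtkSmartPointer.h>
#include <vtkPolyDataReader.h>
#include <vtkPlane.h>
#include <vtkClipPolyData.h>
#include <vtkPolyDataWriter.h>

int main()
{
  // Load a previously reconstructed surface model (placeholder file name).
  auto reader = vtkSmartPointer<vtkPolyDataReader>::New();
  reader->SetFileName("mouse_skull_surface.vtk");

  // Define the cutting plane; in the interactive tool these values come
  // from the plane widget manipulated with the mouse.
  auto plane = vtkSmartPointer<vtkPlane>::New();
  plane->SetOrigin(10.0, 0.0, 0.0);
  plane->SetNormal(1.0, 0.0, 0.0);

  // Keep only the part of the mesh on the positive side of the plane.
  auto clipper = vtkSmartPointer<vtkClipPolyData>::New();
  clipper->SetInputConnection(reader->GetOutputPort());
  clipper->SetClipFunction(plane);

  auto writer = vtkSmartPointer<vtkPolyDataWriter>::New();
  writer->SetInputConnection(clipper->GetOutputPort());
  writer->SetFileName("sub_model.vtk");
  writer->Write();
  return 0;
}
```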
Creating corresponding 2D image portions of the sub-model
While the users are cutting the model, all the cuts are recorded: the coordinates used by the cutting tools, such as the plane's center and normal, the sphere's center and radius, the planes that compose the box, and the dots in the user-defined cutting curve, are written to a text file. After the cutting process is finished, the intensities of the pixels in the image stack are updated according to the cutting information. Intensities of pixels that correspond to the model stay the same and the rest are set to 0. After this process is finished, we obtain a new image stack that contains only the data for the sub-model. The above steps are repeated to process all the mice image stacks to create the sub-models and the new 2D image stacks. The resulting 2D image stacks that contain only the sub-model information (see Figure 5) are registered and the generic model for the sub-model (the left mandible) is created. The production and averaging of 2D image portions are performed using functionalities in ImageJ.

Figure 5. Updated 2D image stack. Part of an updated 2D image stack showing slices 160, 170, 180, and 190, respectively (from left to right). After the cutting process, 2D image stacks are updated using the information on the cutting tools used. 2D image stacks that contain only information about the sub-model of interest are created automatically.

Iterative image registration
The following registration algorithms are used.
1. Rigid 3D image registration. In order to align the entire set of sub-models into the same space automatically, an intensity-based rigid 3D registration algorithm which uses a mean square metric, a linear interpolator, a versor rigid 3D transform and a versor rigid 3D transform optimizer inside ITK is used to register the images (a minimal ITK sketch of this step is shown after this list).
2. Affine 3D image registration. Due to the variations of each individual sub-model, rigid 3D image registration creates local misalignments, and the averaged model created based only on rigid image registration might not be an average representative. Therefore, affine 3D image registration is also available in our package to further align the models. An intensity-based affine 3D registration algorithm which uses a mean square metric, a linear interpolator, an affine transform and a regular step gradient descent optimizer inside ITK is applied for affine registration.
3. Non-rigid (deformable) image registration. The global affine transformation from the previous step might leave some remaining local shape variations. Therefore, in order to sharpen the blurry average images, a non-rigid image registration can also be used after step 2. An intensity-based deformable 3D registration algorithm which uses a mean square metric, a linear interpolator, a B-spline based transform and a LBFGS (limited memory Broyden-Fletcher-Goldfarb-Shanno update) optimizer inside ITK is applied for further deformable image registration.
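As an illustration of the rigid step (item 1), the ITK components named above can be assembled roughly as follows. The pixel type, file handling and optimizer settings are illustrative assumptions rather than the exact configuration used in our package.

```cpp
#include <iostream>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegistrationMethod.h"
#include "itkMeanSquaresImageToImageMetric.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkVersorRigid3DTransform.h"
#include "itkVersorRigid3DTransformOptimizer.h"

int main(int argc, char* argv[])
{
  using ImageType = itk::Image<unsigned char, 3>;
  using TransformType = itk::VersorRigid3DTransform<double>;
  using OptimizerType = itk::VersorRigid3DTransformOptimizer;
  using MetricType = itk::MeanSquaresImageToImageMetric<ImageType, ImageType>;
  using InterpolatorType = itk::LinearInterpolateImageFunction<ImageType, double>;
  using RegistrationType = itk::ImageRegistrationMethod<ImageType, ImageType>;

  // Fixed image: the reference stack; moving image: one subject's stack.
  auto fixedReader = itk::ImageFileReader<ImageType>::New();
  auto movingReader = itk::ImageFileReader<ImageType>::New();
  fixedReader->SetFileName(argv[1]);
  movingReader->SetFileName(argv[2]);
  fixedReader->Update();
  movingReader->Update();

  auto transform = TransformType::New();
  auto optimizer = OptimizerType::New();
  auto metric = MetricType::New();
  auto interpolator = InterpolatorType::New();
  auto registration = RegistrationType::New();

  // Wire the components named in the text into the registration framework.
  registration->SetMetric(metric);
  registration->SetOptimizer(optimizer);
  registration->SetTransform(transform);
  registration->SetInterpolator(interpolator);
  registration->SetFixedImage(fixedReader->GetOutput());
  registration->SetMovingImage(movingReader->GetOutput());
  registration->SetFixedImageRegion(fixedReader->GetOutput()->GetBufferedRegion());
  registration->SetInitialTransformParameters(transform->GetParameters());

  // Illustrative optimizer settings (not taken from the paper).
  optimizer->SetMaximumStepLength(1.0);
  optimizer->SetMinimumStepLength(0.01);
  optimizer->SetNumberOfIterations(200);

  registration->Update();

  transform->SetParameters(registration->GetLastTransformParameters());
  std::cout << "Final parameters: " << transform->GetParameters() << std::endl;
  return 0;
}
```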
We use an iterative image registration protocol similar to the one mentioned in [6] (see Figure 6 for a flow chart of the process).
1. We randomly pick a subject from the female group as a reference and register every image stack to this reference stack using 3D rigid registration. After each registration step, the intensities of the images are binarized such that pixels with intensity 255 belong to the model and pixels with intensity 0 belong to the background. Then we average corresponding pixel intensities from all the stacks to create the averaged image stack [19]. The same registration process is applied to the male group.
2. Averaged models are created from the previous step by using the global median of the pixel intensities as the threshold value for binarizing the averaged image stack. An affine transformation based image registration is then applied to all the images that have been processed by the rigid transformation from the previous step, in the same way as described in the previous step, and new averaged image stacks are created.
3. The previous step is repeated, but this time B-Spline based deformable image registration is applied to all the images that have been processed by the affine transformation from the previous step.
4. The previous step can be applied repeatedly to all the images that have been processed by the deformable transformation from the previous step in order to achieve more accurate registrations.

Figure 6. Iterative image registration. The reference stack is iteratively refined by performing a series of 3D registration algorithms on each stack: rigid 3D image registration, affine 3D image registration, and non-rigid deformable 3D image registration. The non-rigid registration step can be repeated to achieve more accurate registration.

Intensity based image averaging
After the iterative image registration step, all image stacks of the sub-models (the left mandibles) are registered. At this point, we can use the intensity based image averaging technique described in [19]:

I_average = (1/n) · Σ_{i=1}^{n} I_i

where I_i is the intensity of a given pixel in the i-th registered stack and n is the number of subjects in the group. The global median of the averaged image intensities is used as the threshold when applying the marching cube algorithm to the averaged image stacks [19], extracting the generic left mandible model that represents the average shape of all the left mandibles across all the subjects in the same population.
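A minimal sketch of this averaging and median-based binarization over raw voxel buffers is shown below (plain C++). It assumes all stacks are already registered, binarized to 0/255 and of equal size, and it does not exclude background voxels when computing the median, a detail the text leaves open.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Voxel-wise average of n binarized stacks (values 0 or 255), followed by
// thresholding at the global median of the averaged intensities, as done
// before extracting the generic surface with marching cubes.
std::vector<unsigned char> averageAndThreshold(
    const std::vector<std::vector<unsigned char>>& stacks)
{
  const std::size_t numVoxels = stacks.front().size();
  std::vector<double> average(numVoxels, 0.0);

  for (const auto& stack : stacks)
    for (std::size_t v = 0; v < numVoxels; ++v)
      average[v] += static_cast<double>(stack[v]) / stacks.size();

  // Global median of the averaged intensities serves as the threshold.
  // (Assumption: all voxels, including background, enter the median.)
  std::vector<double> sorted(average);
  std::nth_element(sorted.begin(), sorted.begin() + numVoxels / 2, sorted.end());
  const double median = sorted[numVoxels / 2];

  std::vector<unsigned char> binarized(numVoxels, 0);
  for (std::size_t v = 0; v < numVoxels; ++v)
    binarized[v] = (average[v] >= median) ? 255 : 0;
  return binarized;
}
```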
Results

Generic model building
We have developed a generalized virtual dissection-based method for the creation of generic models from 2D image stacks of a group of individuals. To illustrate our generic model creation technique, whole-body scans of eight female mice and eight male mice are used to create averaged 3D models of the left mandible. For each subject, the left mandible 3D model is created using our cutting tools and the corresponding 2D image stack that contains only information of the left mandible is also generated.

Validation of the iterative registration
Once we have 16 left mandible models, we register the image stacks for both male and female mice. Corresponding pixels in the images of the female/male group are averaged to create an averaged image stack. Within the averaged image stack, blurry image areas result from misaligned sections. Therefore, the sharper the averaged images are, the better the registration process is. We use the ratio of the number of pixels with intensity 255 to the number of pixels with non-zero intensity to measure the performance of the registration process (see Table 1). The bigger the ratio is, the better the models are aligned.

Table 1. Comparison of image registration accuracy. Each entry is the ratio of the number of pixels with intensity 255 to the number of pixels with non-zero intensity after registration (columns: versor based 3D rigid / affine 3D / B-Spline deformable 3D registration).

Female group (reference subject):
F2: 0.4774 / 0.5732 / 0.6241
F3: 0.4819 / 0.5761 / 0.6470
F4: 0.4986 / 0.5842 / 0.6658
F5: 0.4836 / 0.5723 / 0.6458
F6: 0.4478 / 0.5499 / 0.6598
F7: 0.4791 / 0.5570 / 0.6307
F8: 0.4781 / 0.5618 / 0.6406
F9: 0.4861 / 0.5988 / 0.6546

Male group (reference subject):
M2: 0.5534 / 0.5954 / 0.6219
M3: 0.5300 / 0.5904 / 0.6218
M4: 0.5350 / 0.5871 / 0.6593
M5: 0.5400 / 0.5939 / 0.6452
M6: 0.5286 / 0.5844 / 0.6326
M7: 0.5380 / 0.5899 / 0.6347
M8: 0.5285 / 0.5960 / 0.6323
M9: 0.5332 / 0.5912 / 0.6365

As illustrated in Figure 7, if only 3D rigid registration is applied, we can clearly observe misaligned areas. Once an affine transformation based registration is applied, fewer misaligned areas can be identified. From the ratios listed in Table 1, we can conclude that, after several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries.

Figure 7. Misalignments after 3D rigid registration and affine registration. Two models shown in different colors (gray and cyan) are superimposed. On the top, after 3D rigid registration, there are obvious misalignments on the front of the mandible and towards the back of the mandible. On the bottom, after the affine registration, there are fewer misaligned areas.
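The alignment ratio reported in Table 1 reduces to a simple count over the averaged stack; a possible implementation is sketched below (plain C++ over a raw voxel buffer of averaged intensities).

```cpp
#include <cstddef>
#include <vector>

// Alignment quality: fraction of voxels where all registered, binarized
// stacks agree (average == 255) among all voxels covered by at least one
// stack (average > 0).
double alignmentRatio(const std::vector<double>& averagedStack)
{
  std::size_t fullOverlap = 0, anyCoverage = 0;
  for (double v : averagedStack) {
    if (v > 0.0) {
      ++anyCoverage;
      if (v >= 255.0) ++fullOverlap;   // all subjects contribute here
    }
  }
  return anyCoverage == 0 ? 0.0
                          : static_cast<double>(fullOverlap) / anyCoverage;
}
```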
If we choose different initial reference subjects, will the averaged models be very different? We test this effect by choosing different subjects as the initial reference subjects to create the averaged models. We generate multiple averaged models, each using a different initial stack as the reference stack. For example, in Table 2, "Average F2" means the averaged female model created using female number 2 (F2) as the reference. After producing the different averaged models, we register all of them with respect to a neutral averaged model to make their comparison meaningful and to avoid any potential bias. We used one male averaged model to register all the female group averaged models. Similarly, we registered all the male group averaged models with one female averaged model.

The Dice index [20] is used to evaluate the similarities between averaged models starting from different reference subjects, after the additional registration procedure to facilitate direct comparison. As shown in Table 2, the similarity measures range from 0.97 to 0.98 among the different averaged models. We believe that the remaining 0.02 to 0.03 differences are due to the systematic error caused by the registration process. For the female mice group, the mean Dice index is 0.976464, the standard deviation is 0.001489 and the coefficient of variation is 0.001524. For the male mice group, the mean Dice index is 0.9789, the standard deviation is 0.000698 and the coefficient of variation is 0.000713. Therefore, we can see that in this case, starting from a different reference subject will not affect the averaged models.

Table 2. Dice index to evaluate the similarities between two averaged models created from different initial references. Each row lists the Dice indices against the subsequent models in order (e.g., the Average F2 row gives its similarity to Average F3, F4, ..., F9); diagonal entries (a model compared with itself) equal 1 and are omitted.

Female group:
Average F2: 0.9768, 0.9745, 0.9795, 0.9767, 0.9786, 0.9775, 0.9753
Average F3: 0.9768, 0.9776, 0.9760, 0.9762, 0.9770, 0.9757
Average F4: 0.9747, 0.9745, 0.9744, 0.9757, 0.9742
Average F5: 0.9770, 0.9789, 0.9779, 0.9759
Average F6: 0.9782, 0.9774, 0.9748
Average F7: 0.9779, 0.9748
Average F8: 0.9765

Male group:
Average M2: 0.9802, 0.9776, 0.9780, 0.9784, 0.9785, 0.9800, 0.9787
Average M3: 0.9785, 0.9792, 0.9791, 0.9787, 0.9796, 0.9796
Average M4: 0.9794, 0.9791, 0.9785, 0.9789, 0.9781
Average M5: 0.9790, 0.9787, 0.9793, 0.9789
Average M6: 0.9792, 0.9796, 0.9776
Average M7: 0.9800, 0.9778
Average M8: 0.9790

Brandt et al. [6] tested the average shape property of the honeybee brain. They used the residual non-rigid deformation necessary to map the subjects' coordinate systems onto one another after they have been normalized with respect to position and size. They found that the averaged honeybee brain model created using the iterative registration method is indeed a reasonable approximation of a shape centroid of the population. We measure the RMSE (root mean square error) of voxels between every two models and between every model and the averaged model. As shown in Table 3, the RMSE between every model and the averaged model is smaller than the RMSE between that model and every other model.

Table 3. Root mean square error (RMSE) between models. Each row lists the RMSE against the subsequent models in order, followed by the RMSE against the averaged model.

Female group:
F2: 16.93, 17.82, 18.21, 19.08, 18.75, 19.52, 19.70; averaged model: 14.99
F3: 17.54, 17.52, 17.74, 18.29, 18.58, 18.78; averaged model: 15.05
F4: 18.01, 19.44, 17.97, 19.06, 18.84; averaged model: 16.32
F5: 17.87, 16.58, 18.05, 15.92; averaged model: 15.51
F6: 18.20, 17.83, 19.32; averaged model: 16.10
F7: 18.18, 16.99; averaged model: 15.47
F8: 18.76; averaged model: 16.29
F9: averaged model: 17.02

Male group:
M2: 16.62, 17.00, 17.20, 17.68, 16.68, 16.97, 16.80; averaged model: 13.89
M3: 16.10, 15.62, 16.33, 16.01, 16.36, 15.42; averaged model: 12.96
M4: 17.59, 16.40, 17.17, 16.39, 17.36; averaged model: 14.56
M5: 18.34, 17.08, 16.35, 15.78; averaged model: 14.50
M6: 16.45, 17.73, 17.56; averaged model: 14.85
M7: 17.37, 15.95; averaged model: 13.41
M8: 17.11; averaged model: 14.21
M9: averaged model: 13.71

Combining our RMSE computation and the test from Brandt et al. [6], we believe that using the iterative registration algorithm [6] gives us a practical average model that captures the spatial information of the population. Our method is very flexible and easy to use, such that anyone can use image stacks to create models and retrieve a sub-region from them at their ease. The image registration time varies depending on the image size and the desired accuracy of the registration.
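Both similarity measures used above have straightforward voxel-wise forms. The sketch below computes the Dice index of two binarized volumes and the RMSE between two volumes; it assumes equally sized buffers and the 0/255 convention used throughout this paper.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Dice index of two binarized volumes: 2|A and B| / (|A| + |B|).
double diceIndex(const std::vector<unsigned char>& a,
                 const std::vector<unsigned char>& b)
{
  std::size_t inA = 0, inB = 0, inBoth = 0;
  for (std::size_t v = 0; v < a.size(); ++v) {
    if (a[v] == 255) ++inA;
    if (b[v] == 255) ++inB;
    if (a[v] == 255 && b[v] == 255) ++inBoth;
  }
  return (inA + inB) == 0 ? 0.0
                          : 2.0 * inBoth / static_cast<double>(inA + inB);
}

// Root mean square error between corresponding voxels of two volumes.
double rmse(const std::vector<unsigned char>& a,
            const std::vector<unsigned char>& b)
{
  double sum = 0.0;
  for (std::size_t v = 0; v < a.size(); ++v) {
    const double d = static_cast<double>(a[v]) - static_cast<double>(b[v]);
    sum += d * d;
  }
  return std::sqrt(sum / a.size());
}
```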
Binarization problem
Many studies have considered complicated organs such as the brain [4,9,10,21,22]. Inside the brain, different sub-regions need to be considered during the registration process. Therefore, if one uniform intensity value is used to represent the organ, homogeneous tissue mapping might not be available. However, in our study we consider organs with homogeneous intensities and structures. Therefore, we can use only one intensity value to represent the model and use it for registration and model averaging. This reduces the registration time and increases the registration accuracy.

We used a Windows PC with dual CPUs to create all the left mandible models. The machine has two 2 GHz CPUs and 2 GB of memory. In order to retrieve one left mandible model, we need to process an image stack of size 1024 × 1024 × 500. The current machine setup cannot process this image stack at one time; therefore, we process the image stack in three consecutive parts. On average we use 16.28 minutes and 14.75 cuts to retrieve a complete left mandible for the female mice group, and 16.2 minutes and 19.25 cuts for the male mice group (see Table 4). These times include both the waiting time for rendering and the cutting manipulation time. On average, it takes 3.31 minutes to render the female mouse model and 3.98 minutes to render the male mouse model initially.

Table 4. Processing time for model making. Each stack has an image size of 1024 × 1024 pixels and contains 500 images.

Average time to create a sub-model from a stack: 16.28 minutes (female mice), 16.2 minutes (male mice)
Average number of cuts performed: 14.75 (female mice), 19.25 (male mice)

Discussion

Flexible module-based implementation
Our method is composed of five modules: 3D model reconstruction, sub-model of interest creation, production of 2D image stacks corresponding to the sub-models, image registration, and generic 3D model creation. Each module in this framework has various algorithms that can be applied according to the requirements of a specific scientific study.

For 3D model reconstruction from 2D image stacks, the marching cubes algorithm is the most popular one. Moreover, other reconstruction algorithms have been developed to improve the quality of the contour geometry [23,24]. Therefore, depending on the application requirements, different reconstruction algorithms can be used in our method to create polygonal models. Our cutting tools can be used to process polygonal models created from any reconstruction algorithm.

Efficiency of the cutting approach
In order to automatically or semi-automatically create generic 3D models, different approaches have been proposed. However, those generic model building tools either need perfect individual models [5] or require costly human-computer interactions to retrieve 3D models. In [6], a brain atlas of the honeybee was constructed. The brain structures of the honeybee, such as neuropils and neurons, were manually segmented and labeled. Even with sophisticated algorithms [13] to help users trace regions slice-by-slice quickly and accurately, manually processing thousands of images is still very labor intensive. Therefore, we focused on processing more slices with fewer human-computer interactions. Using a plane to separate a 3D polygon mesh has been used to refine a model created from CT or MRI image stacks [14]. Our approach can use not only a plane but also a box, a sphere, or even a user-defined curve to cut 3D models. More cutting algorithms can be added as well to quickly remove the portion that is of no interest to the users. Hence, with the cutting information, corresponding 2D image stacks can be updated automatically. Our approach can be used to create the desired models very quickly and to register the images automatically. Therefore, our method significantly shortens the generic model building time.
Image registration
Since image registration is an essential step towards creating generic models, numerous techniques have been developed to register corresponding 2D image stacks or 3D models. For some applications, averaged models created from the rigid registration step satisfy the requirements. For example, in [19], an intensity-based rigid image registration algorithm is applied to create a generalized shape image (GSI) which represents average values of the corresponding pixel intensities across all the image stacks. Even though this method yields some shape variations, and images that are not well registered create local differences from averaged images obtained with the gold standard (e.g. landmark based Procrustes averaging), it can still be used as a screening tool for initial shape analysis. In [6], iterative averaging is used to register all the original images to the same reference to create an average, and then to iteratively re-register the original images to the new average. Affine and non-rigid image registrations are applied in the honeybee brain atlas creation. A subsequent affine registration step removes more misaligned shape differences than applying only the rigid registration and creates a sharper averaged image, but relative shape differences might still remain. Nevertheless, compared with automatic deformable registration, affine registration requires fewer parameters and the computation time is relatively short. Therefore, depending on the requirements of the application, deformable registration can be used repeatedly to further remove the misalignments and create still sharper averaged images.

If the user wants to create an averaged surface model that is closer to the gold standard Procrustes averaged model, a method for jointly registering and averaging 3D surface models, such as the one described in [5], can be used. Anatomical structures are modeled using a quadrangular mesh. The contour in each image slice is detected and then re-sampled using the same number of points. Then a permutation of points on each contour is performed to guarantee that every point in each model corresponds to the same anatomical region as the point with the same index in all other models. The points are finally averaged to create the generic model. The points are indexed on two integer coordinates, one of which represents the ordering of the initial image stacks. However, in order to use this approach, we have to pay attention to the alignment in the direction of slice ordering, since that method assumes that the anatomical structures along this direction are aligned automatically by the scanner. Therefore, rigid, affine or deformable registrations should still be used first to ensure that the anatomical structures along this direction are aligned. Subsequently, the multiple 3D anatomical surface model averaging algorithm [5] can be used to create an averaged surface model. Our package does not provide the quadrangular mesh building algorithm described in [5]; however, our registration programs can still be used to align the anatomical structures along the slice ordering direction.

Information on shape variation
The rigid, affine and non-rigid registration algorithms that we employ allow us to align all the subjects virtually and create the averaged models. Besides the final averaged 3D models, all the transformations applied during the registration step are also available for visualizing shape changes and for numerical morphometrical analysis such as global and local shape comparisons, strain tensor analysis, and modes of variation analysis [3,6,25]. The transformations are all available through ITK [13].

Versor based 3D rigid transformation has six parameters that represent a 3D rotation and a 3D translation. The rotation is specified by a versor (a unit quaternion) and the translation is represented by a vector. The first three parameters define the versor and the last three parameters represent the translation in each dimension. Those parameters are available for further image analysis. A versor is defined as the quotient between two non-parallel vectors of equal length. Versors represent an orientation change of a vector, and they are a natural representation for rotations in 3D space [13]. The transform is applied as

X' = V ⋆ (X − C) + C

In the above equation, V is a versor, X is a point in 3D space, and C is a vector that represents the rigid transformation center. The application of the versor onto the vector (X − C), denoted by ⋆, is different from the regular vector product. However, in ITK, we can convert the versor product into the Euclidean matrix format. The 3D rotation matrix and the translation vector can be calculated from the versor product and can be saved for further analysis.

The 3D affine transformation can be represented as

X' = A(X − C) + (T + C)

where X is a vector that represents a point in 3D space, A is a 3 × 3 matrix that represents the affine transformation matrix, C is a vector that represents the transformation center, and T is a vector that represents the 3D translation. X' is the new position of X after the affine transformation. The affine registration from ITK that we utilized consists of rotation, scaling, shearing and translation in three dimensions. There are (3 + 1) × 3 parameters in this transformation: the first 3 × 3 parameters define A, and the last 3 parameters define the translation in each dimension. The center of the transformation is automatically calculated by the programs and is also available.
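To give an idea of how these parameters can be retrieved in practice, the sketch below constructs a versor rigid transform and an affine transform with ITK and prints the derived rotation matrix, offset and parameter vector. The numeric values are arbitrary placeholders, and the exact calls may differ between ITK versions.

```cpp
#include <iostream>
#include "itkAffineTransform.h"
#include "itkVersorRigid3DTransform.h"

int main()
{
  // Rigid transform: rotation stored as a versor plus a translation.
  using RigidTransformType = itk::VersorRigid3DTransform<double>;
  auto rigid = RigidTransformType::New();

  RigidTransformType::VersorType versor;
  RigidTransformType::VersorType::VectorType axis;
  axis[0] = 0.0; axis[1] = 0.0; axis[2] = 1.0;
  versor.Set(axis, 0.1);               // rotation of 0.1 rad about the z axis
  rigid->SetRotation(versor);

  RigidTransformType::OutputVectorType translation;
  translation[0] = 1.0; translation[1] = 2.0; translation[2] = 3.0;
  rigid->SetTranslation(translation);

  // Convert the versor product into the Euclidean matrix form for analysis.
  std::cout << "Rotation matrix:\n" << rigid->GetMatrix();
  std::cout << "Offset: " << rigid->GetOffset() << "\n";

  // Affine transform: 12 parameters (3 x 3 matrix A followed by the translation).
  using AffineTransformType = itk::AffineTransform<double, 3>;
  auto affine = AffineTransformType::New();
  affine->SetIdentity();
  std::cout << "Affine parameters: " << affine->GetParameters() << std::endl;
  return 0;
}
```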
B-Spline based non-rigid transformation [3,6,9,13] generates a dense deformation field in which a deformation vector is assigned to every point in 3D space. The deformation field is available and can be saved in the form of a vector image from ITK. The deformation vectors can be used to further analyze local shape variations.

Applicability of the method
Using our cutting tools to build models from 2D image stacks allows beginners in medical fields to learn anatomy intuitively and to enjoy the process of separating biological structures from the virtual body model before dealing with real subjects. Quickly and accurately creating various 3D averaged models can satisfy the requirements for a large number of models in virtual crash testing, therapy planning, and customizing replacement body parts. Large scale morphological studies that require quantification of anatomical features can be very tedious and are often restricted to a few important measurements. Our method facilitates morphological studies by allowing anatomical structures to be measured and compared rapidly and in more detail. These tools help put morphological analysis on a similar level to other studies, such as genetic and molecular studies, where a large amount of data and measurements can be dealt with relatively quickly.

The issue of homology, which refers to biological structures that have the same function, is also addressed through our method. If we measure an average and do quantitative comparisons, we want to compare the same anatomical region. This requires the two models being compared to be first registered correctly with each other, such that if one area of interest is picked in one model, it refers to the same region in the other model. The iterative registration employed in our approach can, to a large extent, reduce the misalignments. The method we developed leverages the functionalities and technologies of existing toolkits, and the resulting software package allows biologists to build their generic models more quickly and accurately.

As our virtual dissection tools are implemented in Java, they can run on both regular display systems and on the state-of-the-art CAVE Automated Virtual Environment [26], which is a 3D stereo-based four-wall display system installed at the University of Calgary to provide users with a virtual immersive environment. One of the advantages of using this virtual reality system as a platform for our cutting tools is that users can treat real world objects and virtual world objects in much the same way, which is not possible in a desktop computing environment or even in a single-wall stereo display environment. For example, users can move around in the display environment and view virtual objects from the "inside" such that the detailed operations can be easily understood. By harnessing the power of the CAVE and our cutting tools, users have more flexibility, including a wide variety of viewing perspectives and a high degree of freedom to set locations and orientations of the cutting tools. This is a definite advantage over ordinary desktop computing environments, where the objects need to be frequently rotated to perceive their 3D structures.

Conclusions
We have developed a new technique that uses virtual model cutting and iterative image registration to create generic models from 2D image stacks of a group of individuals. Our system allows biologists to build generic 3D models quickly and accurately. However, particularly complicated morphological structures, such as the highly branched and convoluted designs that typify vascular or nervous networks, still pose a challenge to our generalized and enhanced method for generic model creation. It is difficult to use the current manual virtual dissection tools to remove such sub-models from initial, unprocessed scans. More convenient and intuitive manual virtual dissection methods will be developed in our future research. Producing deformable models based on the current tools will also be an area of further development. Those deformable averaged models can then be used for automatically segmenting anatomical structures.
More advanced automated segmentation algorithms that utilize generic models will be studied to enable higher throughput analyses of anatomical structures in both medical and more general biological contexts. Quantification of 3D shape variations will also be studied based on our generic model building technique.

Software availability and requirements
The implementation of our method is available for free download at http://www.visualgenomics.ca/~mxiao/research.html. The current version of the software has been tested on Unix Solaris 10 and Windows XP with .NET Framework 3.5. In order to run the program from our jar files, at least Java 1.6 needs to be installed. ImageJ as well as shared (dynamically linked) libraries of VTK and ITK should also be installed. Detailed installation instructions and the user's guide are also available on the project website. VTK, ITK and ImageJ are all open source and freely available software toolkits.

Acknowledgements
This work has been supported by Genome Canada through Genome Alberta; Alberta Science and Research Authority; Western Economic Diversification; the Governments of Canada and of Alberta through the Western Economic Partnership Agreement; the iCORE/Sun Microsystems Industrial Research Chair program; the Alberta Network for Proteomics Innovation; and the Canada Foundation for Innovation. We thank Dr. Fred Bookstein for his comments on this project. We thank Wei Liu for scanning the mice. We thank Megan Smith for her comments and her help with editing the paper. We also thank the reviewers for their comments.

Author details
Sun Center of Excellence for Visual Genomics, Department of Biochemistry and Molecular Biology, Faculty of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, T2N 4N1, Canada. Morphometrics Laboratory, Department of Cell Biology and Anatomy, Faculty of Medicine, University of Calgary, 3330 Hospital Drive NW, Calgary, T2N 4N1, Canada.
Authors' contributions
MX, JS, OEMP, EJS, BH and CWS participated in writing the manuscript and designing the technique. MX developed the computing framework. BH and CWS directed the research. All authors read and approved the final version of the manuscript.

Competing interests
The authors declare that they have no competing interests.

Received: 5 August 2009. Accepted: 8 February 2010. Published: 8 February 2010.

Pre-publication history
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2342/10/5/prepub

doi:10.1186/1471-2342-10-5
Cite this article as: Xiao et al.: Building generic anatomical models using virtual model cutting and iterative registration. BMC Medical Imaging 2010, 10:5.

References
1. Thompson PM, Mega MS, Narr KL, Sowell ER, Blanton RE, Toga AW: Brain image analysis and atlas construction. In Handbook of Medical Imaging: Medical Image Processing and Analysis. Edited by Sonka M, Fitzpatrick JM. SPIE Press; 2000, 2:1063-1119.
2. Small CG: The Statistical Theory of Shape. New York: Springer; 1996.
3. Olafsdottir H, Darvann TA, Hermann NV, Oubel E, Ersboll BK, Frangi AF, Larsen P, Perlyn CA, Morriss-Kay GM, Kreiborg S: Computational mouse atlases and their application to automatic assessment of craniofacial dysmorphology caused by the Crouzon mutation Fgfr2C342Y. Journal of Anatomy 2007, 211:37-52.
4. Barratt DC, Chan CSK, Edwards PJ, Penney GP, Slomczykowski M, Carter TJ, Hawkes DJ: Instantiation and registration of statistical shape models of the femur and pelvis using 3D ultrasound imaging. Medical Image Analysis 2008, 12:258-374.
5. Maschino E, Maurin Y, Andrey P: Joint registration and averaging of multiple 3D anatomical surface models. Computer Vision and Image Understanding 2006, 1:16-30.
6. Brandt R, Rohlfing T, Rybak J, Krofczik S, Maye A, Westerhoff M, Hege HC, Menzel R: Three-dimensional average-shape atlas of the honeybee brain and its applications. The Journal of Comparative Neurology 2005, 492:1-19.
7. Avants B, Gee JC: Shape averaging with diffeomorphic flows for atlas creation. Proceedings of the IEEE International Symposium on Biomedical Imaging: April 2004; Arlington, VA 2004, 1:595-598.
8. Argall BD, Saad ZS, Beauchamp MS: Simplified intersubject averaging on the cortical surface using SUMA. Human Brain Mapping 2006, 27:14-27.
9. Rueckert D, Frangi AF, Schnabel JA: Automatic construction of 3D statistical deformation models using non-rigid registration. Lecture Notes in Computer Science: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2001. Edited by Niessen WJ, Viergever MA. Berlin Heidelberg: Springer; 2001, 2208:77-84.
10. Rajamani KT, Styner MA, Talib H, Zheng G, Nolte LP, Ballester MAG: Statistical deformable bone models for robust 3D surface extrapolation from sparse data. Medical Image Analysis 2007, 11:99-109.
11. Schmutz B, Reynolds KJ, Slavotinek JP: Development and validation of a generic 3D model of the distal femur. Computer Methods in Biomechanics and Biomedical Engineering 2006, 5:305-312.
12. Zachow S, Zilske M, Hege HC: 3D reconstruction of individual anatomy from medical image data: segmentation and geometry processing. Proceedings of the CADFEM Users Meeting: Dresden, Germany 2007.
13. Yoo T (Ed): Insight into Images. A K Peters; 2004.
14. Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G: User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage 2006, 3:1116-1128.
15. Chen T, Metaxas D: A hybrid framework for 3D medical image segmentation. Medical Image Analysis 2005, 6:547-565.
16. Xiao M, Soh J, Meruvia-Pastor O, Osborn D, Lam N, Hallgrímsson B, Sensen CW: An efficient virtual dissection tool to create generic models for anatomical atlases. Studies in Health Technology and Informatics 2009, 142:426-428.
17. Schroeder W, Martin K, Lorensen B: The Visualization Toolkit. Prentice-Hall.
18. Rasband WS: ImageJ. U. S. National Institutes of Health, Bethesda, Maryland, USA 1997. http://rsb.info.nih.gov/ij/.
19. Kristensen E, Parsons TE, Hallgrímsson B, Boyd SK: A novel 3D image-based morphological method for phenotypic analysis. IEEE Transactions on Biomedical Engineering 2008, 12:2826-2831.
20. Dice LR: Measures of the amount of ecologic association between species. Ecology 1945, 26:297-302.
21. Guimond A, Meunier J, Thirion JP: Average brain models: a convergence study. Computer Vision and Image Understanding 2000, 2:192-210.
22. Guimond A, Meunier J, Thirion JP: Automatic computation of average brain models. Lecture Notes in Computer Science: Medical Image Computing and Computer-Assisted Intervention - MICCAI'98. Berlin Heidelberg: Springer; 1998, 1496:631-640.
23. Schaefer S, Warren J: Dual marching cubes: primal contouring of dual grids. Proceedings of the 12th Pacific Conference on Computer Graphics and Applications: October 2004; Seoul, Korea 2004, 70-76.
24. Schaefer S, Ju T, Warren J: Manifold dual contouring. IEEE Transactions on Visualization and Computer Graphics 2007, 3:610-619.
25. Bayly PV, Black EE, Pedersen RC, Leister EP, Genin GM: In vivo imaging of rapid deformation and strain in an animal model of traumatic brain injury. Journal of Biomechanics 2006, 6:1086-1095.
26. Sensen CW: Using CAVE® technology for functional genomics studies. Diabetes Technology & Therapeutics 2002, 4:867-871.
