Unsupervised Neural Techniques Applied to MR Brain Image Segmentation

Advances in Artificial Neural Systems, Volume 2012 (2012), Article ID 457590, 7 pages. doi:10.1155/2012/457590

Research Article

A. Ortiz,1 J. M. Gorriz,2 J. Ramirez,2 and D. Salas-Gonzalez2
1 Department of Communication Engineering, University of Malaga, 29071 Malaga, Spain
2 Department of Signal Theory, Networking and Communications, University of Granada, 18071 Granada, Spain

Received 17 February 2012; Accepted 14 April 2012
Academic Editor: Anke Meyer-Baese

Copyright © 2012 A. Ortiz et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The primary goal of brain image segmentation is to partition a given brain image into different regions representing anatomical structures. Magnetic resonance image (MRI) segmentation is especially interesting, since accurate segmentation into white matter, grey matter, and cerebrospinal fluid provides a way to identify many brain disorders such as dementia, schizophrenia, or Alzheimer's disease (AD). Image segmentation is therefore a valuable tool for neuroanatomical analyses. In this paper we present three alternative algorithms for MR brain image segmentation, with the self-organizing map (SOM) as the core of each. The procedures devised do not use any a priori knowledge about voxel class assignment, resulting in fully unsupervised methods for MRI segmentation that make it possible to discover the different tissue classes automatically. Our algorithms have been tested on images from the Internet Brain Segmentation Repository (IBSR), outperforming existing methods and providing average overlap metric values of 0.7 for white and grey matter and 0.45 for cerebrospinal fluid. Furthermore, they also provide good results for high-resolution MR images provided by the Nuclear Medicine Service of the "Virgen de las Nieves" Hospital (Granada, Spain).

1. Introduction

Nowadays, magnetic resonance imaging (MRI) systems provide excellent spatial resolution as well as high tissue contrast. Nevertheless, current MRI systems can acquire images with a 16-bit depth, corresponding to 65536 grey levels, whereas the human eye cannot distinguish more than a few tens of grey levels. Moreover, MRI systems deliver the images as slices that compose a 3D volume. Thus, computer-aided tools are necessary to exploit all the information contained in an MRI, and they are becoming very valuable for diagnosing brain disorders such as Alzheimer's disease [1-5].
Moreover, modern computers, with large amounts of memory and several processing cores, provide enough processing power to analyse an MRI in a reasonable time.

Image segmentation consists in partitioning an image into different regions. In MRI, segmentation consists of partitioning the image into different neuroanatomical structures corresponding to different tissues. Hence, by analysing the neuroanatomical structures and the distribution of the tissues in the image, brain disorders or anomalies can be detected. Consequently, the importance of having effective tools for grouping and recognizing different anatomical tissues, structures, and fluids is growing as medical imaging systems improve. These tools are usually trained to recognize the three basic tissue classes found in a healthy brain MR image: white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF); all nonrecognized tissues or fluids are classified as suspected of being pathological. The segmentation process can be performed in two ways. The first consists of manual delineation of the structures present in the image by an expert; the second consists of using an automatic segmentation technique. As noted above, computer image processing techniques allow all the information contained in an MRI to be exploited. There are several automatic segmentation techniques. Some of them use the information contained in the image histogram [6-11]: since areas of different contrast should correspond to different tissues, the image histogram can be used to partition the image. Nevertheless, variations in the contrast of the same tissue appear within an image owing to RF noise or to shading effects caused by magnetic field variations, resulting in tissue misclassification. Other methods use statistical classifiers based on expectation-maximization (EM) algorithms [12-14], maximum likelihood (ML) estimation [15], or Markov random fields [16, 17]. Further segmentation techniques are based on artificial neural network classifiers [8, 18-21] such as self-organizing maps (SOMs) [18, 19, 21-23]. In this paper we present three segmentation alternatives based on SOMs, which provide good results on the Internet Brain Segmentation Repository (IBSR) [16] images.

2. SOM Algorithm

The SOM is an unsupervised classifier proposed by Kohonen, and it has been used in a large number of classification and modelling applications [24]. The self-organizing process is based on computing the distance (usually the Euclidean distance) between each training sample and all the units of the map as part of a competitive learning process. Several issues, such as the map topology, the number of units on the map, the weight initialization, and the training process, are decisive for the classification quality. Regarding the topology, a 2D hexagonal grid was selected since it fitted the feature space better, as shown in the experiments.

The SOM algorithm can be summarized as follows. Let X \subset \mathbb{R}^d be the data manifold. In each iteration, the winning unit is computed according to

U_{\omega}(t) = \arg\min_i \{\, \lVert x(t) - \omega_i(t) \rVert \,\},   (1)

where x(t), x \in X, is the input vector at time t and \omega_i(t) is the prototype vector associated with unit i. The unit closest to the input vector, U_{\omega}(t), is referred to as the winning unit, and its associated prototype is updated.
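To make the competitive step in (1) concrete, a minimal NumPy sketch of the winning-unit search is given below. It is illustrative only and is not the authors' implementation (the paper uses the SOM Toolbox for MATLAB [27]); the map size, feature dimension, and random example data are assumptions chosen just to make the example run.

```python
# Minimal sketch of the winning-unit (BMU) search in equation (1).
import numpy as np

def winning_unit(x, W):
    """Return the index of the unit whose prototype w_i(t) is closest to x(t).

    x : (d,) input vector at time t
    W : (n_units, d) matrix whose rows are the prototype vectors w_i(t)
    """
    distances = np.linalg.norm(W - x, axis=1)   # Euclidean distance to every unit
    return int(np.argmin(distances))            # index of the BMU U_w(t)

rng = np.random.default_rng(0)
W = rng.random((100, 16))                       # e.g. a 10 x 10 map with 16-D features
x = rng.random(16)
print(winning_unit(x, W))
```

In the full training loop, the prototype of the BMU found this way, together with those of its neighbours, is then updated as described next.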
To complete the adaptive learning process of the SOM, the prototypes of the units in the neighborhood of the winning unit are also updated according to

\omega_i(t+1) = \omega_i(t) + \alpha(t)\, h_{Ui}(t)\, [\, x(t) - \omega_i(t) \,],   (2)

where \alpha(t) is the exponentially decaying learning factor and h_{Ui}(t) is the neighborhood function associated with unit i. Both the learning factor and the neighborhood function decay with time, so the prototype adaptation becomes slower and the neighborhood of unit i contains fewer units:

h_{Ui}(t) = e^{-\lVert r_U - r_i \rVert^2 / 2\sigma(t)^2}.   (3)

Equation (3) is the neighbourhood function, where r_i is the position of unit i in the output space and \lVert r_U - r_i \rVert is the distance between the winning unit and unit i in the output space. The neighbourhood is defined by a Gaussian function which shrinks at each iteration as shown in (4). In this competitive process, the winning unit is called the best matching unit (BMU). \sigma(t) controls the reduction of the Gaussian neighborhood at each iteration, and \tau_1 is a time constant which depends on the number of iterations and the map radius, computed as \tau_1 = number_of_iterations / map_radius:

\sigma(t) = \sigma_0\, e^{-t/\tau_1}.   (4)

The quality of the trained map can be assessed by means of two measures: the quantization error (q_e), which gives the average distance between each data vector and its best matching unit (BMU), and the topological error (t_e), which measures the proportion of all data vectors for which the first and second BMUs are not adjacent units. They are defined as

t_e = \frac{1}{N} \sum_{i=1}^{N} u(\vec{x}_i),   (5)

q_e = \frac{1}{N} \sum_{i=1}^{N} \lVert \vec{x}_i - \omega_{b(\vec{x}_i)} \rVert.   (6)

In (5), N is the total number of data vectors and u(\vec{x}_i) is 1 if the first and second BMUs of \vec{x}_i are nonadjacent and 0 otherwise. Equation (6) defines the quantization error, where \vec{x}_i is the i-th data vector in the input space and \omega_{b(\vec{x}_i)} is the weight (prototype) associated with the best matching unit of \vec{x}_i. Lower values of q_e and t_e imply better topology preservation, which is equivalent to a better clustering result; that is, the lower the quantization error and the topological error, the better the quality of the SOM [25, 26]. In this paper, the SOM Toolbox [27] has been used to implement the SOM.

3. MR Image Segmentation with SOM

In this section we present two image segmentation algorithms based on unsupervised SOMs. The first uses the histogram to segment the whole volume (i.e., to classify all the voxels in the volumetric image). The second extracts a set of features from each image slice and uses a SOM to classify the feature vectors into clusters by means of the entropy-gradient clustering method devised here. Figure 1 shows the block diagram of the presented segmentation algorithms.

Figure 1: Block diagram of the segmentation methods.

3.1. Image Preprocessing

Once the MR image has been acquired, a preprocessing step is performed in order to remove noise and to homogenize the image background. Brain extraction, that is, the removal of undesired structures such as the skull and scalp, can also be done at this stage. There are several algorithms for this purpose, such as the brain surface extractor (BSE), the brain extraction tool (BET) [8], the Minneapolis consensus strip (McStrip), or the hybrid watershed algorithm (HWA) [2].
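As an illustration of this preprocessing stage, the sketch below shows one way of skull-stripping a volume with FSL's BET from Python and loading the result for further processing. The file names and the fractional-intensity threshold are hypothetical, and the paper does not report the exact BET settings it used.

```python
# Hedged example: skull-strip an MR volume with FSL's "bet" command-line tool
# (assumed to be installed and on the PATH) and load the stripped volume with
# nibabel. File names and the -f threshold are illustrative assumptions.
import subprocess
import nibabel as nib

in_vol = "IBSR_12_ana.nii.gz"          # hypothetical input volume
out_vol = "IBSR_12_ana_brain.nii.gz"   # skull-stripped output written by BET

# -f sets the fractional intensity threshold (0-1); smaller values keep more tissue.
subprocess.run(["bet", in_vol, out_vol, "-f", "0.5"], check=True)

brain = nib.load(out_vol).get_fdata()  # 3D NumPy array of voxel intensities
print(brain.shape, brain.dtype)
```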
Since the IBSR 1.0 images already have these undesired structures removed, brain extraction is not required for them. The images provided by IBSR 2.0, however, are distributed with the scalp and skull still present; for these images, the brain has been extracted in the preprocessing stage using BET.

3.2. Segmentation Using the Volume Image Histogram (HFS-SOM)

The first step after preprocessing consists in computing the volume image histogram, which describes the probability of occurrence of the voxel intensities in the volume and provides information regarding the different tissues. A common approach to avoid processing the large number of voxels present in MR images consists in modelling the intensity values with a finite number of prototypes, which improves computational efficiency. After computing the histogram, bin 0 is removed, since it contains all the background voxels; thus, only information corresponding to the brain is retained. Figure 2 shows the rendered brain surface from the IBSR volume 12 and its histogram.

Figure 2: Rendered brain surface extracted from the IBSR_12 volume (a) and computed histogram (b).

The histogram data, namely the intensity occurrence probabilities p_i and the relative positions (bin numbers) b_i, are used to compose the feature vectors \vec{F} = (p_i, b_i), p_i \in \mathbb{R}, b_i \in \mathbb{Z}, to be classified by the SOM. In a trained SOM, the output layer is composed of a reduced number of prototypes (one per unit of the output layer) modelling the input data manifold. In addition, the most similar prototypes are located close together on the output map, while the most dissimilar ones are located far apart. Nevertheless, since every unit has an associated prototype, it is necessary to cluster the SOM in order to define the borders between clusters; in other words, each prototype is grouped so that it belongs to a cluster. Thus, the k-means algorithm is used to cluster the SOM, grouping the prototypes into a number of different classes, and the Davies-Bouldin index (DBI) [28], which gives lower values for better clustering results, is computed for different values of k to provide a measure of clustering validity (see the illustrative sketch below). The clusters on the SOM group the units so that they belong to a specific class; as each of these units is the BMU of a specific set of voxels, the clusters define different voxel classes. In this way, each voxel is labelled as belonging to a class (i.e., a segment).

3.3. MR Image Segmentation with SOM and the Entropy-Gradient Algorithm (EGS-SOM)

The method described in this section is also based on a SOM for voxel classification, but the histogram information from the image volume is replaced by a set of computed features, from which the most discriminant ones are selected. SOM clustering is then performed by the EGS-SOM method described hereinafter, which allows us to work with higher-resolution images and provides good segmentation results, as shown in the experiments.

3.3.1. Feature Extraction and Selection

In this stage, significant features are extracted from the MR image to be subjected to classification. As mentioned before, we process the image slice by slice in each plane, so feature extraction is carried out using an overlapping sliding window of 7 x 7 pixels on each slice of a given plane. The window size plays an important role in the feature extraction process: smaller windows are not able to capture the second-order (i.e., texture) features, whereas larger windows result in a loss of resolution. Therefore, the 7 x 7 window gives a good trade-off between complexity and performance.
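For the HFS-SOM pipeline of Section 3.2, the clustering of the trained SOM prototypes can be sketched as follows. The use of scikit-learn's KMeans and davies_bouldin_score, as well as the range of k values tried, are assumptions made for illustration; the paper does not fix these implementation details.

```python
# Sketch of the HFS-SOM clustering step: group the SOM prototype vectors with
# k-means and keep the k that minimizes the Davies-Bouldin index (lower = better).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_prototypes(prototypes, k_range=range(2, 9)):
    """prototypes: (n_units, d) array of trained SOM prototype vectors."""
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(prototypes)
        dbi = davies_bouldin_score(prototypes, labels)
        if best is None or dbi < best[0]:
            best = (dbi, k, labels)
    return best   # (best DBI, chosen k, cluster label of each SOM unit)
```

Each voxel then receives the cluster label of its BMU, which produces the HFS-SOM segmentation. Similarly, the 7 x 7 windowed feature extraction just described can be sketched with scikit-image, whose GLCM routines are graycomatrix and graycoprops (spelled greycomatrix/greycoprops in older releases). The quantization to 32 grey levels, the single GLCM distance and angle, and the particular descriptors computed here are simplifications, not the paper's exact settings.

```python
# Sketch of windowed first- and second-order feature extraction for one slice.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def window_features(win, levels=32):
    """First-order features and a few GLCM descriptors for one 7x7 window."""
    if win.max() > 0:
        q = np.floor(win.astype(float) / win.max() * (levels - 1)).astype(np.uint8)
    else:
        q = np.zeros_like(win, dtype=np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))       # second-order entropy
    centre = win[win.shape[0] // 2, win.shape[1] // 2]     # grey level of the centre pixel
    return [centre, win.mean(), win.var(),                 # first-order features
            graycoprops(glcm, "contrast")[0, 0],
            graycoprops(glcm, "energy")[0, 0],
            graycoprops(glcm, "correlation")[0, 0],
            entropy]

def slice_features(img, size=7):
    """Feature vector and centre-pixel coordinates for every 7x7 window of a slice."""
    feats, coords = [], []
    for r in range(img.shape[0] - size + 1):
        for c in range(img.shape[1] - size + 1):
            feats.append(window_features(img[r:r + size, c:c + size]))
            coords.append((r + size // 2, c + size // 2))  # pixel associated with the window
    return np.array(feats), np.array(coords)
```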
In this paper we use first- and second-order statistical features [29]. The first-order features extracted from the image are intensity, mean, and variance. The intensity refers to the grey level of the centre pixel of the window, while the mean and variance are computed over the grey levels present in the window. In addition, we use second-order, textural features. Haralick et al. [30] proposed 14 features for image classification, computed using the grey-level co-occurrence matrix (GLCM) method. The second-order features we use are energy, entropy, contrast, angular second moment (ASM), sum average, autocorrelation, correlation, inverse difference moment, maximum probability, cluster prominence, cluster shade, dissimilarity, and second-order variance, as well as the moment invariants [31]. In order to select the most discriminant features, a genetic algorithm is used to minimize the topological and quantization errors of the SOM through the fitness function shown in (7):

F_{QT} = 0.5\, q_e + 0.5\, t_e.   (7)

The feature selection process is summarized in Figure 3.

Figure 3: Feature selection process with the genetic algorithm.

The stopping criterion is met when the performance of the proposed solutions does not improve significantly (by more than 1%) or the maximum number of generations (500) is reached. Once the dimension of the feature space has been reduced, the vectors of this space are used to train a SOM. The topology of the map and the number of units on it are decisive for the SOM quality; in that sense, we use a hexagonal grid, since it allows the prototypes to fit the feature-space vectors better. Each BMU on the SOM has an associated pixel on the image. This association is made through a matrix, computed during the feature extraction phase, which stores the coordinates of the central pixel of each window, allowing a feature vector to be associated with an image pixel. Nevertheless, these clusters only roughly define the different areas (segments) on the image, and a further fine-tuning phase is required. This fine-tuning is accomplished by the entropy-gradient method.

Entropy-Gradient Method. The procedure devised consists of using the feature vectors associated with each BMU to compute a similarity measure between the vectors belonging to each BMU and the vectors associated with every other BMU. Next, the BMUs are sorted in ascending order of contrast. Finally, the feature vectors of each BMU are included in a cluster. For each map unit, we compute the accumulated entropy

H_{m_i} = \sum_{n=1}^{N_p} H_n,   (8)

where i is the map unit index and N_p is the number of pixels assigned to the map unit in the classification process (that is, unit i has N_p associated pixels). Since the output layer of the SOM is a two-dimensional space, we calculate the entropy-gradient vector at each map unit from (8) and move in the opposite direction for clustering.
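The paper does not give pseudocode for this entropy-gradient clustering stage, so the sketch below is only one possible reading of it: the accumulated entropy (8) of each map unit is arranged on the 2D output grid, and every unit walks to its lowest-entropy neighbour (i.e., against the local entropy gradient) until it reaches a local minimum; units ending in the same minimum are placed in the same cluster. The way H_map is built, the descent rule, and the handling of plateaus are all assumptions.

```python
# Speculative sketch of an entropy-gradient clustering of SOM units.
# H_map[r, c] is assumed to hold the accumulated entropy H_m_i of the unit at
# map position (r, c), e.g. the sum of the entropy features of the pixels
# whose BMU that unit is (equation (8)).
import numpy as np

def entropy_gradient_clusters(H_map):
    rows, cols = H_map.shape
    labels = -np.ones((rows, cols), dtype=int)    # -1 means "not yet assigned"

    for r0 in range(rows):
        for c0 in range(cols):
            r, c, path = r0, c0, []
            while labels[r, c] == -1:
                path.append((r, c))
                # candidate moves: the unit itself and its neighbours on the map
                nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if 0 <= r + dr < rows and 0 <= c + dc < cols]
                r2, c2 = min(nbrs, key=lambda p: H_map[p])
                if H_map[r2, c2] >= H_map[r, c]:  # no strictly lower neighbour:
                    labels[r, c] = r * cols + c   # a local entropy minimum
                    break
                r, c = r2, c2                     # step against the entropy gradient
            for p in path:                        # every visited unit joins the
                labels[p] = labels[r, c]          # cluster of the minimum it reached
    return labels
```

Each pixel would then inherit the cluster of its BMU to form the final segments.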
4. Results and Discussion

In this section we show the segmentation results obtained using real MR brain images from two different sources. One of these sources is the IBSR database [32], in two versions, IBSR and IBSR 2.0. Figures 4(a) and 4(b) show the segmentation results for the IBSR volume 100_23 using the HFS-SOM algorithm and the EGS-SOM algorithm, respectively. In these images, WM, GM, and CSF are shown for slices 120, 130, 140, 150, 160, and 170 in the axial plane. The expert segmentation from the IBSR database is shown in Figure 4(c).

Figure 4: Segmentation of the IBSR volume 100_23 using the HFS-SOM algorithm (a) and the EGS-SOM algorithm (b); the ground truth is shown in (c). Slices 120, 130, 140, 150, 160, and 170 in the axial plane are shown in each column; the first column corresponds to WM, the second to GM, and the third to CSF.

Figure 5(a) shows the segmentation results for the IBSR 2.0 volume 12 using the fast volume segmentation (HFS-SOM) algorithm. In this figure, each row corresponds to a tissue and each image column corresponds to a different slice. In the same way, Figure 5(b) shows the same slices as Figure 5(a), but with the segmentation performed by the EGS-SOM algorithm. Figure 5(c) shows the segmentation performed by expert radiologists, provided by the IBSR database (ground truth).

Figure 5: Segmentation of the IBSR 2.0 volume 12 using the HFS-SOM algorithm (a) and the EGS-SOM algorithm (b); the ground truth is shown in (c). Slices 110, 120, 130, 140, 150, and 160 in the axial plane are shown in each column; the first column corresponds to WM, the second to GM, and the third to CSF.

Visual comparison between the automatic segmentations and the ground truth shows that the EGS-SOM method outperforms the fast volume segmentation method. This is also apparent in Figure 6, where Tanimoto's index is shown for different segmentation algorithms: SSOM corresponds to our entropy-gradient algorithm, BMAP is biased MAP [33], AMAP is adaptive MAP [33], MAP is maximum a posteriori probability [34], MLC is maximum likelihood [35], FUZZY is fuzzy k-means [36], and TSKMEANS is tree-structured k-means [36]. The performance of the presented segmentation techniques has been evaluated by computing the average overlap rate through Tanimoto's index, which has been widely used by other authors to compare the segmentation performance of their proposals [13, 16, 17, 21, 26, 37-41]. Tanimoto's index is defined as

T(S_1, S_2) = \frac{|S_1 \cap S_2|}{|S_1 \cup S_2|},   (9)

where S_1 is the segmentation set and S_2 is the ground truth.

Figure 6: Average overlap metric comparison for different segmentation methods.

5. Conclusions

In this paper we presented fully unsupervised segmentation methods for MR images, based on hybrid artificial intelligence techniques for improving the feature extraction process and on self-organizing maps for pixel classification. The use of a genetic algorithm provides a way to train the self-organizing map used as a classifier in the most efficient way, since the dimension of the training samples (feature vectors) is reduced so that they are sufficiently discriminant but not redundant. As a result, the number of units (neurons) on the map is also optimized, as is the classification process. Thus, we take advantage of the competitive learning model of the SOM, which groups the pixels into clusters; this competitive process discovers similarities among the pixels, resulting in an unsupervised way to segment the image. Moreover, the cluster borders are refined using the entropy-gradient method presented in this paper. The whole process allows the segments present in the image to be identified without using any a priori information.
The results shown in Section 4 have been compared with the segmentations provided by the IBSR database and outperform the results obtained by other algorithms such as k-means or fuzzy k-means. The number of segments, or different tissues, found in an MR image is determined automatically, making it possible to find tissues which could be identified with a pathology.

Acknowledgment

This work was partly supported by the Consejería de Innovación, Ciencia y Empresa (Junta de Andalucía, Spain) under the Excellence Projects TIC-02566 and TIC-4530.

References

[1] I. A. Illán, J. M. Górriz, J. Ramírez, et al., “18F-FDG PET imaging analysis for computer aided Alzheimer's diagnosis,” Information Sciences, vol. 181, no. 4, pp. 903–916, 2011.
[2] I. A. Illán, J. M. Górriz, M. M. López, et al., “Computer aided diagnosis of Alzheimer's disease using component based SVM,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 2376–2382, 2011.
[3] J. M. Górriz, F. Segovia, J. Ramírez, A. Lassl, and D. Salas-Gonzalez, “GMM based SPECT image classification for the diagnosis of Alzheimer's disease,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 2313–2325, 2011.
[4] M. Kamber, R. Shinghal, D. L. Collins, G. S. Francis, and A. C. Evans, “Model-based 3-D segmentation of multiple sclerosis lesions in magnetic resonance brain images,” IEEE Transactions on Medical Imaging, vol. 14, no. 3, pp. 442–453, 1995.
[5] J. Ramírez, J. M. Górriz, D. Salas-Gonzalez, et al., “Computer-aided diagnosis of Alzheimer's type dementia combining support vector machines and discriminant set of features,” Information Sciences, in press.
[6] D. N. Kennedy, P. A. Filipek, and V. S. Caviness, “Anatomic segmentation and volumetric calculations in nuclear magnetic resonance imaging,” IEEE Transactions on Medical Imaging, vol. 8, no. 1, pp. 1–7, 1989.
[7] A. Khan, S. F. Tahir, A. Majid, and T. S. Choi, “Machine learning based adaptive watermark decoding in view of anticipated attack,” Pattern Recognition, vol. 41, no. 8, pp. 2594–2610, 2008.
[8] Z. Yang and J. Laaksonen, “Interactive retrieval in facial image database using self-organizing maps,” in Proceedings of the MVA, 2005.
[9] M. García-Sebastián, E. Fernández, M. Graña, and F. J. Torrealdea, “A parametric gradient descent MRI intensity inhomogeneity correction algorithm,” Pattern Recognition Letters, vol. 28, no. 13, pp. 1657–1666, 2007.
[10] E. Fernández, M. Graña, and J. R. Cabello, “Gradient based evolution strategy for parametric illumination correction,” Electronics Letters, vol. 40, no. 9, pp. 531–532, 2004.
[11] M. García-Sebastián, A. I. González, and M. Graña, “An adaptive field rule for non-parametric MRI intensity inhomogeneity estimation algorithm,” Neurocomputing, vol. 72, no. 16-18, pp. 3556–3569, 2009.
[12] T. Kapur, L. Grimson, W. M. Wells, and R. Kikinis, “Segmentation of brain tissue from magnetic resonance images,” Medical Image Analysis, vol. 1, no. 2, pp. 109–127, 1996.
[13] Y. F. Tsai, I. J. Chiang, Y. C. Lee, C. C. Liao, and K. L. Wang, “Automatic MRI meningioma segmentation using estimation maximization,” in Proceedings of the 27th Annual International Conference of the Engineering in Medicine and Biology Society (IEEE-EMBS '05), pp. 3074–3077, September 2005.
[14] J. Xie and H. T. Tsui, “Image segmentation based on maximum-likelihood estimation and optimum entropy-distribution (MLE-OED),” Pattern Recognition Letters, vol. 25, no. 10, pp. 1133–1141, 2004.
[15] Y. Zhang, M. Brady, and S. Smith, “Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm,” IEEE Transactions on Medical Imaging, vol. 20, no. 1, pp. 45–57, 2001.
[16] N. A. Mohamed, M. N. Ahmed, and A. Farag, “Modified fuzzy c-mean in medical image segmentation,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '99), pp. 3429–3432, March 1999.
[17] W. M. Wells III, W. E. L. Grimson, R. Kikinis, and F. A. Jolesz, “Adaptive segmentation of MRI data,” IEEE Transactions on Medical Imaging, vol. 15, no. 4, pp. 429–442, 1996.
[18] D. Tian and L. Fan, “A brain MR images segmentation method based on SOM neural network,” in Proceedings of the 1st International Conference on Bioinformatics and Biomedical Engineering (ICBBE '07), pp. 686–689, July 2007.
[19] I. Güler, A. Demirhan, and R. Karakiş, “Interpretation of MR images using self-organizing maps and knowledge-based expert systems,” Digital Signal Processing, vol. 19, no. 4, pp. 668–677, 2009.
[20] P. K. Sahoo, S. Soltani, and A. K. C. Wong, “A survey of thresholding techniques,” Computer Vision, Graphics and Image Processing, vol. 41, no. 2, pp. 233–260, 1988.
[21] W. Sun, “Segmentation method of MRI using fuzzy Gaussian basis neural network,” Neural Information Processing, vol. 8, no. 2, pp. 19–24, 2005.
[22] J. Alirezaie, M. E. Jernigan, and C. Nahmias, “Automatic segmentation of cerebral MR images using artificial neural networks,” IEEE Transactions on Nuclear Science, vol. 45, no. 4, pp. 2174–2182, 1998.
[23] A. Ortiz, J. M. Górriz, J. Ramírez, and D. Salas-Gonzalez, “MR brain image segmentation by hierarchical growing SOM and probability clustering,” Electronics Letters, vol. 47, no. 10, pp. 585–586, 2011.
[24] T. Kohonen, Self-Organizing Maps, Springer, 2001.
[25] E. Arsuaga and F. Díaz, “Topology preservation in SOM,” International Journal of Mathematical and Computer Sciences, vol. 1, no. 1, pp. 19–22, 2005.
[26] K. Taşdemir and E. Merényi, “Exploiting data topology in visualization and clustering of self-organizing maps,” IEEE Transactions on Neural Networks, vol. 20, no. 4, pp. 549–562, 2009.
[27] E. Alhoniemi, J. Himberg, J. Parhankangas, and J. Vesanto, “SOM Toolbox for Matlab v2.0,” 2005, http://www.cis.hut.fi/projects/somtoolbox.
[28] M. O. Stitson, J. A. E. Weston, A. Gammerman, V. Vovk, and V. Vapnik, “Theory of support vector machines,” Tech. Rep. CSD-TR-96-17, Department of Computer Science, Royal Holloway College, University of London, 1996.
[29] M. Nixon and A. Aguado, Feature Extraction and Image Processing, Academic Press, 2008.
[30] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
[31] M. Hu, “Visual pattern recognition by moment invariants,” IRE Transactions on Information Theory, vol. 8, pp. 179–187, 1962.
[32] Internet Brain Segmentation Repository (IBSR), Massachusetts General Hospital, Center for Morphometric Analysis, 2010, http://www.cma.mgh.harvard.edu/ibsr/data.html.
[33] J. C. Rajapakse and F. Kruggel, “Segmentation of MR images with intensity inhomogeneities,” Image and Vision Computing, vol. 16, no. 3, pp. 165–180, 1998.
[34] J. L. Marroquin, B. C. Vemuri, S. Botello, F. Calderon, and A. Fernandez-Bouzas, “An accurate and efficient Bayesian method for automatic segmentation of brain MRI,” IEEE Transactions on Medical Imaging, vol. 21, no. 8, pp. 934–945, 2002.
[35] J. C. Bezdek, L. O. Hall, and L. P. Clarke, “Review of MR image segmentation techniques using pattern recognition,” Medical Physics, vol. 20, no. 4, pp. 1033–1048, 1993.
[36] L. P. Clarke, R. P. Velthuizen, M. A. Camacho, et al., “MRI segmentation: methods and applications,” Magnetic Resonance Imaging, vol. 13, no. 3, pp. 343–368, 1995.
[37] C. T. Su and H. C. Lin, “Applying electromagnetism-like mechanism for feature selection,” Information Sciences, vol. 181, no. 5, pp. 972–986, 2011.
[38] K. Tan, E. Khor, and T. Lee, Multiobjective Evolutionary Algorithms and Applications, Springer, 1st edition, 2005.
[39] T. Tasdizen, S. P. Awate, R. T. Whitaker, and N. L. Foster, “MRI tissue classification with neighborhood statistics: a nonparametric, entropy-minimizing approach,” in Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI '05), 2005.
[40] I. Usman and A. Khan, “BCH coding and intelligent watermark embedding: employing both frequency and strength selection,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 332–343, 2010.
[41] Y. Wang, T. Adali, S. Y. Kung, and Z. Szabo, “Quantification and segmentation of brain tissues from MR images: a probabilistic neural network approach,” IEEE Transactions on Image Processing, vol. 7, no. 8, pp. 1165–1181, 1998.
