International Journal of Plant Genomics
, Volume 2011 (2011) – Dec 20, 2011

- Publisher
- Hindawi Publishing Corporation
- Copyright
- Copyright © 2011 Julio A. Di Rienzo et al.
- ISSN
- 1687-5370
- eISSN
- 1687-5389

How to Group Genes according to Expression Profiles?

International Journal of Plant Genomics, Volume 2011 (2011), Article ID 261975, 10 pages. doi:10.1155/2011/261975

Methodology Report

Julio A. Di Rienzo,1 Silvia G. Valdano,2 and Paula Fernández3
1 Estadistica y Biometría, Universidad Nacional de Córdoba, 5000 Córdoba, Argentina
2 Departamento de Ciencias Naturales, Universidad Nacional de Río Cuarto, 5800 Río Cuarto, Argentina
3 Instituto de Biotecnología, INTA-Castelar, 1712 Castelar, Argentina

Received 27 June 2011; Revised 7 October 2011; Accepted 3 November 2011

Academic Editor: Manuel Talon

Copyright © 2011 Julio A. Di Rienzo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The most commonly applied strategies for identifying genes with a common response profile are based on clustering algorithms. These methods have no explicit rules for defining the appropriate number of groups of genes. Usually the number of clusters is decided on heuristic criteria or through the application of one of the various methods proposed for assessing the number of clusters in a dataset.
The purpose of this paper is to compare the performance of seven of these techniques, including both traditional ones and some proposed more recently. All of them underestimate the true number of clusters. Within this limitation, however, the gDGC algorithm appears to be the best: it is the only one that states an explicit rule for cutting a dendrogram, derived from a hypothesis-testing framework, which allows the user to calibrate its sensitivity by adjusting the significance level.

1. Introduction

One of the main purposes of microarray experiments is to discover genes having differential expression levels among a set of treatment conditions. Once the set of "candidate" genes is obtained, the problem of identifying those having a common response profile across experimental conditions remains open [1-3]. There are several strategies for this. One is the exploration of gene ontology; the others, more commonly applied, are based on unsupervised classification algorithms (cluster analysis). The main purpose of clustering techniques is to arrange a number of instances into meaningful groups. Hierarchical clustering methods not only group genes but also trace their relationships. The outcome of hierarchical methods is displayed as a binary tree called a dendrogram. A key point in interpreting a dendrogram is deciding where to cut it. This decision is equivalent to determining the number of clusters in the dataset. The problem is to recognize which instances belong to different groups and which merely seem different as a result of sampling error. Several general-purpose methods have been proposed to estimate the optimal number of clusters in a dataset. The most popular are those introduced by Calinski and Harabasz [4], Hartigan [5], Sarle [6], and Kaufman and Rousseeuw [7]. Tibshirani et al. [8] proposed the Gap statistic as a method for assessing the number of clusters in a dataset.
It compares the log of the within-cluster sum of squares against its expected value under a suitable null distribution. The authors exemplified its application by discovering groups in a hierarchical clustering of genes from a microarray experiment. Another method, developed in the framework of large-scale gene-expression studies, is the Hierarchical Ordered Partitioning and Collapsing Hybrid (HOPACH) algorithm [2]. It builds a hierarchical tree of clusters and was developed for the purpose of discovering patterns within a hierarchical structure. A different approach to the grouping problem is to view the data as a sample from a mixture of populations. The classification is then the result of mixture-model estimation and selection that takes into account not only the number of populations present in the sample and their location parameters but also their dispersion and correlation parameters. A representative procedure of this kind is the MClust algorithm of Fraley and Raftery [9, 10], which is based on modeling a multivariate normal mixture. Every method mentioned above assumes that each instance is represented by a p-variate vector of attributes. In microarray experiments the genes are the instances, and their expressions, observed under the contrasting experimental conditions, form the p-variate vector of attributes. In this type of experiment there are usually several biological replicates for each experimental condition, yet none of the methods mentioned above makes explicit use of those replicates. Valdano and Di Rienzo [11] proposed a multivariate generalization (gDGC) of a univariate pairwise comparison procedure [12] which uses the replicates to estimate the cutting point of a dendrogram generated by a given linkage algorithm. In this way the procedure generates a partition within a hierarchical structure, a desirable property in the analysis of microarray data.
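To make the dendrogram-cutting idea concrete, here is a minimal pure-Python sketch of average-linkage (UPGMA) agglomeration on toy one-dimensional data; cutting the resulting merge sequence at a height threshold determines the number of clusters. The data and the cutting height are illustrative only, not taken from the paper.

```python
# Minimal average-linkage (UPGMA) agglomeration on toy 1-D data.
# Cutting the dendrogram at height h keeps every merge below h, so the
# number of clusters is 1 + (number of merges at height >= h).

def upgma(points):
    """Return the sequence of merge heights (average inter-cluster distance)."""
    clusters = [[p] for p in points]
    heights = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average pairwise distance between the two clusters.
                d = sum(abs(a - b) for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        heights.append(d)
        merged = clusters[i] + clusters[j]
        clusters = [c for m, c in enumerate(clusters) if m not in (i, j)]
        clusters.append(merged)
    return heights

def n_clusters(heights, cut):
    """Number of clusters obtained by cutting the dendrogram at `cut`."""
    return 1 + sum(1 for h in heights if h >= cut)

points = [0.0, 0.1, 0.2, 5.0, 5.1, 9.8, 10.0]
heights = upgma(points)
print(n_clusters(heights, cut=2.0))  # three well-separated groups -> 3
```

Choosing `cut` is exactly the open problem the paper addresses: every method compared below is, in effect, a rule for picking this threshold (or, equivalently, the number of clusters).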
Considering the reviews of Tibshirani et al. [8], Lee et al. [1], Pollard and van der Laan [2], and Gentleman and Carey [3] regarding the problem of assessing the number of clusters in a dataset, we focused our comparison on the last four methods mentioned above for estimating the number of clusters in a gene-expression matrix, and included the general-purpose methods as references. The comparison was done under the set of scenarios described in the following sections.

2. Methods

Let the dataset {x_ij}, i = 1, ..., n, j = 1, ..., p consist of p features measured on n independent observations; that is, X is an n × p data matrix. Suppose that we have grouped the data into k clusters. Let Z be a cluster indicator matrix (z_ir = 1 if the i-th observation belongs to the r-th cluster, else z_ir = 0; i = 1, ..., n; r = 1, ..., k) and C a k × p matrix of cluster means:

$$\mathbf{C} = (\mathbf{Z}^{\top}\mathbf{Z})^{-1}\mathbf{Z}^{\top}\mathbf{X}. \quad (1)$$

Then the pooled within-cluster sum-of-squares matrix is

$$\mathbf{W} = (\mathbf{X} - \mathbf{Z}\mathbf{C})^{\top}(\mathbf{X} - \mathbf{Z}\mathbf{C}), \quad (2)$$

and the between-cluster sum-of-squares matrix is

$$\mathbf{B} = \mathbf{C}^{\top}\mathbf{Z}^{\top}\mathbf{Z}\mathbf{C}. \quad (3)$$

The within-cluster and between-cluster sums of squares pooled over variables, for a given number k of clusters, are, respectively,

$$W(k) = \operatorname{trace}(\mathbf{W}), \qquad B(k) = \operatorname{trace}(\mathbf{B}). \quad (4)$$

The following methods, proposed to estimate the optimum number of clusters in a dataset, are compared. They are identified by the name of the algorithm that implements them or by the initials of their authors.

CH. Calinski and Harabasz [4] based the selection of the number of clusters on the maximization of the ratio of between- to within-cluster sums of squares. The criterion is to choose the k that maximizes CH(k):

$$\mathrm{CH}(k) = \frac{B(k)/(k-1)}{W(k)/(n-k)}. \quad (5)$$

H.
Hartigan [5] used the ratio between the within-cluster sums of squares for k and (k + 1) clusters, suggesting the selection of the smallest k ≥ 1 for which the statistic is less than 10:

$$H(k) = \left[\frac{W(k)}{W(k+1)} - 1\right](n - k - 1). \quad (6)$$

CCC. Sarle [6] introduced the cubic clustering criterion, based on a scaled version of log[(1 − E(R²))/(1 − R²)], where R² is the proportion of variance accounted for by the clusters and E(R²) is its expected value assuming that the data are uniformly distributed on a hypercube:

$$\mathrm{CCC} = \log\!\left[\frac{1 - E(R^{2})}{1 - R^{2}}\right] \cdot \frac{\sqrt{np^{*}/2}}{\left(0.001 + E(R^{2})\right)^{1.2}}, \quad (7)$$

where p* is the dimensionality of the between-cluster variation. The criterion is to select the number of clusters that maximizes CCC. A maximum CCC value smaller than 2 indicates that there is no evidence of clusters in the dataset.

CCCm. We also included a modified version of CCC in which the expected value of R² is calculated from a null distribution defined as uniform over a box aligned with the principal components of the data, as proposed by Tibshirani et al. [8] for the Gap statistic.

Silh. Kaufman and Rousseeuw [7] introduced the Silhouette statistic, a measurement calculated for each observation from the standardized difference between a(i), the average distance of the i-th observation to the others in the same cluster, and b(i), its average distance to the observations in the nearest cluster:

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}}. \quad (8)$$

They proposed choosing the number of clusters that maximizes the average silhouette. This criterion is implemented in the function silcheck of the hopach R package [13].

Gap. Tibshirani et al. [8] used the Gap statistic for estimating the number of clusters in a dataset.
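The index-based criteria above can be made concrete with a small sketch. The following pure-Python code evaluates CH(k) from (5) and the average silhouette from (8) for two fixed partitions of toy one-dimensional data; both criteria prefer the partition with the higher score. The data and partitions are illustrative, not from the paper.

```python
# Toy evaluation of the Calinski-Harabasz index CH(k) and the average
# silhouette for fixed 1-D partitions.  Both criteria choose the
# partition (i.e., the k) that maximizes the score.

def ch_index(clusters):
    """CH(k) = [B(k)/(k-1)] / [W(k)/(n-k)] for a list of 1-D clusters."""
    n = sum(len(c) for c in clusters)
    k = len(clusters)
    grand = sum(sum(c) for c in clusters) / n
    B = sum(len(c) * (sum(c) / len(c) - grand) ** 2 for c in clusters)
    W = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters)
    return (B / (k - 1)) / (W / (n - k))

def avg_silhouette(clusters):
    """Mean of s(i) = (b(i) - a(i)) / max(a(i), b(i)) over all points."""
    scores = []
    for ci, c in enumerate(clusters):
        for x in c:
            # |x - x| = 0, so summing over the whole cluster is harmless.
            a = sum(abs(x - y) for y in c) / (len(c) - 1) if len(c) > 1 else 0.0
            b = min(sum(abs(x - y) for y in other) / len(other)
                    for cj, other in enumerate(clusters) if cj != ci)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

data = [[0.0, 0.2, 0.4], [5.0, 5.2, 5.4]]     # two well-separated groups
split = [[0.0, 0.2], [0.4], [5.0, 5.2, 5.4]]  # same data over-split into 3
print(ch_index(data) > ch_index(split))              # True: k = 2 wins
print(avg_silhouette(data) > avg_silhouette(split))  # True under Silh too
```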
The Gap statistic compares the log of the within-cluster sum of squares against its expected value under a suitable null distribution for the dataset:

$$\mathrm{gap}(k) = E\left[\log(W(k))\right] - \log(W(k)). \quad (9)$$

The criterion is to select the smallest k such that gap(k) ≥ gap(k + 1) − s(k + 1), where s(k + 1) is the standard deviation of the predicted gap when the number of clusters is (k + 1). For the calculation of E[log(W(k))], Tibshirani et al. [8] generate a null distribution that is uniform over a box aligned with the principal components of the data. The authors argued that generating the data this way takes into account the shape of the distribution of the original observations and makes the procedure rotationally invariant, as long as the clustering method itself is invariant.

HOPACH. Pollard and van der Laan [2] proposed another application of the Silhouette statistic: the Hierarchical Ordered Partitioning and Collapsing Hybrid (HOPACH) procedure, which iteratively applies a partitioning algorithm to produce a hierarchical tree of clusters. It is implemented in the hopach function of the hopach R package [13]. At each node, a cluster is partitioned into two or more smaller clusters and, before the next partitioning step, any similar clusters are merged. The algorithm estimates the optimal number of clusters with the median split silhouette criterion. The function can be called with either a data frame or a distance matrix. We tried both: passing the data frame of average gene expressions (HOPACHc) and the Mahalanobis distance matrix (HOPACHm). All other arguments were left at their default settings.

MClust. Fraley and Raftery [9, 10] proposed a clustering method based on the assumption that the dataset is a sample from a multivariate normal mixture.
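The Gap criterion in (9) can be sketched end to end in one dimension. In the sketch below, the clustering step is replaced by splitting the sorted sample at the widest gaps, and the reference null is uniform over the observed range rather than the principal-component-aligned box of Tibshirani et al.; both are simplifications of ours, not the paper's setup.

```python
import math
import random
import statistics

# Illustrative 1-D sketch of the Gap criterion: choose the smallest k
# with gap(k) >= gap(k + 1) - s(k + 1).

def within_ss(data, k):
    """Pooled within-cluster SS, W(k), clustering sorted 1-D data by
    splitting it at the k - 1 widest adjacent gaps (a cheap stand-in
    for a real clustering algorithm, valid only in one dimension)."""
    xs = sorted(data)
    order = sorted(range(len(xs) - 1), key=lambda i: xs[i + 1] - xs[i])
    cuts = sorted(order[len(xs) - k:])        # empty when k == 1
    W, start = 0.0, 0
    for end in cuts + [len(xs) - 1]:
        c = xs[start:end + 1]
        m = sum(c) / len(c)
        W += sum((x - m) ** 2 for x in c)
        start = end + 1
    return W

def gap_stat(data, k, n_ref=50, rng=None):
    """gap(k) and the dispersion s(k) of the reference log(W(k))."""
    rng = rng or random.Random(1)
    lo, hi = min(data), max(data)
    logs = [math.log(within_ss([rng.uniform(lo, hi) for _ in data], k))
            for _ in range(n_ref)]
    g = statistics.mean(logs) - math.log(within_ss(data, k))
    s = statistics.stdev(logs) * math.sqrt(1 + 1 / n_ref)
    return g, s

def choose_k(data, kmax=4):
    """Smallest k satisfying gap(k) >= gap(k + 1) - s(k + 1)."""
    rng = random.Random(1)
    stats = [gap_stat(data, k, rng=rng) for k in range(1, kmax + 1)]
    for k in range(1, kmax):
        if stats[k - 1][0] >= stats[k][0] - stats[k][1]:
            return k
    return kmax

data = [0.0, 0.01, 0.02, 0.03, 10.0, 10.01, 10.02, 10.03]
print(choose_k(data))  # two tight, well-separated groups -> 2
```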
The method fits a number of models for different numbers of populations, differing not only in location parameters but also in the variance-covariance matrix within a set of plausible simplified correlation structures. The model selection rule is based on the Bayesian Information Criterion. The estimated number of clusters in the data is part of the output of this method. The input is the matrix of average expression levels for each gene (rows) under the different experimental conditions (columns). The routine is implemented in R (mclust R package). Within our simulation it was called with no arguments other than the data frame.

gDGC. Valdano and Di Rienzo [11] calculated a cutting point for a dendrogram, generated by a given linkage algorithm, based on the null distribution of the root node of the binary tree produced by the clustering procedure. The node at which two mean vectors, or clusters of them, join has an associated measure corresponding to the distance, calculated according to the linkage algorithm, between the mean vectors or clusters that the node joins. The node at which all mean vectors join to form a unique cluster is the root node. In the UPGMA algorithm, if S_M and S_L are two different clusters, the distance between them is defined as follows:

$$q(S_{M}, S_{L}) = \frac{1}{\#S_{M}\,\#S_{L}} \sum_{\mathbf{y}_{i} \in S_{M}} \sum_{\mathbf{y}_{j} \in S_{L}} D_{ij}, \quad (10)$$

where D_ij is the square root of the Mahalanobis distance. If S_M and S_L are coincident, then q(S_M, S_M) = 0. The smallest value of D_ij corresponds to the pair of most similar mean vectors, and the node so formed lies at a distance q_1 from the origin. The next distance, q_2, is associated with the next node, which can join two different mean vectors, or the previously formed cluster and another mean vector. At the end of the clustering algorithm, the last union lies at distance q_{k−1}, referred to as the distance to the root node (Figure 1).
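Equation (10) amounts to averaging the pairwise distances between the members of the two clusters. The small numeric example below uses an identity pooled covariance, so D_ij reduces to Euclidean distance; with a general covariance it would be the Mahalanobis form. The vectors are made up for illustration.

```python
import math

# Equation (10): average-linkage distance q(S_M, S_L) between two
# clusters of mean vectors, as the mean of the pairwise distances D_ij.
# Identity covariance assumed, so D_ij is plain Euclidean distance here.

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def q(S_M, S_L):
    total = sum(dist(y_i, y_j) for y_i in S_M for y_j in S_L)
    return total / (len(S_M) * len(S_L))

S_M = [(0.0, 0.0), (0.0, 2.0)]   # cluster containing two mean vectors
S_L = [(3.0, 0.0)]               # singleton cluster
print(q(S_M, S_L))               # (3 + sqrt(13)) / 2
```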
This distance can be seen as a realization of a random variable Q. The (1 − α)-quantile of its distribution under the null hypothesis of equal population mean vectors can be used to construct a test of size α. Given Q_{1−α} as the α-level critical value, any Q ≥ Q_{1−α} leads to the rejection of the null hypothesis. An R routine that calculates critical points of the null distribution of Q is freely available at http://agro.uncor.edu/~estad/gDGCQ.r. A friendly implementation of gDGC for application to a gene-expression matrix can be found in the free software fgStatistics (http://sites.google.com/site/fgstatistics/).

Figure 1: Dendrogram showing the relationships among mean vectors. The cut-off criterion obtained with the gDGC test, Q_{1−α}, is indicated with a dotted line. At the bottom of the figure, different letters identify groups whose population centroids differ statistically at significance level α.

3. Simulated Data

The primary output of a microarray experiment is the gene-expression matrix (GEM). It is composed of G rows and H columns, where G is the number of "genes" evaluated and H is the number of microarrays (treatments × replicates) used in the experiment. Usually G is much larger than H and ranges from hundreds to tens of thousands. Candidate genes are those that are differentially expressed among "treatments". The set of candidate genes is smaller than the original set of genes, typically tens to hundreds of genes. This drastic reduction in the number of genes relies on the assumption that most genes remain unchanged under the experimental conditions contrasted.
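The quantile-based decision rule behind gDGC (reject equality when the observed root-node distance Q reaches the simulated critical value Q_{1−α}) can be sketched generically. The null statistic below, the range of five simulated group means, is only a stand-in for the actual root-node distance, and the observed value is hypothetical; the sketch illustrates the Monte Carlo cutoff, not the paper's routine.

```python
import random
import statistics

# Sketch of a quantile-based cutoff: estimate Q_{1-alpha} as the
# empirical (1 - alpha)-quantile of a simulated null distribution, then
# reject equality of mean vectors whenever the observed statistic Q is
# at least that large.

def critical_value(stat, n_sim=2000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    draws = sorted(stat(rng) for _ in range(n_sim))
    return draws[int((1 - alpha) * n_sim)]   # empirical (1 - alpha)-quantile

def null_root_distance(rng):
    # Five group means, each averaging 3 replicates drawn from the SAME
    # N(0, 1) population, i.e., the null of equal population mean vectors.
    means = [statistics.mean(rng.gauss(0.0, 1.0) for _ in range(3))
             for _ in range(5)]
    return max(means) - min(means)          # stand-in for the root height

q_crit = critical_value(null_root_distance)
observed = 4.2             # hypothetical observed root-node distance
print(observed >= q_crit)  # True -> reject equality; cut below this node
```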
To simulate the candidate-gene-expression matrices we considered two scenarios for the number of differentially expressed genes (100 and 300), two levels for the number of clusters of genes with a similar profile among treatments (2 and 10), two levels for the number of treatment conditions (3 and 5), and two levels for the number of replicates (3 and 6). These numbers of genes, clusters, treatments, and replicates are not intended to cover all possibilities, but rather common cases in microarray experiments. Combining the numbers of differentially expressed genes (2), clusters (2), treatments (2), and replicates (2), 16 scenarios were considered. For each scenario, 10 simulated candidate-gene-expression matrices (sGEM) were randomly generated. Each sGEM was generated from the GEM of a self-self cDNA-microarray experiment dataset [14] according to Algorithm 1, described in the appendix. The algorithm relies on the availability of a residual gene-expression matrix (rGEM). This residual matrix was obtained from the GEM of the self-self experiment (s-sGEM) by centering by rows and columns and adding to each entry the mean of all entries. This way of obtaining an rGEM assumes that each entry Y_ij of the s-sGEM can be modeled as

$$Y_{ij} = \mu + g_{i} + m_{j} + \varepsilon_{ij}, \quad i = 1, \ldots, G,\; j = 1, \ldots, H,$$

where μ is a common mean, g_i is the effect of the i-th gene, m_j is the j-th microarray's effect, and ε_ij is a random error with zero mean. The resulting rGEM was a 3830 (rows) by 10 (columns) matrix and is available at the following link: https://docs.google.com/leaf?id=0BxMg4dIPlsq7MzhhMGNjNzMtNGUwYS00NmYzLWI0NDctZjZlNTFiYTEzYWZm&hl=es. Clusters were generated by randomly allocating the number of genes belonging to each cluster with Algorithm 2, described in the appendix. A randomly generated profile of treatment effects, scaled by the common within standard deviation of each gene, was added to every gene in the same cluster.
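The double-centering step that produces the rGEM (center by rows and columns, then add back the grand mean) can be sketched on a toy matrix; the 3 × 2 matrix below is invented for clarity.

```python
# Sketch of the rGEM construction: remove row (gene) and column
# (microarray) effects by double-centering, then add back the grand
# mean.  After this, every row and every column of the result averages
# to the grand mean of the input, leaving only residual variation.

def residual_gem(Y):
    G, H = len(Y), len(Y[0])
    grand = sum(map(sum, Y)) / (G * H)
    row_m = [sum(row) / H for row in Y]
    col_m = [sum(Y[i][j] for i in range(G)) / G for j in range(H)]
    # Y - row mean - column mean + grand mean double-centers the matrix
    # (zero mean); the second grand-mean term adds the overall level back.
    return [[Y[i][j] - row_m[i] - col_m[j] + 2 * grand for j in range(H)]
            for i in range(G)]

Y = [[1.0, 2.0],
     [3.0, 5.0],
     [2.0, 2.0]]
R = residual_gem(Y)
print(R[0])  # [2.5, 2.5]: the first row carries no residual variation
```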
The nonscaled profile was generated uniformly between −3 and 3. In this way, differences among treatment means ranged between −3 and 3 times the common within standard deviation for a given gene. To summarize the performance of the methods in assessing the optimum number of clusters, a linear model was fitted to the difference between the estimated and the true number of clusters in the dataset. Hereafter we will refer to this difference as the bias. The factors included in the model were the method used to estimate the number of clusters (M), the true number of clusters (k), the number of genes (G), the number of treatments (T), and the number of replicates (N). Because each method was applied to the same simulated data, a dataset effect was included in the model as a random effect. Owing to the number of terms involved in the fitted model (main effects and their interactions), the Benjamini-Hochberg algorithm [15] was applied to adjust the raw P values in order to control the false discovery rate. The significance level was 0.05. For the significant terms of the model, confidence intervals were calculated for their marginal means. The mixed model was fitted using the lme function (nlme R package).

4. Results

All the methods compared in this study (except MClust) can be applied to the same distance matrix used by the clustering algorithm. Because the gDGC method uses the Mahalanobis distance to measure the dissimilarity between mean vectors (genes), we decided to base our comparison on this matrix. The Mahalanobis distance is a convenient metric because it takes into account the variances and covariances of the attributes. The covariance matrix used to calculate the Mahalanobis distance is the common (pooled) within-gene covariance matrix. The different linkage algorithms are analyzed separately: first we present results for average linkage, then results for complete linkage.
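The Benjamini-Hochberg step-up adjustment mentioned in the previous section can be sketched compactly; the raw P values below are illustrative, not the model's.

```python
# Benjamini-Hochberg step-up adjustment of a vector of raw p-values.
# Walking from the largest to the smallest p enforces monotonicity of
# the adjusted values; terms with adjusted p <= alpha are declared
# significant while controlling the false discovery rate at alpha.

def bh_adjust(pvals):
    """Return BH-adjusted p-values for `pvals`."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)   # p * m / rank, monotone
        adjusted[i] = prev
    return adjusted

raw = [0.001, 0.008, 0.039, 0.041, 0.60]        # illustrative raw p-values
adj = bh_adjust(raw)
print([p <= 0.05 for p in adj])  # [True, True, False, False, False]
```

Note that the two raw P values near 0.04, nominally significant at 0.05, no longer survive after adjustment; this is exactly the protection against false discoveries that the model-term screening relies on.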
Ward's algorithm was also included in the comparison, but its results are not shown because of its poor performance. Although MClust does not depend on the linkage algorithm, it appears in the comparisons under both the Average linkage and Complete linkage subheadings.

4.1. Average Linkage

Table 1 summarizes the ANOVA table for the fitted model when the true number of clusters (k) in the dataset is 2 and 10. In both cases, the best model included a variance function to account for the fact that the residual variance was much greater for HOPACH than for the other procedures. The residual standard deviation of HOPACHm and HOPACHc was around 10 (k = 2) to 12 (k = 10) times the common standard deviation of the other procedures.

Table 1: Summarized ANOVA table for the terms of the linear model fitted to the bias (estimated minus true number of clusters, k, in the gene-expression matrix). Results are shown for k = 2 and k = 10. Clustering algorithm: average linkage.

Table 1 shows evidence of differences in mean bias among the methods compared. These differences do not depend on other factors when k = 2. Table 2 summarizes the performance of the methods when k = 2. It shows that, regardless of the input to the HOPACH method (Mahalanobis distance matrix or mean GEM), it produces the highest bias, about seven clusters above the true value. On the other hand, only CCCm, CCC, gDGC, and MClust had confidence intervals compatible with the unbiasedness hypothesis.

Table 2: Estimated mean, standard error, and lower (LB) and upper (UB) boundaries of a 95% confidence interval for the bias of each method applied to the estimation of the number of clusters in the simulated datasets. True number of clusters: k = 2. Clustering algorithm: average linkage.

When the case of a moderate number of clusters (k = 10) is considered, the performance of the methods depends on the number of treatments.
Table 3 shows the mean bias by method, grouped according to the number of treatments. The ranking of the methods is almost the same for T = 3 and T = 5. HOPACH overestimated, whereas all other methods underestimated, the number of clusters. However, as the number of treatments increases (T = 5), a differentiation in favour of gDGC and MClust becomes apparent.

Table 3: Estimated mean, standard error, and lower (LB) and upper (UB) boundaries of a 95% confidence interval for the bias of each combination of method (M) and number of treatments (T). Mean biases are sorted in descending order within each level of T. True number of clusters: k = 10. Clustering algorithm: average linkage.

4.2. Complete Linkage

Table 4 summarizes the ANOVA table for the fitted model when the true number of clusters (k) in the dataset is 2 and 10. In both cases the best model included a variance function to account for the fact that the residual variance was much greater for HOPACH than for the other procedures. The residual standard deviation of HOPACHm and HOPACHc was around 10 (k = 2) to 13 (k = 10) times the common standard deviation of the other procedures.

Table 4: Summarized ANOVA table for the terms of the linear model fitted to the bias (estimated minus true number of clusters, k, in the gene-expression matrix). Results are shown for k = 2 and k = 10. Clustering algorithm: complete linkage.

When the true number of clusters in the dataset was 2, the highest-order interaction terms including method were M:G and M:T. The means and 95% confidence intervals for the bias for each combination of method and number of genes, and of method and number of treatments, are shown in Tables 5 and 6, respectively.

Table 5: Estimated means, standard errors, and lower (LB) and upper (UB) boundaries of a 95% confidence interval for the bias of each combination of method (M) and number of genes (G). The table is sorted in descending order of bias within each level of G. True number of clusters: k = 2.
Clustering algorithm: complete linkage.

Table 6: Estimated mean, standard error, and lower (LB) and upper (UB) boundaries of a 95% confidence interval for the bias of each combination of method (M) and number of treatments (T). The table is sorted in descending order of bias within each level of T. True number of clusters: k = 2. Clustering algorithm: complete linkage.

As a general remark, the increase in G is followed by a decrease in the bias. However, there are important differences among methods depending on G. For G = 100, the methods producing estimates compatible with the unbiasedness hypothesis were CCC, CCCm, gDGC, and MClust. For G = 300, they were HOPACHc, CH, gDGC, Silh, CCC, CCCm, and MClust. Considering the results shown in Table 6, the increase in the number of treatments is likewise followed by a decrease in bias. As in the previous case, the performance of the methods differs depending on T. However, regardless of T, HOPACH always overestimated the number of clusters. The best-performing methods were Silh, CCC, CCCm, gDGC, and MClust; when T = 5, this list is augmented with CH. When the true number of clusters in the dataset was 10, the highest-order interaction term including method was M:G:T, which logically subsumes the also significant M:G and M:T interaction terms. There was also a significant second-order interaction, M:N. The means and 95% confidence intervals for the bias for the combinations of method and number of replicates are shown in Table 7; the corresponding table for combinations of method, number of genes, and number of treatments is Table 8.

Table 7: Estimated mean, standard error, and lower (LB) and upper (UB) boundaries of a 95% confidence interval for the bias of each combination of method (M) and number of replicates (N). The table is sorted in descending order of bias within each level of N. True number of clusters: k = 10.
Clustering algorithm: complete linkage.

Table 8: Estimated mean, standard error, and lower (LB) and upper (UB) boundaries of a 95% confidence interval for the bias of each combination of method (M), number of genes (G), and number of treatments (T). The table is sorted in descending order of bias within each level of G and T. True number of clusters: k = 10. Clustering algorithm: complete linkage.

As a general remark, when the true number of clusters increases to a moderate number (k = 10), all methods underestimated the number of clusters except HOPACH, which consistently overestimated it. Although a significant interaction was found between method and number of replicates, Table 7 shows that the ordering of the methods does not change with N. Gap, H, gDGC, and MClust were the least negatively biased methods. To analyze the performance of the methods with respect to the M:G:T interaction term, Table 8 is divided into four blocks defined by the combinations of levels of G and T. Within these blocks, methods were sorted in descending order of bias. Across the four blocks, four methods always had the smallest bias: Gap, H, gDGC, and MClust. Although their order changes from block to block, the picture is the same. As in the other cases analyzed, HOPACH always overestimated, by far, the number of clusters in the dataset.

5. Discussion

Two scenarios were considered in this work: a very small true number of clusters and a moderate one. In the first case (k = 2), some methods estimated the true number of clusters quite well regardless of the linkage algorithm: CCC, CCCm, gDGC, and MClust. All other methods produced overestimates. Within this latter group, HOPACH (based on the Mahalanobis distance or on the average gene-expression matrix) was, by far, the most biased.
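The summary quantity used throughout these comparisons, the mean bias (estimated minus true k) with a 95% confidence interval for judging compatibility with unbiasedness, can be sketched on made-up per-dataset estimates; the numbers below are invented, not the paper's results, and a simple normal approximation replaces the mixed-model marginal means.

```python
import math
import statistics

# Sketch of the bias summary: mean of (estimated k - true k) across
# simulated datasets, with a normal-approximation 95% CI.  A CI entirely
# below zero indicates systematic underestimation; one covering zero is
# compatible with the unbiasedness hypothesis.

def bias_summary(estimates, true_k):
    diffs = [e - true_k for e in estimates]
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    half = 1.96 * se                      # normal approximation
    return mean, (mean - half, mean + half)

estimates = [7, 8, 6, 9, 7, 8, 6, 7, 8, 7]   # hypothetical k-hats, true k = 10
mean_bias, (lb, ub) = bias_summary(estimates, true_k=10)
print(round(mean_bias, 2), ub < 0)  # -2.7 True: consistent underestimation
```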
The case of a small number of clusters is not the most challenging situation, because most clustering methods find the global structure. Moreover, in most microarray experiments the number of clusters will be greater than two; the real problem is finding relatively small clusters in the presence of one or more larger clusters [2]. For a moderate number of clusters, such as 10, all methods gave negatively biased estimates of the number of clusters, except HOPACHm and HOPACHc, which were positively biased and very variable. The positive bias of HOPACH is consistent with the properties of the median split silhouette criterion (MSS), which was developed to be more "aggressive" in finding small, homogeneous clusters in large datasets [13]. Among the negatively biased methods, and for the scenarios simulated, the results for the average and complete linkage algorithms suggest that the least biased methods for assessing the number of clusters were MClust, gDGC, and Gap. Considering all the scenarios, however, two methods consistently appeared among the best: gDGC and MClust. One disadvantage of gDGC compared with MClust is that it relies on the availability of replicates. However, actual microarray applications always include biological replicates, so in this context that limitation is not a problem. Although gDGC is based on the null distribution of the root node of a binary tree generated by a hierarchical clustering algorithm, and MClust is based on modeling a multivariate normal mixture, the two are theoretically related: their null model is that there is just one multivariate normal population. For this reason, both can return a single cluster. When the null model fails, MClust assumes that the dataset is a mixture of samples from several multivariate normal populations differing in their mean vectors and possibly in their covariance matrices, with the number of populations being a parameter to estimate.
gDGC likewise assumes that if the dataset is not a sample from a unique multivariate population, then it is a mixture of samples from several multivariate normal populations. In contrast to MClust, gDGC makes the simplifying assumption of a common covariance matrix, as in MANOVA. Nonetheless, an advantage of gDGC is that it can drop the assumption of multivariate normality and resample from an empirically estimated null distribution, at additional computational cost. Another point in favour of gDGC is that it is tied to a dendrogram, a common way to illustrate relationships among genes (e.g., heatmaps). Thus gDGC not only estimates the number of groups of genes sharing an expression profile but also displays them through the intuitive idea of cutting a dendrogram, making its interpretation straightforward. Because the rule for cutting the dendrogram rests on a hypothesis-testing framework, the user can calibrate the power of the test by selecting the significance level of his or her choice. gDGC and MClust are computer-intensive methods, and users will have to face the time cost of their implementations. However, for common setups of the numbers of genes, replicates, and treatments and the linkage algorithm, the gDGC algorithm can be sped up by precomputing the appropriate percentile tables of the null distribution of its decision statistic. In summary, none of the methods compared in this study estimates the number of clusters in a gene-expression matrix without bias. Within the negatively biased methods, however, MClust and gDGC are the best choices.

Appendix

Algorithm A1. Generation of a simulated gene-expression matrix (sGEM):
(1) Initialize groups = number of treatments, genes = number of genes, replicates = number of replicates, and set the indexes g, r, and i to 0.
(2) Let i = i + 1.
(3) Randomly choose the index k that points to a row of the rGEM (residual matrix of gene expressions).
(4) Initialize j = 0.
(5) Let g = g + 1.
(6) Let r = r + 1.
(7) Randomly choose the index m that points to a column of the rGEM.
(8) Set j = j + 1; j points to the columns of the sGEM (simulated matrix of gene expressions).
(9) Let sGEM[i, j] = rGEM[k, m] × s[k] + m[k, 1] + E[i, g] × s[k] (E is the matrix of treatment effects).
(10) If r < replicates, go to (7); else if g < groups, go to (6); else if i < genes, go to (3); else end.

Algorithm A2. Genes belonging to each cluster are generated by randomly assigning rows of the rGEM according to the following algorithm:
(1) Let k be the number of clusters to construct, N the total number of candidate genes, and L a list of integers of size N indexing the genes.
(2) Initialize L with the sequence 1, ..., N.
(3) Order L at random.
(4) Select a set h of (k − 1) indexes between 1 and N. If any of these (k − 1) indexes coincide, draw another set of indexes.
(5) Sort h in ascending order.
(6) L[1] is the index of the first and L[h[1] − 1] the index of the last gene belonging to cluster 1, and so on, until L[h[k − 1]] is the index of the first and L[N] the index of the last gene belonging to cluster k.

Acknowledgments

This work was partially supported by FONCyT-PICT 2005 Grant 32905 and PE INTA Grant AEBIO5471. The authors are grateful to Dr. Ruth A. Heinz for critical reading of this manuscript.

References

J. W. Lee, J. B. Lee, M. Park, and S. H. Song, "An extensive comparison of recent classification tools applied to microarray data," Computational Statistics and Data Analysis, vol. 48, no. 4, pp. 869–885, 2005.
K. S. Pollard and M. J. van der Laan, "Cluster analysis of genomic data," in Bioinformatics and Computational Biology Solutions Using R and Bioconductor, R. Gentleman, V. Carey, W. Huber, R. Irizarry, and S. Dudoit, Eds., pp. 209–229, Springer, New York, NY, USA, 2005.

