An Improved Approach to Attribute Reduction with Ant Colony Optimization
Deng, Ting-quan; Wang, Xin-xia; Zhang, Yue-tong; Ma, Ming-hua
2010-06-01
Fuzzy Inf. Eng. (2010) 2: 145-155
DOI 10.1007/s12543-010-0042-9

ORIGINAL ARTICLE

An Improved Approach to Attribute Reduction with Ant Colony Optimization

Ting-quan Deng · Xin-xia Wang · Yue-tong Zhang · Ming-hua Ma

Received: 18 March 2009 / Revised: 21 April 2010 / Accepted: 7 May 2010
© Springer-Verlag Berlin Heidelberg and Fuzzy Information and Engineering Branch of the Operations Research Society of China 2010

Abstract: The attribute reduction problem (ARP) in rough set theory (RST) is NP-hard and is therefore difficult to solve with traditional analytical methods. In this paper, we propose an improved approach to ARP based on the ant colony optimization (ACO) algorithm, named improved ant colony optimization (IACO). In IACO, a new state transition probability formula and a new pheromone trail updating formula are developed in view of the differences between the traveling salesman problem and ARP. Experimental results demonstrate that IACO outperforms classical ACO as well as particle swarm optimization for attribute reduction.

Keywords: Ant colony optimization · Attribute reduction · Rough set theory

1. Introduction

The theory of rough sets, proposed by Pawlak [1], is an extension of set theory for studying intelligent information systems. Attribute reduction (AR) in RST aims to select a subset of attributes from the original attribute set while retaining the same classification ability as the original attributes, or preserving a suitably high classification accuracy. In the real world, AR is necessary because of the abundance of noisy, irrelevant, or misleading attributes. By removing irrelevant and redundant attributes, it helps to improve the quality and speed of learning algorithms and to enhance the comprehensibility of the constructed models [2].
Ting-quan Deng (corresponding author) · Ming-hua Ma
College of Science, Harbin Engineering University, Harbin, Heilongjiang 150001, P.R. China
email: Deng.tq@hrbeu.edu.cn

Xin-xia Wang
College of Science, Heilongjiang Institute of Science and Technology, Harbin, Heilongjiang 150027, P.R. China

Yue-tong Zhang
Huawei Nanjing Research Institution, Huawei Technologies Co. Ltd, Nanjing, Jiangsu 210012, P.R. China

AR is an important application of rough set theory in information systems and has been widely studied [3, 4, 5, 6, 7]. It is well known that AR is an NP-hard problem. Therefore, many research efforts have shifted to developing optimization algorithms for AR, such as genetic algorithms (GA) [8, 9, 10], simulated annealing (SA), ACO [11, 12, 13], and, more recently, particle swarm optimization (PSO) [14]. These algorithms can often obtain high-quality solutions.

The basic idea of ACO is as follows [15]. Guided by pheromone trails and problem-dependent heuristic information, a colony of agents, called (artificial) ants, searches the solution space of a problem. On the basis of the ants' traveling experience in the solution space, ACO updates the pheromone trails so as to increase the probability and the speed of constructing high-quality solutions. This paper improves ACO and applies the improved algorithm (IACO) to AR. We discuss the differences between the traveling salesman problem (TSP) and ARP. To make classical ACO more suitable for solving ARP, we introduce a new state transition probability formula and a new pheromone trail updating formula. Experimental results show that IACO performs better than ACO and PSO for attribute reduction on benchmark data sets.

In Section 2, the basic concepts of AR in rough set theory are introduced. In Section 3, an improved ACO is proposed by establishing a new state transition probability formula and a new pheromone trail updating rule.
Section 4 shows the experimental results. Conclusions are given in Section 5, and future work is outlined in Section 6.

2. Attribute Reduction in Rough Set Theory

Definition 1 The quadruple S = (U, A, V, f) is called a knowledge representation system if U is a nonempty finite set of objects, called the universe; A is a nonempty finite set of attributes; V = ∪_{a∈A} V_a, where V_a is the value set of attribute a; and f : U × A → V is an information function that assigns a value to every attribute of every object, i.e., ∀a ∈ A, x ∈ U, f(x, a) ∈ V_a.

A knowledge representation system is also called an information system. In brief, S = (U, A, V, f) is denoted by S = (U, A) when no confusion arises. Let S = (U, A) be an information system. If A = C ∪ D and C ∩ D = Ø, then S = (U, C ∪ D) is called a decision system, where C is called the condition attribute set and D the decision attribute set. A knowledge representation system with both condition and decision attribute sets is called a decision table.

Definition 2 Let S = (U, A) be an information system. For an attribute subset P ⊆ A, P ≠ Ø, there is an associated indiscernibility relation ind(P):

    ind(P) = {(x, y) ∈ U × U | ∀a ∈ P, f(x, a) = f(y, a)}.    (1)

Each indiscernibility relation is an equivalence relation. The equivalence class of ind(P) (i.e., the block of the partition induced by ind(P)) containing x is denoted by [x]_P. Actually,

    [x]_P = ∩_{R⊆P} [x]_R.    (2)

The family of all equivalence classes of ind(P), i.e., the partition of U determined by P, is denoted by U/P. The indiscernibility relation is the mathematical basis of rough set theory.

Definition 3 Let S = (U, A) be an information system and P ⊆ A. For every subset X ⊆ U, the two subsets

    P̲X = ∪{Y ∈ U/P | Y ⊆ X},    (3)
    P̄X = ∪{Y ∈ U/P | Y ∩ X ≠ Ø}    (4)

are called the P-lower approximation set and the P-upper approximation set of X, respectively.
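For illustration, the partition U/P of equation (1) and the approximation operators of equations (3) and (4) can be sketched in a few lines of Python. This is not the authors' code; the toy-table layout (objects as row indices, attributes as column keys) and function names are our own:

```python
# Sketch of Definitions 2 and 3 on a toy information system.
# `table` maps each object to a dict of attribute values.

def partition(U, P, table):
    """Group objects by their value vector on attribute subset P (eq. 1)."""
    blocks = {}
    for x in U:
        key = tuple(table[x][a] for a in sorted(P))
        blocks.setdefault(key, set()).add(x)
    return list(blocks.values())

def lower_approx(U, P, X, table):
    """P-lower approximation: union of blocks wholly inside X (eq. 3)."""
    return set().union(*[B for B in partition(U, P, table) if B <= X] or [set()])

def upper_approx(U, P, X, table):
    """P-upper approximation: union of blocks meeting X (eq. 4)."""
    return set().union(*[B for B in partition(U, P, table) if B & X] or [set()])
```

For example, with objects 0 and 1 indiscernible on attribute 'a', the block {0, 1} belongs to the lower approximation of a set X only if both objects lie in X.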
The P-lower approximation of X is the union of all equivalence classes of U/P that are totally included in X, while the P-upper approximation of X is the union of all equivalence classes of U/P that have nonempty intersection with X. The set bn_P(X) = P̄X − P̲X is called the P-rough boundary region of X; pos_P(X) = P̲X is called the P-positive region of X; and neg_P(X) = U − P̄X is called the P-negative region of X. Obviously, P̄X = pos_P(X) ∪ bn_P(X).

Let P, Q ⊆ A. The P-positive region of Q, pos_P(Q), is defined by

    pos_P(Q) = ∪_{X∈U/Q} P̲X.    (5)

Definition 4 Let P, Q ⊆ A. The dependency degree k of Q on P is defined by

    k = γ_P(Q) = |pos_P(Q)| / |U|,    (6)

where |X| denotes the cardinality of the set X.

Each attribute may take on a different significance degree. To get the significance degree of an attribute subset, we study how the classification changes when the subset is removed: if the change is remarkable, the significance degree is high; otherwise, it is low.

Definition 5 The significance degree of an attribute subset P' ⊆ P relative to Q is defined by

    σ(P') = γ_P(Q) − γ_{P−P'}(Q).    (7)

AR in RST extracts knowledge from an information system concisely in a filter-based way. It preserves the content of the information while reducing the number of attributes involved. A reduction of a decision system, based on the dependency degree, can be described by the following definition.

Definition 6 (Reduction) Let S = (U, A) be an information system. Suppose A can be divided into two nonempty sets C and D satisfying C ∩ D = Ø, and let C' ⊆ C. Then C' is said to be a reduction of C relative to D if

    γ_{C'}(D) = γ_C(D),    (8)

and

    ∀C'' ⊂ C', γ_{C''}(D) < γ_C(D).    (9)

In particular, a reduction with minimal cardinality is called a minimal reduction. AR aims to find a minimal reduction.
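The dependency degree of equation (6) and the significance degree of equation (7) can likewise be sketched directly from the definitions. This is a minimal illustrative example under the same assumed toy-table layout, not the authors' implementation:

```python
# Sketch of eqs. (5)-(7) on a toy decision table.

def blocks(U, P, table):
    """Partition U by the values of the attributes in P (eq. 1)."""
    out = {}
    for x in U:
        out.setdefault(tuple(table[x][a] for a in sorted(P)), set()).add(x)
    return list(out.values())

def dependency(U, P, Q, table):
    """gamma_P(Q) = |pos_P(Q)| / |U| (eqs. 5 and 6): the fraction of
    objects whose P-block lies wholly inside some Q-block."""
    pos = set()
    for X in blocks(U, Q, table):
        for B in blocks(U, P, table):
            if B <= X:
                pos |= B
    return len(pos) / len(U)

def significance(U, P_sub, P, Q, table):
    """sigma(P') = gamma_P(Q) - gamma_{P - P'}(Q) (eq. 7)."""
    return dependency(U, P, Q, table) - dependency(U, P - P_sub, Q, table)
```

With an empty remaining attribute set, all objects fall into a single block, so the dependency degree drops to 0 unless the decision partition is trivial; the significance of an attribute subset is then exactly the dependency it contributes.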
Its objective function is

    min_{C'∈Ω} |C'|,    (10)

where Ω is the set consisting of all reductions of C. In this paper, we refer only to relative reductions of condition attributes with respect to decision attributes in complete information systems.

3. Ant Colony Optimization and Improved Ant Colony Optimization

IACO, the improved ant colony optimization algorithm for AR, follows the classical ACO algorithm structure for static combinatorial optimization problems. The classical ACO algorithm performs as follows. At each cycle, guided by pheromone trails and heuristic information, every ant constructs a candidate solution. After all ants have constructed a solution, the pheromone trails are updated. The algorithm stops when the termination condition, a maximum number of cycles, is met.

In the following, the definitions of pheromone trails and heuristic information are discussed first. Then the construction of a candidate solution is described, and the pheromone updating strategy is presented afterwards. The differences between IACO and ACO for attribute reduction are shown in the first three subsections, through which one can see the shortcomings of ACO and the corresponding advantages of IACO. Finally, the algorithm IACO for AR is presented.

3.1. Pheromone Trails and Heuristic Information

ARP can be described as a complete graph G = (V, E), where V, the set of vertices, denotes the condition attribute set, and E denotes the set of edges fully connecting every two attributes. ARP is then modeled as the problem of finding a path such that the set of selected attributes is a reduction and the cardinality of this set is minimized [12].

Each edge is assigned pheromone trails and heuristic information. If the heuristic information on all edges is set to a constant, then the heuristic information does not function at all.
This case is identical to having no heuristic information. In practice, the pheromone trails and heuristic information on edge (i, j) are used to compute the probability of moving from vertex i to vertex j, or from vertex j to vertex i. In ACO for AR, the dependency degree of the decision attribute set D on the condition attribute subset {a_i, a_j} is selected as the heuristic information of edge (i, j), i.e., ∀a_i, a_j ∈ C, η_ij = |pos_{{a_i,a_j}}(D)| / |U|. In IACO, the heuristic information is instead based on the significance degree of each set containing two condition attributes, i.e., ∀a_i, a_j ∈ C, η_ij = σ({a_i, a_j}). The initial value of the pheromone trail on edge (i, j) is a constant.

3.2. Constructing Candidate Solutions

We now discuss the difference between ARP and TSP in order to show the shortcoming of ACO for AR and to motivate the state transition probability formula of IACO.

The tabu list tabu_k represents the vertex set that the kth ant has already selected, and allowed_k = V − tabu_k represents the vertex set that the ant may choose from in the next step. In TSP, an ant must travel around all cities, and its traveling order constitutes a solution. In ARP, however, once the selected vertex set C' of an ant satisfies the necessary condition of ARP, the ant stops traveling, and C' is a candidate solution. For example, in relative attribute reduction, if γ_{C'}(D) = γ_C(D), the ant stops traveling, and the candidate solution C' contains a reduction C'' of C.

In TSP, suppose the current node of the kth ant is i, its tabu list is tabu_k, and the next chosen node is j ∈ allowed_k; i.e., the kth ant chooses the next node starting only from the current node. In ARP, however, this is unnecessary. The main reason is that a reduction is just a subset of all attributes: it depends only on which attributes it contains, not on the permutation in which they were selected.
For example, the reduction {a_8, a_6, a_1} is equal to {a_1, a_6, a_8}.

ACO for attribute reduction, neglecting the above-mentioned characteristic of ARP, determines the state transition probability of the kth ant from vertex i to vertex j in the tth cycle as follows:

    p_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{s∈allowed_k} [τ_is(t)]^α [η_is]^β,  if j ∈ allowed_k;
    p_ij^k(t) = 0,  otherwise.    (11)

In IACO for attribute reduction, we take this characteristic into full consideration and revise the state transition probability of the kth ant toward vertex j in the tth cycle as follows:

    p_ij^k(t) = sup_{l∈tabu_k} {[τ_lj(t)]^α [η_lj]^β} / Σ_{s∈allowed_k} sup_{l∈tabu_k} {[τ_ls(t)]^α [η_ls]^β},  if j ∈ allowed_k;
    p_ij^k(t) = 0,  otherwise,    (12)

where τ_lj(t) and η_lj are the pheromone value and heuristic information value of edge {a_l, a_j}, respectively. α, called the information heuristic factor, and β, called the expectation heuristic factor, are two parameters that adjust the relative significance of pheromone trails and heuristic information.

This strategy makes the probability of searching around the optimal solution higher in IACO than in ACO for attribute reduction. The construction process stops once the condition

    γ_{C'}(D) = γ_C(D)    (13)

holds. Thus a candidate solution C' is constructed by the ant.

3.3. Pheromone Updating

In ACO for AR, only the current optimal candidate solution is chosen to update the pheromone trails, while in IACO all candidate solutions are used. After all ants have constructed a solution in the tth cycle, the pheromone trails are updated via the following rule:

    τ_ij(t+1) = (1−ρ) · τ_ij(t) + Δτ_ij(t),    (14)
    Δτ_ij(t) = Σ_{k=1}^m Δτ_ij^k(t),    (15)

where ρ denotes the pheromone volatility coefficient, and 1−ρ denotes the pheromone residual factor. To prevent unbounded accumulation of pheromone trails, ρ is confined to (0, 1].
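The two IACO rules, the sup-based transition probability of equation (12) and the evaporation-plus-deposit update of equations (14) and (15) with the per-ant increment I/|L_k(t)| of equation (16), can be sketched as follows. This is our own illustrative Python, not the authors' Matlab code; storing values per undirected edge under frozenset keys is an assumption of the data layout:

```python
# Sketch of the IACO transition and update rules. `tau` and `eta` map
# frozenset({u, v}) edge keys to positive floats.

def transition_probs(tabu, allowed, tau, eta, alpha, beta):
    """Eq. (12): each candidate j is scored by the best (sup) value of
    tau^alpha * eta^beta over edges from any already-selected vertex l
    in the tabu list, then the scores are normalized."""
    score = {}
    for j in allowed:
        score[j] = max(tau[frozenset({l, j})] ** alpha *
                       eta[frozenset({l, j})] ** beta for l in tabu)
    total = sum(score.values())
    return {j: s / total for j, s in score.items()}

def update_pheromone(tau, solutions, rho, I):
    """Eqs. (14)-(16): evaporate every edge by (1 - rho), then each
    ant's candidate solution L_k reinforces all edges between its own
    vertices by I / |L_k|."""
    new = {e: (1 - rho) * v for e, v in tau.items()}
    for L in solutions:
        L = list(L)
        for i in range(len(L)):
            for j in range(i + 1, len(L)):
                new[frozenset({L[i], L[j]})] += I / len(L)
    return new
```

Note how the sup over the whole tabu list removes the dependence on the current node i, reflecting that a reduction is a set rather than a path.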
Because a reduction is irrelevant to the permutation of the attributes, we also adjust Δτ_ij^k(t), the pheromone increment. Suppose the candidate solution constructed by the kth ant in the tth cycle is L_k(t); then

    Δτ_ij^k(t) = I / |L_k(t)|,  if i ∈ L_k(t) and j ∈ L_k(t);
    Δτ_ij^k(t) = 0,  otherwise,    (16)

where I, the pheromone intensity, is a constant.

This means that, in IACO, all the ants update the pheromone trails. Moreover, for each ant, the pheromone trails of the edges connecting every two different vertices in its solution are reinforced. This tactic efficiently prevents the algorithm from sinking into a locally optimal solution.

3.4. IACO for Attribute Reduction

The specific procedure of the algorithm IACO is presented as follows. In this algorithm, red denotes an optimal candidate solution, Nc indicates the cycle index, Nc_max is the maximum value of Nc, m stands for the total number of ants, and k is the index of the kth ant. A constant is denoted by const. The IACO algorithm can be described as follows:

Input: (U, C ∪ D, V, f)
Output: red
1: Ø → red, 0 → Nc, |C| → m, 0 → Δτ_ij(0)
2: τ_ij(0) = const for i ≠ j, i, j ≤ |C|
3: initialize every other parameter: α, β, ρ, I, and Nc_max
4: for i ≠ j, i, j ≤ |C|
5:   compute η_ij = σ({a_i, a_j})
6: end for
7: Nc + 1 → Nc
8: 0 → k
9: k + 1 → k
10: {a_k} → tabu_k(Nc)
11: the kth ant chooses the next condition attribute a_j on the basis of the state transition probability formula (12); tabu_k(Nc) ∪ {a_j} → tabu_k(Nc)
12: if γ_{tabu_k(Nc)}(D) ≠ γ_C(D)
13:   goto 11
14: else
15:   tabu_k(Nc) → red_k(Nc)
16:   return red_k(Nc)
17: end if
18: if k < m
19:   goto 9
20: else
21:   compute Δτ_ij^k(Nc−1) = I / |tabu_k(Nc−1)| if a_i ∈ tabu_k(Nc−1) and a_j ∈ tabu_k(Nc−1), and Δτ_ij^k(Nc−1) = 0 otherwise
22:   Δτ_ij(Nc−1) = Σ_{k=1}^m Δτ_ij^k(Nc−1)
23:   τ_ij(Nc) = (1−ρ) · τ_ij(Nc−1) + Δτ_ij(Nc−1)
24: end if
25: if Nc < Nc_max
26:   goto 7
27: else
28:   compute k_0 and Nc_0 such that |red_{k_0}(Nc_0)| = min_{Nc} min_k |red_k(Nc)|
29:   red_{k_0}(Nc_0) → red
30: end if
31: end

4. Experiments and Analysis

In the experiments, four standard data sets from the UCI repository [16], shown in Table 1, are used for testing our algorithm. The column 'Instances' indicates the number of instances; the columns 'Condition attributes' and 'Decision attributes' give the cardinalities of the condition and decision attribute sets, respectively.

Table 1: The original decision information systems.

Data set                  Instances  Condition attributes  Decision attributes
Dermatology               358        34                    1
Lung Cancer               32         56                    1
Promoter Gene Sequences   106        57                    1
SPECT Heart               267        22                    1

4.1. Algorithm Description and Parameter Settings

If ants select the next attribute strictly according to the maximum state transition probability, the candidate solutions they construct will contain more unimportant (reducible) attributes. So in this algorithm ants select the next attribute randomly according to the state transition probabilities.
First, by accumulating the transition probabilities of the kth ant from vertex i to all vertices j ∈ C − tabu_k step by step, a set of intervals is obtained that forms a partition of [0, 1]. Then a random number is generated to determine which attribute is selected and added to tabu_k. In addition, when the significance degree of two attributes is used as the heuristic information, its value may be equal or approximately equal to 0. To avoid this situation, a constant is added to all the values of heuristic information (m = |C|, the cardinality of the condition attribute set C).

IACO is implemented in Matlab. When there is no expectation heuristic information, η_ij is set to 1; when expectation heuristic information is used, η_ij varies with the significance degree of attributes a_i and a_j in the condition attribute set. The parameters take the following values: α = 0.9, β = 0.1, the initial pheromone value is 1, I = 1, the number of ants is m, and the maximum number of cycles is 3. These parameters were determined from a small number of preliminary runs.

4.2. Experimental Results and Analysis

For each data set, the number of attributes removed from the condition attribute set is listed in Table 2. The smallest cardinality obtained over 3 runs is given, with and without expectation heuristic information. In parentheses is the number of attributes removed from the condition attribute set based on the optimal candidate solutions.

Table 2: Experimental results of attribute reduction.

Data set                  Attributes  Without EHI  With EHI
Dermatology               34          26(26)       26(26)
Lung Cancer               56          50(50)       50(50)
Promoter Gene Sequences   57          52(52)       52(52)
SPECT Heart               22          4(5)         5(5)

Note: Without EHI - experimental results without expectation heuristic information; With EHI - experimental results with expectation heuristic information.
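The cumulative-interval (roulette-wheel) selection described in Section 4.1 can be sketched as follows; the function name and the pluggable random source are our own, for illustration only:

```python
import random

def roulette_select(probs, rng=random):
    """Roulette-wheel selection: the cumulative transition probabilities
    partition [0, 1], and a uniform random number picks the attribute
    whose interval it falls into. `probs` maps attribute -> probability
    (assumed to sum to 1)."""
    r = rng.random()
    acc = 0.0
    for attr, p in probs.items():
        acc += p
        if r <= acc:
            return attr
    return attr  # guard against floating-point shortfall in the sum
```

Sampling from the full distribution, rather than always taking the maximum-probability attribute, is what keeps occasionally-useful attributes reachable.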
Table 2 shows that the attributes are reduced to a great extent by the improved method (see also the comparison in Table 8). What is more, its optimal candidate solutions are almost always reductions of the condition attribute sets. In our algorithm, the optimal candidate solutions acquired with expectation heuristic information are better than those acquired without it, as illustrated in detail in the following tables.

The reduction results for all four data sets are listed in Tables 3-6. The numbers in each table are the cardinalities of candidate reductions of attributes. The (a) tables give results without heuristic information, and the (b) tables give results with heuristic information. In each table, the first column gives the run number, and the last three columns contain the smallest cardinality of the candidate reductions obtained at each cycle of that run.

Table 3(a): Reduction of Dermatology without heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   10       11       9
Run 2   9        9        8
Run 3   11       8        9

Table 3(b): Reduction of Dermatology with heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   8        8        9
Run 2   11       8        9
Run 3   11       8        9

Table 4(a): Reduction of Lung Cancer without heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   7        8        6
Run 2   8        6        6
Run 3   7        8        6

Table 4(b): Reduction of Lung Cancer with heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   7        8        6
Run 2   6        7        7
Run 3   7        8        6

Table 5(a): Reduction of Promoter Gene Sequences without heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   5        5        5
Run 2   5        5        5
Run 3   5        5        5

Table 5(b): Reduction of Promoter Gene Sequences with heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   5        5        5
Run 2   5        5        5
Run 3   5        5        5

Table 6(a): Reduction of SPECT Heart without heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   18       20       20
Run 2   19       17       20
Run 3   19       18       18

Table 6(b): Reduction of SPECT Heart with heuristic information.

        Cycle 1  Cycle 2  Cycle 3
Run 1   20       18       19
Run 2   18       20       20
Run 3   19       18       18

From the tables above, we find that IACO reduces attributes efficiently and that expectation heuristic information contributes to achieving better attribute reduction results. However, the cardinalities of the candidate solutions do not decrease monotonically across cycles. The main reason is that the number of ants is set equal to the cardinality of the condition attribute set. All the cardinalities of the candidate solutions are equal to five in both Table 5(a) and Table 5(b); this particular regularity results from the properties of that data set rather than from the algorithm in general.

4.3. Comparison of IACO with ACO and with PSO

Two data sets, Dermatology and Lung Cancer, selected from Table 1, are employed to compare the proposed IACO algorithm (with expectation heuristic information) with ACO and with PSO for attribute reduction. The parameters in ACO are set the same as in IACO, and the parameter values of PSO are presented in Table 7. Table 8 presents the comparative results. Each number in the third through fifth columns is the cardinality of the optimal candidate solution obtained by the corresponding approach.

Table 7: The parameter settings of PSO.

Population  Generation  c1   c2   α    β    Weight   Velocity
20          100         2.0  2.0  0.8  0.2  1.4-0.4  1-|C|/3

Table 8: Comparative results of IACO with ACO and with PSO.

Data set      Attributes  ACO  PSO  IACO
Dermatology   34          10   9    8
Lung Cancer   56          7    7    6

It can easily be observed from Table 8 that IACO is more efficient than both ACO and PSO for attribute reduction.

5. Conclusion

This paper proposes an improved approach to attribute reduction based on the ant colony optimization algorithm, named IACO.
Through experiments we arrive at the conclusion that the improved ant colony algorithm IACO is more effective, and its optimal candidate solutions are almost always reductions. Moreover, the reduction results for most data sets are superior to those based on ACO and on PSO, and the optimal candidate solutions acquired with expectation heuristic information are better than those acquired without it. In short, by changing the state transition probability formula and the pheromone trail updating formula, the improved ant colony optimization is more suitable for solving the attribute reduction problem.

6. Future Work

First, we will continue studying and improving algorithms for attribute reduction. Carrying out more experiments on a wider variety of data sets may reveal more useful information. In addition, we will study conditions that prevent the algorithm from falling into local optima. Finally, we will study how to acquire optimal candidate solutions without redundant attributes, i.e., optimal candidate solutions that are exactly reductions of the attribute sets.

Acknowledgments

This work is partly supported by the National Natural Science Foundation of China (No. 10771043), the Key Laboratory for National Defence Science and Technology of Autonomous Underwater Vehicles of Harbin Engineering University (No. 002010260730), and the Support Project for Young Scholars in General Institutions of Higher Learning of Heilongjiang Province (No. 1151J076). The preliminary version of some of this work appeared in [17]. The authors would like to thank the reviewers for their valuable suggestions for improving this paper.

References

1. Pawlak Z (1982) Rough sets. International Journal of Computer and Information Sciences, 11: 341-
2. Theodoridis S, Koutroumbas K (2006) Pattern recognition. Academic Press, New York
3. Pawlak Z (1991) Rough sets: theoretical aspects of reasoning about data. Kluwer Academic Publishers, Boston
4. Pawlak Z (1996) Rough sets and data analysis. Proceedings of the Asian Fuzzy Systems Symposium, 1-6
5. Skowron A, Pal S K (2003) Rough sets, pattern recognition, and data mining. Pattern Recognition Letters, 24: 829-933
6. Swiniarski R W, Skowron A (2003) Rough set methods in feature selection and recognition. Pattern Recognition Letters, 24: 833-849
7. Pawlak Z, Skowron A (2007) Rudiments of rough sets. Information Sciences, 177: 3-27
8. Wong S K M, Ziarko W (1985) On optional decision rules in decision tables. Bulletin of Polish Academy of Science, 33: 693-696
9. Jensen R, Shen Q (2004) Semantics-preserving dimensionality reduction: rough and fuzzy-rough-based approaches. IEEE Transactions on Knowledge and Data Engineering, 16: 1457-1471
10. Bazan J, Nguyen H S, Nguyen S H, Synak P, Wróblewski J (2000) Rough set algorithms in classification problem. In: Polkowski L, Tsumoto S, Lin T Y (eds) Rough Set Methods and Applications. Physica-Verlag, Heidelberg, New York, 49-88
11. Wróblewski J (1995) Finding minimal reducts using genetic algorithms. Proceedings of the 2nd Annual Joint Conference on Information Sciences, Wrightsville Beach, NC, 186-189
12. Ke L, Feng Z, Ren Z (2008) An efficient ant colony optimization approach to attribute reduction in rough set theory. Pattern Recognition Letters, 29: 1351-1357
13. Jensen R, Shen Q (2003) Finding rough set reducts with ant colony optimization. Proceedings of the UK Workshop on Computational Intelligence, 15-22
14. Wang X, Yang J, Teng X, Xia W, Jensen R (2007) Feature selection based on rough sets and particle swarm optimization. Pattern Recognition Letters, 28: 459-471
15. Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 26: 29-41
16. Asuncion A, Newman D J (2007) UCI Machine Learning Repository. University of California, School of Information and Computer Science, Irvine, CA [http://www.ics.uci.edu/∼mlearn/MLRepository.html]
17. Deng T Q, Yang C D, Zhang Y T, Wang X X (2009) An improved ant colony optimization applied to attribute reduction. In: Cao B Y, et al (eds) Advances in Soft Computing: Fuzzy Information and Engineering. Springer-Verlag, 54: 1-6