An Interactive Fuzzy Satisficing Method for Multiobjective Nonlinear Integer Programming Problems with Block-Angular Structures through Genetic Algorithms with Decomposition Procedures

Hindawi Publishing Corporation
Advances in Operations Research
Volume 2009, Article ID 372548, 17 pages
doi:10.1155/2009/372548

Research Article

Masatoshi Sakawa and Kosuke Kato
Department of Artificial Complex Systems Engineering, Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima 739-8527, Japan

Correspondence should be addressed to Masatoshi Sakawa, sakawa@hiroshima-u.ac.jp

Received 19 August 2008; Revised 27 March 2009; Accepted 29 May 2009

Recommended by Walter J. Gutjahr

We focus on multiobjective nonlinear integer programming problems with block-angular structures, which often arise as mathematical models of large-scale discrete systems optimization. By considering the vague nature of the decision maker's judgments, fuzzy goals of the decision maker are introduced, and the problem is interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. For deriving a satisficing solution for the decision maker, we develop an interactive fuzzy satisficing method. Exploiting the block-angular structures in solving problems, we also propose genetic algorithms with decomposition procedures. Illustrative numerical examples are provided to demonstrate the feasibility and efficiency of the proposed method.

Copyright © 2009 M. Sakawa and K. Kato. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

Genetic algorithms (GAs) [1], initiated by Holland, his colleagues, and his students at the University of Michigan in the 1970s as stochastic search techniques based on the mechanism of natural selection and natural genetics, have received a great deal of attention regarding their potential as optimization techniques for solving discrete optimization problems and other hard optimization problems. Although genetic algorithms were not widely known at first, after the publication of Goldberg's book [2] they have attracted considerable attention in a number of fields as a methodology for optimization, adaptation, and learning. Recent applications of genetic algorithms to optimization problems, especially to various kinds of single-objective discrete optimization problems and other hard optimization problems, show continuing advances [3–13].

Sakawa et al. proposed genetic algorithms with double strings (GADS) [14] for obtaining an approximate optimal solution to multiobjective multidimensional 0-1 knapsack problems. They also proposed genetic algorithms with double strings based on reference solution updating (GADSRSU) [15] for multiobjective general 0-1 programming problems involving both positive and negative coefficients. Furthermore, they proposed genetic algorithms with double strings using linear programming relaxation (GADSLPR) [16] for multiobjective multidimensional integer knapsack problems, and genetic algorithms with double strings using linear programming relaxation based on reference solution updating (GADSLPRRSU) for linear integer programming problems [17]. Observing that solution methods for specialized types of nonlinear integer programming problems have been proposed [18–23], as an approximate solution method for general nonlinear integer programming problems, Sakawa et al.
[24] proposed genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU). In general, however, actual decision-making problems formulated as mathematical programming problems involve very large numbers of variables and constraints. Most such large-scale problems in the real world have special structures that can be exploited in solving them. One familiar special structure is the block-angular structure of the constraints, and several kinds of decomposition methods for linear and nonlinear programming problems with block-angular structures have been proposed [25]. Unfortunately, however, for large-scale problems with discrete variables, it seems quite difficult to develop an efficient solution method for obtaining an exact optimal solution. For multidimensional 0-1 knapsack problems with block-angular structures, by utilizing a triple string representation and the block-angular structures that can be exploited in solving problems, Sakawa et al. [9, 26] proposed genetic algorithms with decomposition procedures (GADPs). Furthermore, by incorporating the fuzzy goals of the decision maker, they [9] also proposed an interactive fuzzy satisficing method for multiobjective multidimensional 0-1 knapsack problems with block-angular structures.

Under these circumstances, in this paper, as a typical mathematical model of large-scale multiobjective discrete systems optimization, we consider multiobjective nonlinear integer programming problems with block-angular structures. By considering the vague nature of the decision maker's judgments, fuzzy goals of the decision maker are introduced, and the problem is interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals.
For deriving a satisficing solution for the decision maker, we develop an interactive fuzzy satisficing method. Exploiting the block-angular structures, we also propose genetic algorithms with decomposition procedures for nonlinear integer programming problems with block-angular structures.

The paper is organized as follows. Section 2 formulates multiobjective nonlinear integer programming problems with block-angular structures. Section 3 develops an interactive fuzzy satisficing method for deriving a satisficing solution for the decision maker. Section 4 proposes GADPCRRSU as an approximate solution method for nonlinear integer programming problems with block-angular structures. Section 5 provides illustrative numerical examples to demonstrate the feasibility and efficiency of the proposed method. Finally, conclusions are presented in Section 6.

2. Problem Formulation

Consider multiobjective nonlinear integer programming problems with block-angular structures formulated as

$$
\begin{aligned}
\text{minimize} \quad & f_l(x) = f_l\left(x^1, \dots, x^P\right), \quad l = 1, 2, \dots, k \\
\text{subject to} \quad & g(x) = g\left(x^1, \dots, x^P\right) \le 0 \\
& h^1\left(x^1\right) \le 0 \\
& \quad \vdots \\
& h^P\left(x^P\right) \le 0 \\
& x_j^J \in \left\{0, 1, \dots, V_j^J\right\}, \quad J = 1, 2, \dots, P, \; j = 1, 2, \dots, n_J,
\end{aligned}
\tag{2.1}
$$

where $x^J$, $J = 1, 2, \dots, P$, are $n_J$-dimensional integer decision variable column vectors and $x = ((x^1)^T, \dots, (x^P)^T)^T$. The constraints $g(x) = (g_1(x), \dots, g_{m_0}(x))^T \le 0$ are called coupling constraints, with dimension $m_0$, while each of the constraints $h^J(x^J) = (h_1^J(x^J), \dots, h_{m_J}^J(x^J))^T \le 0$, $J = 1, 2, \dots, P$, is called a block constraint, with dimension $m_J$. In (2.1), it is assumed that $f_l(\cdot)$, $g_i(\cdot)$, $h_i^J(\cdot)$ are general nonlinear functions. The positive integers $V_j^J$, $J = 1, 2, \dots, P$, $j = 1, 2, \dots, n_J$, represent upper bounds for $x_j^J$. In the following, for notational convenience, the feasible region of (2.1) is denoted by $X$. As an example of nonlinear integer programming problems with block-angular structures in practical applications, Bretthauer et al.
[27] formulated health care capacity planning, resource-constrained production planning, and portfolio optimization with industry constraints.

3. An Interactive Fuzzy Satisficing Method

In order to consider the vague nature of the decision maker's judgments about each objective function in (2.1), if we introduce fuzzy goals such as "$f_l(x)$ should be substantially less than or equal to a certain value," (2.1) can be rewritten as

$$
\underset{x \in X}{\text{maximize}} \quad \left( \mu_1(f_1(x)), \dots, \mu_k(f_k(x)) \right),
\tag{3.1}
$$

where $\mu_l(\cdot)$ is the membership function that quantifies the fuzzy goal for the $l$th objective function of (2.1). To be more specific, if the decision maker feels that $f_l(x)$ should be less than or equal to at least $f_l^0$ and that $f_l(x) \le f_l^1 \,(\le f_l^0)$ is satisfactory, the shape of a typical membership function is shown in Figure 1.

Figure 1: An example of membership functions.

Since (3.1) is regarded as a fuzzy multiobjective optimization problem, a complete optimal solution that simultaneously minimizes all of the multiple objective functions does not always exist when the objective functions conflict with each other. Thus, instead of a complete optimal solution, as a natural extension of the Pareto optimality concept for ordinary multiobjective programming problems, Sakawa et al. [28, 29] introduced the concept of M-Pareto optimal solutions, which is defined in terms of membership functions instead of objective functions, where M refers to membership.

Definition 3.1 (M-Pareto optimality). A feasible solution $x^* \in X$ is said to be M-Pareto optimal to a fuzzy multiobjective optimization problem if and only if there does not exist another feasible solution $x \in X$ such that $\mu_l(f_l(x)) \ge \mu_l(f_l(x^*))$, $l = 1, 2, \dots, k$, and $\mu_j(f_j(x)) > \mu_j(f_j(x^*))$ for at least one $j \in \{1, 2, \dots, k\}$.
Introducing an aggregation function $\mu(x)$ for the $k$ membership functions in (3.1), the problem can be rewritten as

$$
\underset{x \in X}{\text{maximize}} \quad \mu(x),
\tag{3.2}
$$

where the aggregation function $\mu(\cdot)$ represents the degree of satisfaction or preference of the decision maker for the whole of the $k$ fuzzy goals. In the conventional fuzzy approaches, it has been implicitly assumed that the minimum operator is the proper representation of the decision maker's fuzzy preferences. However, it should be emphasized here that this approach is preferable only when the decision maker feels that the minimum operator is appropriate. In other words, in general decision situations, the decision maker does not always use the minimum operator when combining the fuzzy goals and/or constraints. Probably the most crucial problem with (3.2) is the identification of an appropriate aggregation function that well represents the decision maker's fuzzy preferences. If $\mu(\cdot)$ can be explicitly identified, then (3.2) reduces to a standard mathematical programming problem. However, this rarely happens, and, as an alternative, an interaction with the decision maker is necessary to find a satisficing solution to (3.1).

In order to generate candidates for a satisficing solution which are M-Pareto optimal, the decision maker is asked to specify aspiration levels of achievement for all membership functions, called reference membership levels. For reference membership levels $\bar{\mu}_l$, $l = 1, 2, \dots, k$, given by the decision maker, the corresponding M-Pareto optimal solution, which is the nearest to the requirements in the minimax sense, or better than that if the reference membership levels are attainable, is obtained by solving the following augmented minimax problem:

$$
\underset{x \in X}{\text{minimize}} \; \max_{l = 1, 2, \dots, k} \left\{ \bar{\mu}_l - \mu_l(f_l(x)) + \rho \sum_{j=1}^{k} \left( \bar{\mu}_j - \mu_j(f_j(x)) \right) \right\},
\tag{3.3}
$$

where $\rho$ is a sufficiently small positive real number.
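The scalarizing function in (3.3) is simple to evaluate once the membership values and reference levels of a candidate solution are known. The following is a minimal sketch, not taken from the paper; the function name and the default value of $\rho$ are our own illustrative choices.

```python
def augmented_minimax(mu_bar, mu_vals, rho=1e-4):
    """Evaluate the augmented minimax objective of (3.3) for fixed x.

    mu_bar:  reference membership levels specified by the decision maker.
    mu_vals: achieved membership values mu_l(f_l(x)) of the candidate x.
    rho:     small positive augmentation constant (illustrative default).
    """
    # Deviations of achieved memberships from the reference levels.
    devs = [mb - m for mb, m in zip(mu_bar, mu_vals)]
    # max-term plus the rho-weighted sum over all objectives.
    return max(devs) + rho * sum(devs)
```

Minimizing this value over $x \in X$ is the role of the genetic algorithm proposed later; the sketch only shows the objective itself for one candidate.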
We can now construct an interactive algorithm in order to derive a satisficing solution for the decision maker from among the M-Pareto optimal solution set. The procedure of the interactive fuzzy satisficing method is summarized as follows.

3.1. An Interactive Fuzzy Satisficing Method

Step 1. Calculate the individual minimum and maximum of each objective function under the given constraints by solving the following problems:

$$
\underset{x \in X}{\text{minimize}} \; f_l(x), \qquad \underset{x \in X}{\text{maximize}} \; f_l(x), \quad l = 1, 2, \dots, k.
\tag{3.4}
$$

Step 2. By considering the individual minimum and maximum of each objective function, the decision maker subjectively specifies membership functions $\mu_l(f_l(x))$, $l = 1, 2, \dots, k$, to quantify the fuzzy goals for the objective functions.

Step 3. The decision maker sets initial reference membership levels $\bar{\mu}_l$, $l = 1, 2, \dots, k$.

Step 4. For the current reference membership levels, solve the augmented minimax problem (3.3) to obtain the M-Pareto optimal solution and the membership function values.

Step 5. If the decision maker is satisfied with the current levels of the M-Pareto optimal solution, stop; the current M-Pareto optimal solution is the satisficing solution for the decision maker. Otherwise, ask the decision maker to update the current reference membership levels $\bar{\mu}_l$, $l = 1, 2, \dots, k$, by considering the current values of the membership functions, and return to Step 4.

In the interactive fuzzy satisficing method, it is required to solve the nonlinear integer programming problems with block-angular structures (3.3) together with (3.4). It is significant to note that these problems are single-objective integer programming problems with block-angular structures. Recognizing this difficulty, in the next section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU).
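The interactive procedure of Section 3.1 can be sketched as a simple loop. This is an illustrative skeleton, not the authors' implementation: `solve_augmented_minimax` stands in for whatever solver handles (3.3) (GADPCRRSU in the paper), and `ask_decision_maker` abstracts the dialogue with the decision maker; both names are hypothetical.

```python
def interactive_fuzzy_satisficing(solve_augmented_minimax,
                                  ask_decision_maker, initial_levels):
    """Steps 3-5 of the interactive method; Steps 1-2 (objective ranges
    and membership elicitation) are assumed to be done beforehand."""
    mu_bar = list(initial_levels)                        # Step 3
    while True:
        # Step 4: solve the augmented minimax problem (3.3).
        x, mu_vals = solve_augmented_minimax(mu_bar)
        # Step 5: either accept, or obtain updated reference levels.
        satisfied, mu_bar = ask_decision_maker(mu_vals, mu_bar)
        if satisfied:
            return x, mu_vals                            # satisficing solution
```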
4. Genetic Algorithms with Decomposition Procedures

As discussed above, in this section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU) as an approximate solution method for nonlinear integer programming problems with block-angular structures. Consider single-objective nonlinear integer programming problems with block-angular structures formulated as

$$
\begin{aligned}
\text{minimize} \quad & f(x) = f\left(x^1, \dots, x^P\right) \\
\text{subject to} \quad & g(x) = g\left(x^1, \dots, x^P\right) \le 0 \\
& h^1\left(x^1\right) \le 0 \\
& \quad \vdots \\
& h^P\left(x^P\right) \le 0 \\
& x_j^J \in \left\{0, 1, \dots, V_j^J\right\}, \quad J = 1, 2, \dots, P, \; j = 1, 2, \dots, n_J.
\end{aligned}
\tag{4.1}
$$

Observe that this problem can be viewed as a single-objective version of the original problem (2.1). Sakawa et al. [24] have already studied genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) for ordinary nonlinear integer programming problems formulated as

$$
\begin{aligned}
\text{minimize} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \le 0, \quad i = 1, 2, \dots, m, \\
& x_j \in \{0, 1, \dots, V_j\}, \quad j = 1, 2, \dots, n,
\end{aligned}
\tag{4.2}
$$

where an individual is represented by a double string. In a double string, as shown in Figure 2, for a certain $j$, $\nu(j) \in \{1, 2, \dots, n\}$ represents an index of a variable in the solution space, while $y_{\nu(j)}$, $j = 1, 2, \dots, n$, is the value, among $\{0, 1, \dots, V_{\nu(j)}\}$, of the $\nu(j)$th variable $x_{\nu(j)}$.

Figure 2: Double string.

In view of the block-angular structure of (4.1), it seems quite reasonable to define an individual $S$ as an aggregation of $P$ subindividuals $s^J$, $J = 1, 2, \dots, P$, corresponding to the block constraints $h^J(x^J) \le 0$, as shown in Figure 3. If these subindividuals are represented by double strings, for each subindividual $s^J$, $J = 1, 2, \dots, P$, a phenotype subsolution satisfying the corresponding block constraint can be obtained by the decoding algorithm in GADSCRRSU. Unfortunately, however, the simple combination of these subsolutions does not always satisfy the coupling constraints $g(x) \le 0$.
To cope with this problem, a triple string representation, as shown in Figure 4, and the corresponding decoding algorithm are presented as an extension of the double string representation and its decoding algorithm. By using the proposed representation and decoding algorithm, a phenotype solution satisfying both the block constraints and the coupling constraints can be obtained for each individual $S = (s^1, s^2, \dots, s^P)$.

Figure 3: Division of an individual into P subindividuals.

Figure 4: Triple string.

To be more specific, in a triple string which represents a subindividual corresponding to the $J$th block, $r^J \in \{1, 2, \dots, P\}$ represents the priority of the $J$th block, each $\nu^J(j) \in \{1, 2, \dots, n_J\}$ is an index of a variable in the phenotype, and each $y_{\nu^J(j)}$ takes an integer value among $\{0, 1, \dots, V_{\nu^J(j)}\}$. As in GADSCRRSU, a feasible solution, called a reference solution, is necessary for the decoding of triple strings. In our proposed GADPCRRSU, the reference solution is obtained as a solution $x^*$ to a minimization problem of constraint violation. In the following, we summarize the decoding algorithm for triple strings using a reference solution $x^*$, where $N$ is the number of individuals and $I$ is a counter for the individual number.

4.1. Decoding Algorithm for Triple Strings

Step 1. Let $I := 1$.

Step 2. If $1 \le I \le N/2$, go to Step 3. Otherwise, go to Step 11.

Step 3. Let $x := 0$, $r := 1$, $L := 0$.

Step 4. Find $J \in \{1, 2, \dots, P\}$ such that $r^J = r$. Let $j := 1$, $l := 0$.

Step 5. Let $x_{\nu^J(j)} := y_{\nu^J(j)}$.

Step 6. If $g(x) \le 0$ and $h^J(x^J) \le 0$, let $L := r$, $l := j$, $j := j + 1$, and go to Step 7. Otherwise, let $j := j + 1$ and go to Step 7.

Step 7. If $j > n_J$, let $r := r + 1$ and go to Step 8. Otherwise, go to Step 5.

Step 8. If $r > P$, go to Step 9. Otherwise, go to Step 4.
Step 9. If $L = 0$ and $l = 0$, go to Step 11. Otherwise, go to Step 10.

Step 10. Find $J(r)$ such that $r^{J(r)} = r$ for $r = 1, \dots, L - 1$, and let $x_{\nu^{J(r)}(j)} := y_{\nu^{J(r)}(j)}$, $j = 1, 2, \dots, n_{J(r)}$. Furthermore, find $J(L)$ such that $r^{J(L)} = L$ and let $x_{\nu^{J(L)}(j)} := y_{\nu^{J(L)}(j)}$, $j = 1, 2, \dots, l$. The remaining elements of $x$ are set to 0. Terminate the decoding process.

Step 11. Let $x := x^*$, $r := 1$, and go to Step 12.

Step 12. Find $J \in \{1, 2, \dots, P\}$ such that $r^J = r$ and let $j := 1$.

Step 13. Let $x_{\nu^J(j)} := y_{\nu^J(j)}$. If $y_{\nu^J(j)} = x^*_{\nu^J(j)}$, let $j := j + 1$ and go to Step 15. If $y_{\nu^J(j)} \neq x^*_{\nu^J(j)}$, go to Step 14.

Step 14. If $g(x) \le 0$ and $h^J(x^J) \le 0$, let $j := j + 1$ and go to Step 15. Otherwise, let $x_{\nu^J(j)} := x^*_{\nu^J(j)}$, $j := j + 1$, and go to Step 15.

Step 15. If $j \le n_J$, go to Step 13. Otherwise, let $r := r + 1$ and go to Step 16.

Step 16. If $r \le P$, go to Step 12. Otherwise, let $I := I + 1$ and go to Step 17.

Step 17. If $I \le N$, go to Step 2. Otherwise, terminate the decoding process.

It is expected that an optimal solution to the continuous relaxation problem is a good approximation of an optimal solution of the original nonlinear integer programming problem. In the proposed method, after obtaining an approximate optimal solution $\bar{x}_j^J$, $J = 1, 2, \dots, P$, $j = 1, 2, \dots, n_J$, to the continuous relaxation problem, we suppose that each decision variable $x_j^J$ takes exactly or approximately the same value as $\bar{x}_j^J$ does. In particular, decision variables $x_j^J$ such that $\bar{x}_j^J = 0$ are very likely to be equal to 0. To be more specific, the information from the approximate optimal solution $\bar{x}$ to the continuous relaxation problem of (4.1) is used when generating the initial population and performing mutation. In order to generate the initial population, when we determine the value of each $y_{\nu^J(j)}$ in the lowest row of a triple string, we use a Gaussian random variable with mean $\bar{x}_{\nu^J(j)}$ and variance $\sigma^2$.
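This relaxation-guided sampling can be sketched as follows. The sketch reflects our own reading of the scheme: each gene is drawn from a Gaussian centered at the continuous-relaxation value, rounded, and clipped into $\{0, \dots, V\}$. The clipping rule is our assumption, and the `sigma` parameter here is a standard deviation rather than the variance $\sigma^2$ named in the text.

```python
import random

def sample_gene(x_bar, sigma, v_max, rng=random):
    """Draw one integer gene biased toward the relaxed value x_bar.

    x_bar: value of this variable in the continuous-relaxation optimum.
    sigma: standard deviation of the Gaussian (assumption: sqrt of the
           paper's variance parameter).
    v_max: upper bound V of the variable's integer range.
    """
    value = int(round(rng.gauss(x_bar, sigma)))  # Gaussian around x_bar
    return max(0, min(v_max, value))             # clip into {0, ..., v_max}
```

The same routine would serve both initial-population generation (with one spread) and mutation (with another), matching the two variance parameters in the text.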
In mutation, when we change the value of $y_{\nu^J(j)}$ for some $J$, $j$, we also use a Gaussian random variable with mean $\bar{x}_{\nu^J(j)}$ and variance $\tau^2$.

Various kinds of reproduction methods have been proposed. Among them, Sakawa et al. [14] investigated the performance of six reproduction operators (ranking selection, elitist ranking selection, expected value selection, elitist expected value selection, roulette wheel selection, and elitist roulette wheel selection) and confirmed that elitist expected value selection is relatively efficient for multiobjective 0-1 programming problems incorporating the fuzzy goals of the decision maker. Thereby, elitist expected value selection, which combines elitism with expected value selection, is adopted. Elitism and expected value selection are summarized as follows.

Elitism. If the fitness of an individual in past populations is larger than that of every individual in the current population, preserve this individual in the current generation.

Expected Value Selection. For a population consisting of $N$ individuals, the expected number of copies of each $s_n^J$, $J = 1, 2, \dots, P$ (each subindividual of the $n$th individual $S_n$), in the next population is given by

$$
N_n = \frac{f(S_n)}{\sum_{n=1}^{N} f(S_n)} \times N.
\tag{4.3}
$$

The integral part $\lfloor N_n \rfloor$ of $N_n$ gives the definite number of copies of $s_n^J$ preserved in the next population, while, using the decimal part $N_n - \lfloor N_n \rfloor$, the probability of preserving a further copy of $s_n^J$, $J = 1, 2, \dots, P$, in the next population is determined by

$$
\frac{N_n - \lfloor N_n \rfloor}{\sum_{n=1}^{N} \left( N_n - \lfloor N_n \rfloor \right)}.
\tag{4.4}
$$

If a single-point or multipoint crossover is directly applied to the upper or middle string of individuals of triple string type, the $k$th element of the string of an offspring may take the same number as the $k'$th element. The same violation occurs in solving traveling salesman problems or scheduling problems through genetic algorithms.
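Expected value selection, as given by (4.3) and (4.4), can be sketched as a stochastic-remainder scheme: the integer parts of the expected counts are kept deterministically, and the remaining slots are filled with probabilities proportional to the fractional parts. This is an illustrative reading, not the authors' code.

```python
import random

def expected_value_selection(fitnesses, rng=random):
    """Return the number of copies of each individual in the next population.

    fitnesses: positive fitness values f(S_n) of the N individuals.
    """
    total = sum(fitnesses)
    n = len(fitnesses)
    expected = [f / total * n for f in fitnesses]        # N_n as in (4.3)
    counts = [int(e) for e in expected]                  # deterministic part
    fractions = [e - c for e, c in zip(expected, counts)]
    # Fill the remaining slots with probability proportional to the
    # fractional parts, as in (4.4); each individual gets at most one extra.
    while sum(counts) < n:
        i = rng.choices(range(n), weights=fractions)[0]
        counts[i] += 1
        fractions[i] = 0.0
    return counts
```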
In order to avoid such duplicated elements, a crossover method called partially matched crossover (PMX) is modified to be suitable for triple strings. PMX is applied as usual to upper strings, whereas, for a pair consisting of a middle string and a lower string, PMX for double strings [14] is applied to every subindividual. It is now appropriate to present the detailed procedures of the crossover methods for triple strings.

4.2. Partially Matched Crossover (PMX) for Upper Strings

Let

$$
X = \left( r_X^1, r_X^2, \dots, r_X^P \right)
\tag{4.5}
$$

be the upper string of an individual and let

$$
Y = \left( r_Y^1, r_Y^2, \dots, r_Y^P \right)
\tag{4.6}
$$

be the upper string of another individual. Prepare copies $X'$ and $Y'$ of $X$ and $Y$, respectively.

Step 1. Choose two crossover points on these strings at random, say $h$ and $k$ ($h < k$).

Step 2. Set $i := h$ and repeat the following procedures.

(a) Find $J$ such that $r_{X'}^J = r_Y^i$. Then, interchange $r_{X'}^i$ with $r_{X'}^J$ and set $i := i + 1$.

(b) If $i > k$, stop and let $X'$ be the offspring of $X$. Otherwise, return to (a).

Step 2 is carried out for $Y'$ in the same manner, as shown in Figure 5.

4.3. Partially Matched Crossover (PMX) for Double Strings

Let

$$
X = \begin{pmatrix} \nu_X(1), & \nu_X(2), & \dots, & \nu_X(n_J) \\ y_{\nu_X(1)}, & y_{\nu_X(2)}, & \dots, & y_{\nu_X(n_J)} \end{pmatrix}
\tag{4.7}
$$

be the middle and lower parts of a subindividual in the $J$th subpopulation, and let

$$
Y = \begin{pmatrix} \nu_Y(1), & \nu_Y(2), & \dots, & \nu_Y(n_J) \\ y_{\nu_Y(1)}, & y_{\nu_Y(2)}, & \dots, & y_{\nu_Y(n_J)} \end{pmatrix}
\tag{4.8}
$$

be the middle and lower parts of another subindividual in the $J$th subpopulation. First, prepare copies $X'$ and $Y'$ of $X$ and $Y$, respectively.

Step 1. Choose two crossover points on these strings at random, say $h$ and $k$ ($h < k$).

Step 2. Set $i := h$ and repeat the following procedures.

(a) Find $i'$ such that $\nu_{X'}(i') = \nu_Y(i)$. Then, interchange $(\nu_{X'}(i), y_{\nu_{X'}(i)})^T$ with $(\nu_{X'}(i'), y_{\nu_{X'}(i')})^T$ and set $i := i + 1$.

(b) If $i > k$, stop. Otherwise, return to (a).

Step 3. Replace the part from $h$ to $k$ of $X'$ with that of $Y$ and let $X'$ be the offspring of $X$.

This procedure is carried out for $Y'$ and $X$ in the same manner, as shown in Figure 6.
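The swap-based PMX for upper strings can be sketched as follows for a permutation of block priorities. After the loop, positions $h$ through $k$ of the offspring match the other parent, while permutation validity is preserved. This is a sketch under our reading of the two steps above; names are illustrative.

```python
def pmx_upper(parent_x, parent_y, h, k):
    """PMX-style crossover for one offspring of two priority permutations.

    h, k: crossover points with h < k (0-based, inclusive).
    """
    child = list(parent_x)                       # work on a copy of X
    for i in range(h, k + 1):
        # Find where Y's i-th value currently sits in the child ...
        j = child.index(parent_y[i])
        # ... and swap it into position i, keeping the child a permutation.
        child[i], child[j] = child[j], child[i]
    return child
```

The offspring of $Y$ is produced by calling the same routine with the parents exchanged.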
It is considered that mutation plays the role of local random search in genetic algorithms. Only for the lower string of a triple string, mutation of bit-reverse type is adopted and applied to every subindividual. For the upper string and for the middle and lower strings of the triple string, inversion, defined by the following algorithm, is adopted.

Step 1. After determining two inversion points $h$ and $k$ ($h < k$), pick out the part of the string from $h$ to $k$.

Step 2. Arrange the substring in reverse order.

Step 3. Put the rearranged substring back into the string.

Figure 7 illustrates examples of mutation. Now we are ready to introduce the genetic algorithm with decomposition procedures as an approximate solution method for nonlinear integer programming problems with block-angular structures. The outline of the procedures is shown in Figure 8.

Figure 5: An example of PMX for upper string.

4.4. Computational Procedures

Step 1. Set the iteration (generation) index $t := 0$ and determine the parameter values for the population size $N$, the probability of crossover $p_c$, the probability of mutation $p_m$, the probability of inversion $p_i$, the variances $\sigma^2$ and $\tau^2$, the minimal search generation $I_{\min}$, and the maximal search generation $I_{\max}$.

Step 2. Generate $N$ individuals whose subindividuals are of triple string type at random.

Step 3. Evaluate each individual (subindividual) on the basis of the phenotype obtained by the decoding algorithm and calculate the mean fitness $f_{\text{mean}}$ and the maximal fitness $f_{\max}$ of the population. If $t > I_{\min}$ and $(f_{\max} - f_{\text{mean}})/f_{\max} < \varepsilon$, or if $t > I_{\max}$, regard an individual with the maximal fitness as an optimal individual and terminate the program.
Otherwise, set $t := t + 1$ and proceed to Step 4.

Step 4. Apply the reproduction operator to all subpopulations $\{s_n^J \mid n = 1, 2, \dots, N\}$, $J = 1, 2, \dots, P$.

Step 5. Apply the PMX for double strings to the middle and lower parts of every subindividual according to the probability of crossover $p_c$.

Step 6. Apply the mutation operator of bit-reverse type to the lower part of every subindividual according to the probability of mutation $p_m$, and apply the inversion operator to the middle and lower parts of every subindividual according to the probability of inversion $p_i$.

Figure 6: An example of PMX for double string.

Figure 7: Examples of mutation.

Step 7. Apply the PMX for upper strings according to $p_c$.

Step 8. Apply the inversion operator to upper strings according to $p_i$ and return to Step 3.

It should be noted that, in this algorithm, the operations in Steps 4, 5, and 6 can be applied to every subindividual of all individuals independently. As a result, it is theoretically possible to reduce the amount of working memory needed to solve the problem and to carry out parallel processing.
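The computational procedure above can be sketched as a generic generational loop. All operators are passed in as placeholders for the reproduction, PMX, mutation, and inversion operators described earlier; this is an illustrative skeleton with hypothetical names, not the authors' implementation.

```python
def gadp_loop(init_population, evaluate, reproduce, crossover, mutate,
              i_min, i_max, eps):
    """Generational loop following Steps 1-8 of Section 4.4.

    Termination uses the mean/max fitness criterion of Step 3.
    """
    population = init_population()                       # Steps 1-2
    t = 0
    while True:
        fitnesses = [evaluate(ind) for ind in population]  # Step 3: decode + evaluate
        f_max = max(fitnesses)
        f_mean = sum(fitnesses) / len(fitnesses)
        if t > i_max or (t > i_min and (f_max - f_mean) / f_max < eps):
            return population[fitnesses.index(f_max)]    # best individual found
        t += 1
        population = reproduce(population, fitnesses)    # Step 4
        population = crossover(population)               # Steps 5 and 7
        population = mutate(population)                  # Steps 6 and 8
```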
Figure 8: The outline of procedures.

Table 1: The whole process of interaction.

Interaction                       1st         2nd         3rd
$\bar{\mu}_1$                     1           1           1
$\bar{\mu}_2$                     1           0.900       0.900
$\bar{\mu}_3$                     1           1           0.900
$\mu_1(f_1(x))$                   0.496       0.552       0.554
$\mu_2(f_2(x))$                   0.497       0.450       0.474
$\mu_3(f_3(x))$                   0.491       0.558       0.524
$f_1(x)$                          1500050     1335423     1326906
$f_2(x)$                          -1629427    -1475077    -1553513
$f_3(x)$                          158226      86012       123127
Computation time (sec):
  GADPCRRSU (proposed method)     26.7        32.8        24.2
  GADSCRRSU (no decomposition)    539.6       584.7       503.3

Figure 9: The comparison of computation time (seconds) of GADSCRRSU and GADPCRRSU against the number of decision variables.

5. Numerical Examples

In order to demonstrate the feasibility and efficiency of the proposed method, consider the following multiobjective quadratic integer programming problem with block-angular structures:

$$
\begin{aligned}
\text{minimize} \quad & f_l(x) = \sum_{J=1}^{P} \left( c_l^J x^J + \left(x^J\right)^T C_l^J x^J \right), \quad l = 1, 2, \dots, k, \\
\text{subject to} \quad & g_i(x) = \sum_{J=1}^{P} \left( -a_i^J x^J + \left(x^J\right)^T A_i^J x^J \right) + b_i^0 \le 0, \quad i = 1, 2, \dots, m_0, \\
& h_i^J\left(x^J\right) = -d_i^J x^J + \left(x^J\right)^T D_i^J x^J + b_i^J \le 0, \quad J = 1, 2, \dots, P, \; i = 1, 2, \dots, m_J, \\
& x_j^J \in \left\{0, 1, \dots, V_j^J\right\}, \quad J = 1, 2, \dots, P, \; j = 1, 2, \dots, n_J.
\end{aligned}
\tag{5.1}
$$

For comparison, genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) [24] are also adopted. It is significant to note here that decomposition procedures are not involved in GADSCRRSU. For this problem, we set $k = 3$, $P = 5$, $n_1 = n_2 = \dots = n_5 = 10$, $m_0 = 2$, $m_1 = m_2 = \dots = m_5 = 5$, and $V_j^J = 30$, $J = 1, 2, \dots, 5$, $j = 1, 2, \dots, 10$.
Elements of $c_l^J$, $C_l^J$, $a_i^J$, $A_i^J$, $d_i^J$, and $D_i^J$ in the objectives and constraints of the above problem are determined by uniform random numbers on $[-100, 100]$, and those of $b$ in the constraints are determined so that the feasible region is not empty. Numerical experiments are performed on a personal computer (CPU: Intel Celeron processor, 900 MHz; memory: 256 MB; C compiler: Microsoft Visual C++ 6.0). Parameter values of GADPCRRSU are set as follows: population size $N = 100$, crossover rate $p_c = 0.9$, mutation rate $p_m = 0.05$, inversion rate $p_i = 0.05$, variances $\sigma^2 = 2.0$ and $\tau^2 = 3.0$, minimal search generation number $I_{\min} = 500$, and maximal search generation number $I_{\max} = 1000$.

In this numerical example, for the sake of simplicity, the linear membership function

$$
\mu_l(f_l(x)) =
\begin{cases}
1, & f_l(x) < f_{l,1}, \\[4pt]
\dfrac{f_l(x) - f_{l,0}}{f_{l,1} - f_{l,0}}, & f_{l,1} \le f_l(x) \le f_{l,0}, \\[4pt]
0, & f_l(x) > f_{l,0}
\end{cases}
\tag{5.2}
$$

is adopted, and the parameter values are determined as

$$
\begin{aligned}
f_{l,1} &= f_{l,\min} = f_l\left(x_{l,\min}\right) = \min_{x \in X} f_l(x), \quad l = 1, 2, \dots, k, \\
f_{l,0} &= \max\left\{ f_l\left(x_{1,\min}\right), \dots, f_l\left(x_{l-1,\min}\right), f_l\left(x_{l+1,\min}\right), \dots, f_l\left(x_{k,\min}\right) \right\}, \quad l = 1, 2, \dots, k.
\end{aligned}
\tag{5.3}
$$

For the initial reference membership levels $(1, 1, 1)$, the augmented minimax problem (3.3) is solved. The obtained solutions are shown in the second column of Table 1. Assume that the hypothetical decision maker is not satisfied with the current solution and feels that $\mu_1(f_1(x))$ and $\mu_3(f_3(x))$ should be improved at the expense of $\mu_2(f_2(x))$. The decision maker then updates the reference membership levels to $(1, 0.900, 1)$. The result for the updated reference membership levels is shown in the third column of Table 1. Since the decision maker is still not satisfied, he updates the reference membership levels to $(1, 0.900, 0.900)$ to obtain a better value of $\mu_1(f_1(x))$. A similar procedure continues in this way and, in this example, a satisficing solution for the decision maker is derived at the third interaction.
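The linear membership function (5.2) translates directly into code; this is a minimal sketch in which `f1` denotes $f_{l,1}$ (the fully satisfactory level) and `f0` denotes $f_{l,0}$ (the unacceptable level), with $f_{l,1} < f_{l,0}$ for a minimized objective.

```python
def linear_membership(f_val, f1, f0):
    """Linear membership (5.2): 1 below f1, 0 above f0, linear in between.

    f1: fully satisfactory objective level f_{l,1} (f1 < f0).
    f0: unacceptable objective level f_{l,0}.
    """
    if f_val < f1:
        return 1.0
    if f_val > f0:
        return 0.0
    # Interpolate: equals 1 at f_val = f1 and 0 at f_val = f0.
    return (f_val - f0) / (f1 - f0)
```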
Table 1 shows that the proposed interactive method using GADPCRRSU with decomposition procedures finds an approximate optimal solution at each interaction in a shorter time than the method using GADSCRRSU without decomposition procedures. Furthermore, in order to see how the computation time changes with the size of block-angular nonlinear integer programming problems, typical problems with 10, 20, 30, 40, and 50 variables are solved by GADPCRRSU and GADSCRRSU. As depicted in Figure 9, the computation time of the proposed GADPCRRSU increases almost linearly with the size of the problem, while that of GADSCRRSU increases rapidly and nonlinearly.

6. Conclusions

In this paper, as a typical mathematical model of large-scale discrete systems optimization, we considered multiobjective nonlinear integer programming problems with block-angular structures. Taking into account the vagueness of the decision maker's judgments, fuzzy goals of the decision maker were introduced, and the problem was interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. An interactive fuzzy satisficing method was developed for deriving a satisficing solution for the decision maker. Exploiting the block-angular structures, we also proposed genetic algorithms with decomposition procedures for solving nonlinear integer programming problems with block-angular structures. Illustrative numerical examples were provided to demonstrate the feasibility and efficiency of the proposed method. Extensions to multiobjective two-level integer programming problems with block-angular structures will be considered elsewhere. Extensions to stochastic multiobjective two-level integer programming problems with block-angular structures will also be required in the near future.

References

[1] J. H.
Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, University of Michigan Press, Ann Arbor, Mich, USA, 1975. 2 D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison Wesley, Reading, Mass, USA, 1989. 3 Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Artificial Intelligence, Springer, Berlin, Germany, 1992. 4 Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 2nd edition, 1994. 5 Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 3rd edition, 1996. 6 T. Back, ¨ Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, The Clarendon Press, Oxford University Press, New York, NY, USA, 1996. 7 T. Back, ¨ D. B. Fogel, and Z. Michalewicz, Handbook of Evolutionary Computation, Institute of Physics, Bristol, UK, 1997. 8 K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley-Interscience Series in Systems and Optimization, John Wiley & Sons, Chichester, UK, 2001. 9 M. Sakawa, Large Scale Interactive Fuzzy Multiobjective Programming, Physica, Heidelberg, Germany, 10 M. Sakawa, Genetic Algorithms and Fuzzy Multiobjective Optimization, vol. 14 of Operations Research/Computer Science Interfaces Series, Kluwer Academic Publishers, Boston, Mass, USA, 2002. 11 C. A. C. Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi- Objective Problems, Kluwer Academic Publishers, New York, NY, USA, 2002. 12 A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing, Natural Computing Series, Springer, Berlin, Germany, 2003. 13 Z. Hua and F. Huang, “A variable-grouping based genetic algorithm for large-scale integer programming,” Information Sciences, vol. 176, no. 19, pp. 2869–2885, 2006. 14 M. Sakawa, K. Kato, H. 
Sunada, and T. Shibano, “Fuzzy programming for multiobjective 0-1 programming problems through revised genetic algorithms,” European Journal of Operational Research, vol. 97, pp. 149–158, 1997. 15 M. Sakawa, K. Kato, S. Ushiro, and K. Ooura, “Fuzzy programming for general multiobjective 0- 1 programming problems through genetic algorithms with double strings,” in Proceedings of IEEE International Fuzzy Systems Conference, vol. 3, pp. 1522–1527, 1999. 16 M. Sakawa, K. Kato, T. Shibano, and K. Hirose, “Fuzzy multiobjective integer programs through genetic algorithms using double string representation and information about solutions of continuous relaxation problems,” in Proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 967–972, 1999. 17 M. Sakawa and K. Kato, “Integer programming through genetic algorithms with double strings based on reference solution updating,” in Proceedings of IEEE International Conference on Industrial Electronics, Control and Instrumentation, pp. 2915–2920, 2000. 18 P. Hansen, “Quadratic zero-one programming by implicit enumeration,” in Numerical Methods for Non-Linear Optimization (Conf., Univ. Dundee, Dundee, 1971), F. A. Lootsma, Ed., pp. 265–278, Academic Press, London, UK, 1972. 19 J. Li, “A bound heuristic algorithm for solving reliability redundancy optimization,” Microelectronics and Reliability, vol. 36, pp. 335–339, 1996. 20 D. Li, J. Wang, and X. L. Sun, “Computing exact solution to nonlinear integer programming: convergent Lagrangian and objective level cut method,” Journal of Global Optimization, vol. 39, no. 1, pp. 127–154, 2007. Advances in Operations Research 17 21 R. H. Nickel, I. Mikolic-Torreira, and J. W. Tolle, “Computing aviation sparing policies: solving a large nonlinear integer program,” Computational Optimization and Applications, vol. 35, no. 1, pp. 109–126, 22 M. S. Sabbagh, “A partial enumeration algorithm for pure nonlinear integer programming,” Applied Mathematical Modelling, vol. 
32, no. 12, pp. 2560–2569, 2008. 23 W. Zhu and H. Fan, “A discrete dynamic convexized method for nonlinear integer programming,” Journal of Computational and Applied Mathematics, vol. 223, no. 1, pp. 356–373, 2009. 24 M. Sakawa, K. Kato, M. A. K. Azad, and R. Watanabe, “A genetic algorithm with double string for nonlinear integer programming problems,” in Proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 3281–3286, 2005. 25 L. S. Lasdon, Optimization Theory for Large Systems, The Macmillian, New York, NY, USA, 1970. 26 K. Kato and M. Sakawa, “Genetic algorithms with decomposition procedures for multidimensional 0-1 knapsack problems with block angular structures,” IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 33, pp. 410–419, 2003. 27 K. M. Bretthauer, B. Shetty, and S. Syam, “A specially structured nonlinear integer resource allocation problem,” Naval Research Logistics, vol. 50, no. 7, pp. 770–792, 2003. 28 M. Sakawa, H. Yano, and T. Yumine, “An interactive fuzzy satisficing method for multiobjective linear-programming problems and its application,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 17, no. 4, pp. 654–661, 1987. 29 M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization, Applied Information Technology, Plenum Press, New York, NY, USA, 1993. 30 H.-J. Zimmermann, “Fuzzy programming and linear programming with several objective functions,” Fuzzy Sets and Systems, vol. 1, no. 1, pp. 45–55, 1978. 
Publisher: Hindawi Publishing Corporation
Copyright: © 2009 Masatoshi Sakawa and Kosuke Kato. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
ISSN: 1687-9147
eISSN: 1687-9155
DOI: 10.1155/2009/372548
Introduction

Genetic algorithms (GAs) [1], initiated by Holland, his colleagues, and his students at the University of Michigan in the 1970s as stochastic search techniques based on the mechanism of natural selection and natural genetics, have received a great deal of attention for their potential as optimization techniques for discrete optimization problems and other hard optimization problems. Although genetic algorithms were not widely known at the beginning, after the publication of Goldberg's book [2] they attracted considerable attention in a number of fields as a methodology for optimization, adaptation, and learning. Recent applications of genetic algorithms to optimization problems, especially to various kinds of single-objective discrete optimization problems and other hard optimization problems, show continuing advances [3–13]. Sakawa et al. proposed genetic algorithms with double strings (GADS) [14] for obtaining an approximate optimal solution to multiobjective multidimensional 0-1 knapsack problems. They also proposed genetic algorithms with double strings based on reference solution updating (GADSRSU) [15] for multiobjective general 0-1 programming problems involving both positive and negative coefficients. Furthermore, they proposed genetic algorithms with double strings using linear programming relaxation (GADSLPR) [16] for multiobjective multidimensional integer knapsack problems, and genetic algorithms with double strings using linear programming relaxation based on reference solution updating (GADSLPRRSU) for linear integer programming problems [17]. Observing that solution methods for specialized types of nonlinear integer programming problems have been proposed [18–23], as an approximate solution method for general nonlinear integer programming problems, Sakawa et al.
[24] proposed genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU). In general, however, actual decision making problems formulated as mathematical programming problems involve very large numbers of variables and constraints. Most such large-scale problems in the real world have special structures that can be exploited in solving them. One familiar special structure is the block-angular structure of the constraints, and several kinds of decomposition methods for linear and nonlinear programming problems with block-angular structures have been proposed [25]. Unfortunately, however, for large-scale problems with discrete variables, it seems quite difficult to develop an efficient solution method for obtaining an exact optimal solution. For multidimensional 0-1 knapsack problems with block-angular structures, by exploiting the block-angular structures and using a triple string representation, Sakawa et al. [9, 26] proposed genetic algorithms with decomposition procedures (GADPs). Furthermore, by incorporating the fuzzy goals of the decision maker, they [9] also proposed an interactive fuzzy satisficing method for multiobjective multidimensional 0-1 knapsack problems with block-angular structures. Under these circumstances, in this paper, as a typical mathematical model of large-scale multiobjective discrete systems optimization, we consider multiobjective nonlinear integer programming problems with block-angular structures. By considering the vague nature of the decision maker's judgments, fuzzy goals of the decision maker are introduced, and the problem is interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals.
For deriving a satisficing solution for the decision maker, we develop an interactive fuzzy satisficing method. Realizing that the block-angular structures can be exploited in solving problems, we also propose genetic algorithms with decomposition procedures for nonlinear integer programming problems with block-angular structures.

The paper is organized as follows. Section 2 formulates multiobjective nonlinear integer programming problems with block-angular structures. Section 3 develops an interactive fuzzy satisficing method for deriving a satisficing solution for the decision maker. Section 4 proposes GADPCRRSU as an approximate solution method for nonlinear integer programming problems with block-angular structures. Section 5 provides illustrative numerical examples to demonstrate the feasibility and efficiency of the proposed method. Finally, Section 6 gives conclusions.

2. Problem Formulation

Consider multiobjective nonlinear integer programming problems with block-angular structures formulated as

$$
\begin{aligned}
\text{minimize} \quad & f_l(x) = f_l\bigl(x^1, \ldots, x^P\bigr), \quad l = 1, 2, \ldots, k\\
\text{subject to} \quad & g(x) = g\bigl(x^1, \ldots, x^P\bigr) \le 0\\
& h^1\bigl(x^1\bigr) \le 0\\
& \quad\vdots\\
& h^P\bigl(x^P\bigr) \le 0\\
& x_j^J \in \bigl\{0, 1, \ldots, V_j^J\bigr\}, \quad J = 1, 2, \ldots, P, \; j = 1, 2, \ldots, n_J,
\end{aligned}
\tag{2.1}
$$

where $x^J$, $J = 1, 2, \ldots, P$, are $n_J$-dimensional integer decision variable column vectors and $x = ((x^1)^T, \ldots, (x^P)^T)^T$. The constraints $g(x) = (g_1(x), \ldots, g_{m_0}(x))^T \le 0$ are called coupling constraints, of dimension $m_0$, while each of the constraints $h^J(x^J) = (h_1^J(x^J), \ldots, h_{m_J}^J(x^J))^T \le 0$, $J = 1, 2, \ldots, P$, is called a block constraint, of dimension $m_J$. In (2.1), it is assumed that $f_l(\cdot)$, $g_i(\cdot)$, and $h_i^J(\cdot)$ are general nonlinear functions. The positive integers $V_j^J$, $J = 1, 2, \ldots, P$, $j = 1, 2, \ldots, n_J$, represent upper bounds for $x_j^J$. In the following, for notational convenience, the feasible region of (2.1) is denoted by $X$. As an example of nonlinear integer programming problems with block-angular structures in practical applications, Bretthauer et al.
[27] formulated health care capacity planning, resource constrained production planning, and portfolio optimization with industry constraints.

3. An Interactive Fuzzy Satisficing Method

In order to consider the vague nature of the decision maker's judgments for each objective function in (2.1), we introduce fuzzy goals such as "$f_l(x)$ should be substantially less than or equal to a certain value," and (2.1) can be rewritten as

$$
\underset{x \in X}{\text{maximize}} \;\; \bigl( \mu_1(f_1(x)), \ldots, \mu_k(f_k(x)) \bigr),
\tag{3.1}
$$

where $\mu_l(\cdot)$ is the membership function quantifying the fuzzy goal for the $l$th objective function in (2.1). To be more specific, if the decision maker feels that $f_l(x)$ should be less than or equal to at least $f_l^0$, and that $f_l(x) \le f_l^1$ $(\le f_l^0)$ is satisfactory, the shape of a typical membership function is shown in Figure 1 (a nonincreasing function equal to 1 for $f_l(x) \le f_l^1$ and 0 for $f_l(x) \ge f_l^0$).

Figure 1: An example of membership functions.

Since (3.1) is regarded as a fuzzy multiobjective optimization problem, a complete optimal solution that simultaneously minimizes all of the multiple objective functions does not always exist when the objective functions conflict with each other. Thus, instead of a complete optimal solution, as a natural extension of the Pareto optimality concept for ordinary multiobjective programming problems, Sakawa et al. [28, 29] introduced the concept of M-Pareto optimal solutions, defined in terms of membership functions instead of objective functions, where M refers to membership.

Definition 3.1 (M-Pareto optimality). A feasible solution $x^* \in X$ is said to be M-Pareto optimal to a fuzzy multiobjective optimization problem if and only if there does not exist another feasible solution $x \in X$ such that $\mu_l(f_l(x)) \ge \mu_l(f_l(x^*))$, $l = 1, 2, \ldots, k$, and $\mu_j(f_j(x)) > \mu_j(f_j(x^*))$ for at least one $j \in \{1, 2, \ldots, k\}$.
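The dominance test underlying Definition 3.1 can be sketched as follows. This is an illustrative Python sketch, not part of the original paper; it compares two vectors of membership values.

```python
# Sketch of the M-Pareto dominance test of Definition 3.1: x dominates x*
# if x is at least as good in every membership value and strictly better
# in at least one.
def m_pareto_dominates(mu_x, mu_xstar):
    """mu_x, mu_xstar: lists of membership values mu_l(f_l(.))."""
    at_least_as_good = all(a >= b for a, b in zip(mu_x, mu_xstar))
    strictly_better = any(a > b for a, b in zip(mu_x, mu_xstar))
    return at_least_as_good and strictly_better
```

A solution is then M-Pareto optimal exactly when no feasible solution's membership vector dominates its own.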
Introducing an aggregation function $\mu(x)$ for the $k$ membership functions in (3.1), the problem can be rewritten as

$$
\underset{x \in X}{\text{maximize}} \;\; \mu(x),
\tag{3.2}
$$

where the aggregation function $\mu(\cdot)$ represents the degree of satisfaction or preference of the decision maker for the whole of the $k$ fuzzy goals. In the conventional fuzzy approaches, it has been implicitly assumed that the minimum operator is the proper representation of the decision maker's fuzzy preferences. However, it should be emphasized here that this approach is preferable only when the decision maker feels that the minimum operator is appropriate. In other words, in general decision situations, the decision maker does not always use the minimum operator when combining the fuzzy goals and/or constraints. Probably the most crucial problem in (3.2) is the identification of an appropriate aggregation function which well represents the decision maker's fuzzy preferences. If $\mu(\cdot)$ can be explicitly identified, then (3.2) reduces to a standard mathematical programming problem. However, this rarely happens, and as an alternative, an interaction with the decision maker is necessary to find a satisficing solution to (3.1). In order to generate candidates for a satisficing solution which are M-Pareto optimal, the decision maker is asked to specify aspiration levels of achievement for all membership functions, called reference membership levels. For reference membership levels $\bar{\mu}_l$, $l = 1, 2, \ldots, k$, given by the decision maker, the corresponding M-Pareto optimal solution, which is nearest to the requirements in the minimax sense or better than that if the reference membership levels are attainable, is obtained by solving the following augmented minimax problem:

$$
\underset{x \in X}{\text{minimize}} \;\; \max_{l = 1, 2, \ldots, k} \left\{ \bigl( \bar{\mu}_l - \mu_l(f_l(x)) \bigr) + \rho \sum_{j=1}^{k} \bigl( \bar{\mu}_j - \mu_j(f_j(x)) \bigr) \right\},
\tag{3.3}
$$

where $\rho$ is a sufficiently small positive real number.
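The scalarized objective of (3.3) can be sketched in Python as follows (an illustrative sketch, not the authors' implementation; since the augmentation term does not depend on $l$, it can be factored out of the maximum).

```python
# Sketch of the augmented minimax objective (3.3): for reference levels
# mu_bar and attained membership values mu, return the worst deviation
# plus a small multiple rho of the total deviation.
def augmented_minimax(mu_bar, mu, rho=1e-4):
    # deviations of the attained memberships from the reference levels
    devs = [mb - m for mb, m in zip(mu_bar, mu)]
    # the rho-term is constant over l, so it moves outside the max
    return max(devs) + rho * sum(devs)
```

Minimizing this value over $x \in X$ yields the M-Pareto optimal solution nearest (in the augmented minimax sense) to the decision maker's reference levels.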
We can now construct an interactive algorithm to derive a satisficing solution for the decision maker from among the M-Pareto optimal solution set. The procedure of the interactive fuzzy satisficing method is summarized as follows.

3.1. An Interactive Fuzzy Satisficing Method

Step 1. Calculate the individual minimum and maximum of each objective function under the given constraints by solving the following problems:

$$
\underset{x \in X}{\text{minimize}} \;\; f_l(x), \quad l = 1, 2, \ldots, k, \qquad
\underset{x \in X}{\text{maximize}} \;\; f_l(x), \quad l = 1, 2, \ldots, k.
\tag{3.4}
$$

Step 2. By considering the individual minimum and maximum of each objective function, the decision maker subjectively specifies membership functions $\mu_l(f_l(x))$, $l = 1, 2, \ldots, k$, to quantify the fuzzy goals for the objective functions.

Step 3. The decision maker sets initial reference membership levels $\bar{\mu}_l$, $l = 1, 2, \ldots, k$.

Step 4. For the current reference membership levels, solve the augmented minimax problem (3.3) to obtain an M-Pareto optimal solution and the corresponding membership function values.

Step 5. If the decision maker is satisfied with the current levels of the M-Pareto optimal solution, stop; the current M-Pareto optimal solution is the satisficing solution of the decision maker. Otherwise, ask the decision maker to update the current reference membership levels $\bar{\mu}_l$, $l = 1, 2, \ldots, k$, by considering the current values of the membership functions, and return to Step 4.

In the interactive fuzzy satisficing method, it is required to solve the nonlinear integer programming problems with block-angular structures (3.3) and (3.4). It is significant to note that these problems are single-objective nonlinear integer programming problems with block-angular structures. Realizing this difficulty, in the next section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU).

4.
Genetic Algorithms with Decomposition Procedures

As discussed above, in this section we propose genetic algorithms with decomposition procedures using continuous relaxation based on reference solution updating (GADPCRRSU) as an approximate solution method for nonlinear integer programming problems with block-angular structures.

Consider single-objective nonlinear integer programming problems with block-angular structures formulated as

$$
\begin{aligned}
\text{minimize} \quad & f(x) = f\bigl(x^1, \ldots, x^P\bigr)\\
\text{subject to} \quad & g(x) = g\bigl(x^1, \ldots, x^P\bigr) \le 0\\
& h^1\bigl(x^1\bigr) \le 0\\
& \quad\vdots\\
& h^P\bigl(x^P\bigr) \le 0\\
& x_j^J \in \bigl\{0, 1, \ldots, V_j^J\bigr\}, \quad J = 1, 2, \ldots, P, \; j = 1, 2, \ldots, n_J.
\end{aligned}
\tag{4.1}
$$

Observe that this problem can be viewed as a single-objective version of the original problem (2.1). Sakawa et al. [24] have already studied genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) for ordinary nonlinear integer programming problems formulated as

$$
\begin{aligned}
\text{minimize} \quad & f(x)\\
\text{subject to} \quad & g_i(x) \le 0, \quad i = 1, 2, \ldots, m,\\
& x_j \in \{0, 1, \ldots, V_j\}, \quad j = 1, 2, \ldots, n,
\end{aligned}
\tag{4.2}
$$

where an individual is represented by a double string. In a double string, as shown in Figure 2, for each position $j$, $\nu(j) \in \{1, 2, \ldots, n\}$ represents the index of a variable in the solution space, while $y_{\nu(j)}$, $j = 1, 2, \ldots, n$, gives the value, taken from $\{0, 1, \ldots, V_{\nu(j)}\}$, of the $\nu(j)$th variable $x_{\nu(j)}$.

Figure 2: Double string.

In view of the block-angular structure of (4.1), it seems quite reasonable to define an individual $S$ as an aggregation of $P$ subindividuals $s^J$, $J = 1, 2, \ldots, P$, each corresponding to a block constraint $h^J(x^J) \le 0$, as shown in Figure 3. If these subindividuals are represented by double strings, for each subindividual $s^J$, $J = 1, 2, \ldots, P$, a phenotype subsolution satisfying the corresponding block constraint can be obtained by the decoding algorithm in GADSCRRSU. Unfortunately, however, the simple combination of these subsolutions does not always satisfy the coupling constraints $g(x) \le 0$.
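To make the roles of the two constraint families concrete, the following minimal Python sketch (illustrative names and callables, not the authors' implementation) checks a candidate solution block by block before testing the coupling constraints over the full vector:

```python
# Hedged sketch: feasibility of x = (x^1, ..., x^P) for a block-angular
# problem. Each block constraint function h^J touches only x^J; only the
# coupling constraints g need the assembled vector.
def is_feasible(blocks, g, h_list, upper_bounds):
    # integrality and bounds: x_j^J in {0, 1, ..., V_j^J}
    for xJ, VJ in zip(blocks, upper_bounds):
        for xj, Vj in zip(xJ, VJ):
            if not (isinstance(xj, int) and 0 <= xj <= Vj):
                return False
    # block constraints, checked one block at a time
    for xJ, hJ in zip(blocks, h_list):
        if any(v > 0 for v in hJ(xJ)):
            return False
    # coupling constraints over the whole decision vector
    full = [xj for xJ in blocks for xj in xJ]
    return all(v <= 0 for v in g(full))
```

The per-block checks can be run independently for each block, which is exactly the structure the decomposition procedures exploit; only the final coupling test needs all blocks at once.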
To cope with this problem, a triple string representation, as shown in Figure 4, and a corresponding decoding algorithm are presented as an extension of the double string representation and its decoding algorithm.

Figure 3: Division of an individual into P subindividuals.

Figure 4: Triple string.

By using the proposed representation and decoding algorithm, a phenotype solution satisfying both the block constraints and the coupling constraints can be obtained for each individual $S = (s^1, s^2, \ldots, s^P)$. To be more specific, in a triple string representing a subindividual corresponding to the $J$th block, $r^J \in \{1, 2, \ldots, P\}$ represents the priority of the $J$th block, each $\nu^J(j) \in \{1, 2, \ldots, n_J\}$ is the index of a variable in the phenotype, and each $y_{\nu^J(j)}$ takes an integer value in $\{0, 1, \ldots, V_{\nu^J(j)}\}$. As in GADSCRRSU, a feasible solution, called a reference solution, is necessary for the decoding of triple strings. In the proposed GADPCRRSU, the reference solution is obtained as a solution $x^*$ to a minimization problem of constraint violation. In the following, we summarize the decoding algorithm for triple strings using a reference solution $x^*$, where $N$ is the number of individuals and $I$ is a counter for the individual number.

4.1. Decoding Algorithm for Triple Strings

Step 1. Let $I := 1$.

Step 2. If $1 \le I \le N/2$, go to Step 3. Otherwise, go to Step 11.

Step 3. Let $x := 0$, $r := 1$, $L := 0$.

Step 4. Find $J \in \{1, 2, \ldots, P\}$ such that $r^J = r$. Let $j := 1$, $l := 0$.

Step 5. Let $x^J_{\nu^J(j)} := y_{\nu^J(j)}$.

Step 6. If $g(x) \le 0$ and $h^J(x^J) \le 0$, let $L := r$, $l := j$, $j := j + 1$ and go to Step 7. Otherwise, let $j := j + 1$ and go to Step 7.

Step 7. If $j > n_J$, let $r := r + 1$ and go to Step 8. Otherwise, go to Step 5.

Step 8. If $r > P$, go to Step 9. Otherwise, go to Step 4.
Step 9. If $L = 0$ and $l = 0$, go to Step 11. Otherwise, go to Step 10.

Step 10. For $r = 1, \ldots, L - 1$, find $J(r)$ such that $r^{J(r)} = r$ and let $x^{J(r)}_{\nu^{J(r)}(j)} := y_{\nu^{J(r)}(j)}$, $j = 1, 2, \ldots, n_{J(r)}$. Furthermore, find $J(L)$ such that $r^{J(L)} = L$ and let $x^{J(L)}_{\nu^{J(L)}(j)} := y_{\nu^{J(L)}(j)}$, $j = 1, 2, \ldots, l$. The remaining elements of $x$ are set to 0. Terminate the decoding process.

Step 11. Let $x := x^*$, $r := 1$ and go to Step 12.

Step 12. Find $J \in \{1, 2, \ldots, P\}$ such that $r^J = r$ and let $j := 1$.

Step 13. Let $x^J_{\nu^J(j)} := y_{\nu^J(j)}$. If $y_{\nu^J(j)} = x^{*J}_{\nu^J(j)}$, let $j := j + 1$ and go to Step 15. If $y_{\nu^J(j)} \ne x^{*J}_{\nu^J(j)}$, go to Step 14.

Step 14. If $g(x) \le 0$ and $h^J(x^J) \le 0$, let $j := j + 1$ and go to Step 15. Otherwise, let $x^J_{\nu^J(j)} := x^{*J}_{\nu^J(j)}$, $j := j + 1$ and go to Step 15.

Step 15. If $j \le n_J$, go to Step 13. Otherwise, let $r := r + 1$ and go to Step 16.

Step 16. If $r \le P$, go to Step 12. Otherwise, let $I := I + 1$ and go to Step 17.

Step 17. If $I \le N$, go to Step 2. Otherwise, terminate the decoding process.

It is expected that an optimal solution to the continuous relaxation problem is a good approximation of an optimal solution of the original nonlinear integer programming problem. In the proposed method, after obtaining an approximate optimal solution $\bar{x}^J_j$, $J = 1, 2, \ldots, P$, $j = 1, 2, \ldots, n_J$, to the continuous relaxation problem, we suppose that each decision variable $x^J_j$ takes exactly or approximately the same value that $\bar{x}^J_j$ does. In particular, decision variables $x^J_j$ such that $\bar{x}^J_j = 0$ are very likely to be equal to 0. To be more specific, the information of the approximate optimal solution $\bar{x}$ to the continuous relaxation problem of (4.1) is used when generating the initial population and performing mutation. In order to generate the initial population, when we determine the value of each $y_{\nu^J(j)}$ in the lowest row of a triple string, we use a Gaussian random variable with mean $\bar{x}^J_{\nu^J(j)}$ and variance $\sigma^2$.
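This sampling scheme can be sketched in Python as follows. The rounding and clipping of the Gaussian draw to the integer range $\{0, \ldots, V\}$ are our assumptions about how non-integer or out-of-range draws are handled, since the paper does not spell this out.

```python
import random

# Illustrative sketch: draw an integer gene value near the continuous
# relaxation optimum x_bar, as in the proposed initialization (variance
# sigma^2) and mutation (variance tau^2). The draw is rounded and clipped
# to the admissible set {0, 1, ..., V} (our assumption).
def sample_gene(x_bar, sigma, V, rng=random):
    v = round(rng.gauss(x_bar, sigma))
    return max(0, min(V, int(v)))
```

For example, with `x_bar = 2.3`, `sigma = 2.0`, and `V = 30`, most draws land near 2 and none fall outside $\{0, \ldots, 30\}$.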
In mutation, when we change the value of $y_{\nu^J(j)}$ for some $J$, $j$, we also use a Gaussian random variable, with mean $\bar{x}^J_{\nu^J(j)}$ and variance $\tau^2$.

Various kinds of reproduction methods have been proposed. Among them, Sakawa et al. [14] investigated the performance of six reproduction operators, that is, ranking selection, elitist ranking selection, expected value selection, elitist expected value selection, roulette wheel selection, and elitist roulette wheel selection, and confirmed that elitist expected value selection is relatively efficient for multiobjective 0-1 programming problems incorporating the fuzzy goals of the decision maker. Thereby, elitist expected value selection, that is, elitism and expected value selection combined together, is adopted. Elitism and expected value selection are summarized as follows.

Elitism. If the fitness of an individual in the past populations is larger than that of every individual in the current population, preserve this individual in the current generation.

Expected Value Selection. For a population consisting of $N$ individuals, the expected number of copies of each $s^J_n$, $J = 1, 2, \ldots, P$, a subindividual of the $n$th individual $S_n$, in the next population is given by

$$
N_n = \frac{f(S_n)}{\sum_{n=1}^{N} f(S_n)} \times N.
\tag{4.3}
$$

The integral part $\lfloor N_n \rfloor$ of $N_n$ gives the definite number of copies of $s^J_n$ preserved in the next population, while, using the decimal part $N_n - \lfloor N_n \rfloor$, the probability of preserving a further copy of $s^J_n$, $J = 1, 2, \ldots, P$, in the next population is determined by

$$
\frac{N_n - \lfloor N_n \rfloor}{\sum_{n=1}^{N} \bigl( N_n - \lfloor N_n \rfloor \bigr)}.
\tag{4.4}
$$

If a single-point or multipoint crossover is directly applied to the upper or middle string of triple-string individuals, the $k$th element of the string of an offspring may take the same number as the $k'$th element. The same violation occurs in solving traveling salesman problems or scheduling problems through genetic algorithms.
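Returning to the selection step, one plausible reading of expected value selection following (4.3) and (4.4) can be sketched in Python (illustrative, not the authors' code; fitness values are assumed positive):

```python
import math
import random

# Sketch of expected value selection (4.3)-(4.4): each individual receives
# floor(N_n) guaranteed copies, and the remaining slots are filled by
# sampling with the fractional parts of N_n as weights.
def expected_value_selection(population, fitness, rng=random):
    N = len(population)
    total = sum(fitness)                       # assumed positive
    expected = [f / total * N for f in fitness]  # the N_n of (4.3)
    selected = []
    for ind, e in zip(population, expected):
        selected.extend([ind] * math.floor(e))   # integral parts
    fracs = [e - math.floor(e) for e in expected]  # decimal parts, (4.4)
    while len(selected) < N:
        selected.append(rng.choices(population, weights=fracs)[0])
    return selected
```

If all $N_n$ happen to be integers, the next population is fully determined by the integral parts and no sampling occurs.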
In order to avoid this violation, a crossover method called partially matched crossover (PMX) is modified to be suitable for triple strings. PMX is applied as usual to upper strings, whereas, for the pair of middle and lower strings, PMX for double strings [14] is applied to every subindividual. It is now appropriate to present the detailed procedures of the crossover methods for triple strings.

4.2. Partially Matched Crossover (PMX) for the Upper String

Let

$$
X = \bigl( r^1_X, r^2_X, \ldots, r^P_X \bigr)
\tag{4.5}
$$

be the upper string of an individual and let

$$
Y = \bigl( r^1_Y, r^2_Y, \ldots, r^P_Y \bigr)
\tag{4.6}
$$

be the upper string of another individual. Prepare copies $X'$ and $Y'$ of $X$ and $Y$, respectively.

Step 1. Choose two crossover points $h$ and $k$ ($h < k$) at random on these strings.

Step 2. Set $i := h$ and repeat the following procedures.

(a) Find $J$ such that $r^J_{X'} = r^i_Y$. Then, interchange $r^J_{X'}$ with $r^i_{X'}$ and set $i := i + 1$.

(b) If $i > k$, stop and let $X'$ be the offspring of $X$. Otherwise, return to (a).

Step 2 is carried out for $Y'$ in the same manner, as illustrated in Figure 5.

4.3. Partially Matched Crossover (PMX) for the Double String

Let

$$
X = \begin{pmatrix} \nu^J_X(1), & \nu^J_X(2), & \ldots, & \nu^J_X(n_J) \\ y_{\nu^J_X(1)}, & y_{\nu^J_X(2)}, & \ldots, & y_{\nu^J_X(n_J)} \end{pmatrix}
\tag{4.7}
$$

be the middle and lower parts of a subindividual in the $J$th subpopulation, and let

$$
Y = \begin{pmatrix} \nu^J_Y(1), & \nu^J_Y(2), & \ldots, & \nu^J_Y(n_J) \\ y_{\nu^J_Y(1)}, & y_{\nu^J_Y(2)}, & \ldots, & y_{\nu^J_Y(n_J)} \end{pmatrix}
\tag{4.8}
$$

be the middle and lower parts of another subindividual in the $J$th subpopulation. First, prepare copies $X'$ and $Y'$ of $X$ and $Y$, respectively.

Step 1. Choose two crossover points $h$ and $k$ ($h < k$) at random on these strings.

Step 2. Set $i := h$ and repeat the following procedures.

(a) Find $i'$ such that $\nu^J_{X'}(i') = \nu^J_Y(i)$. Then, interchange the column $(\nu^J_{X'}(i), y_{\nu^J_{X'}(i)})^T$ with $(\nu^J_{X'}(i'), y_{\nu^J_{X'}(i')})^T$ and set $i := i + 1$.

(b) If $i > k$, stop. Otherwise, return to (a).

Step 3. Replace the lower-string part from $h$ to $k$ of $X'$ with that of $Y$ and let $X'$ be the offspring of $X$.

This procedure is carried out for $Y'$ and $X$ in the same manner, as illustrated in Figure 6.
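A compact Python sketch of PMX for the upper (permutation) string follows Step 2 above: positions $h$ through $k$ of the offspring inherit the second parent's values, and clashes are repaired by swapping within a copy of the first parent so the result stays a valid permutation. The 0-indexed crossover points are our convention.

```python
# Sketch of PMX for the upper string: child keeps permutation validity by
# swapping, never overwriting. h and k are 0-indexed, inclusive.
def pmx_upper(X, Y, h, k):
    child = list(X)                  # work on a copy of parent X
    for i in range(h, k + 1):
        j = child.index(Y[i])        # position in child holding Y's value
        child[i], child[j] = child[j], child[i]
    return child
```

After the loop, `child[h:k+1]` equals `Y[h:k+1]`, and the remaining positions hold the displaced values of `X`; the analogous operator for the double string swaps whole (index, value) columns instead of single entries.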
It is considered that mutation plays the role of local random search in genetic algorithms. For the lower string of a triple string only, mutation of the bit-reverse type is adopted and applied to every subindividual. For the upper string, and for the middle and lower strings of the triple string, inversion, defined by the following algorithm, is adopted.

Step 1. After determining two inversion points $h$ and $k$ ($h < k$), pick out the part of the string from $h$ to $k$.

Step 2. Arrange the substring in reverse order.

Step 3. Put the rearranged substring back in the string.

Figure 7 illustrates examples of mutation and inversion.

Now we are ready to introduce the genetic algorithm with decomposition procedures as an approximate solution method for nonlinear integer programming problems with block-angular structures. The outline of the procedures is shown in Figure 8.

Figure 5: An example of PMX for the upper string.

4.4. Computational Procedures

Step 1. Set the generation counter $t = 0$ and determine the parameter values for the population size $N$, the probability of crossover $p_c$, the probability of mutation $p_m$, the probability of inversion $p_i$, the variances $\sigma^2$ and $\tau^2$, the minimal search generation $I_{\min}$, and the maximal search generation $I_{\max}$.

Step 2. Generate $N$ individuals, whose subindividuals are of triple string type, at random.

Step 3. Evaluate each individual (subindividual) on the basis of the phenotype obtained by the decoding algorithm, and calculate the mean fitness $f_{\text{mean}}$ and the maximal fitness $f_{\max}$ of the population. If $t > I_{\min}$ and $(f_{\max} - f_{\text{mean}})/f_{\max} < \varepsilon$, or if $t > I_{\max}$, regard an individual with the maximal fitness as an optimal individual and terminate this program.
Otherwise, set $t := t + 1$ and proceed to Step 4.

Step 4. Apply the reproduction operator to all subpopulations $\{s_n^J \mid n = 1, 2, \ldots, N\}$, $J = 1, 2, \ldots, P$.

Step 5. Apply the PMX for double strings to the middle and lower parts of every subindividual according to the probability of crossover $p_c$.

Step 6. Apply the mutation operator of the bit-reverse type to the lower part of every subindividual according to the probability of mutation $p_m$, and apply the inversion operator to the middle and lower parts of every subindividual according to the probability of inversion $p_i$.

Figure 6: An example of PMX for double string.

Figure 7: Examples of mutation.

Step 7. Apply the PMX for upper strings according to $p_c$.

Step 8. Apply the inversion operator for upper strings according to $p_i$ and return to Step 3.

It should be noted here that, in this algorithm, the operations in Steps 4, 5, and 6 can be applied to every subindividual of all individuals independently. As a result, it is theoretically possible to reduce the amount of working memory needed to solve the problem and to carry out parallel processing.
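Steps 1–8 above amount to a standard generational loop with the stopping rule $(f_{\max} - f_{\mathrm{mean}})/f_{\max} < \varepsilon$. The following Python sketch shows that loop together with the inversion operator defined earlier; it is an illustration under our own naming, with reproduction, crossover, mutation, and fitness evaluation left as caller-supplied placeholders rather than the paper's exact operators.

```python
import random

def inversion(s, rng=random):
    """Inversion operator: pick two points h < k and reverse s[h..k]."""
    h, k = sorted(rng.sample(range(len(s)), 2))
    return s[:h] + s[h:k + 1][::-1] + s[k + 1:]

def run_ga(init_pop, evaluate, reproduce, crossover, mutate,
           i_min, i_max, eps):
    """Skeleton of Steps 1-8: evolve until the relative gap between the
    maximal and mean fitness falls below eps (after at least i_min
    generations) or i_max generations have elapsed.  Returns the best
    individual found; assumes strictly positive fitness values."""
    pop = init_pop()                                   # Steps 1-2
    t = 0
    while True:
        fits = [evaluate(ind) for ind in pop]          # Step 3
        f_max, f_mean = max(fits), sum(fits) / len(fits)
        if (t > i_min and (f_max - f_mean) / f_max < eps) or t > i_max:
            return pop[fits.index(f_max)]
        t += 1
        pop = reproduce(pop, fits)                     # Step 4
        pop = crossover(pop)                           # Steps 5 and 7
        pop = mutate(pop)                              # Steps 6 and 8
```

For instance, `mutate` could apply `inversion` to each string with probability $p_i$, mirroring Steps 6 and 8.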
Figure 8: The outline of procedures.

Table 1: The whole process of interaction.

Interaction                        1st         2nd         3rd
$\bar{\mu}_1$                      1           1           1
$\bar{\mu}_2$                      1           0.900       0.900
$\bar{\mu}_3$                      1           1           0.900
$\mu_1(f_1(x))$                    0.496       0.552       0.554
$\mu_2(f_2(x))$                    0.497       0.450       0.474
$\mu_3(f_3(x))$                    0.491       0.558       0.524
$f_1(x)$                           1500050     1335423     1326906
$f_2(x)$                           -1629427    -1475077    -1553513
$f_3(x)$                           158226      86012       123127
Computation time (sec)
  GADPCRRSU (proposed method)      26.7        32.8        24.2
  GADSCRRSU (no decomposition)     539.6       584.7       503.3

Figure 9: The comparison of computation time (computation time in seconds versus the number of decision variables, 10 to 50, for GADSCRRSU and GADPCRRSU).

5. Numerical Examples

In order to demonstrate the feasibility and efficiency of the proposed method, consider the following multiobjective quadratic integer programming problem with block-angular structures:

minimize $\;f_l(x) = \sum_{J=1}^{P} \left( c_l^J x^J + (x^J)^T C_l^J x^J \right), \quad l = 1, 2, \ldots, k,$
subject to $\;g_i(x) = \sum_{J=1}^{P} \left( a_i^J x^J + (x^J)^T A_i^J x^J \right) - b_i^0 \le 0, \quad i = 1, 2, \ldots, m_0,$
$\quad h_i^J(x^J) = d_i^J x^J + (x^J)^T D_i^J x^J - b_i^J \le 0, \quad J = 1, 2, \ldots, P, \; i = 1, 2, \ldots, m_J,$
$\quad x_j^J \in \{0, 1, \ldots, V_j^J\}, \quad J = 1, 2, \ldots, P, \; j = 1, 2, \ldots, n_J. \tag{5.1}$

For comparison, genetic algorithms with double strings using continuous relaxation based on reference solution updating (GADSCRRSU) [24] are also adopted. It is significant to note here that decomposition procedures are not involved in GADSCRRSU. For this problem, we set $k = 3$, $P = 5$, $n_1 = n_2 = \cdots = n_5 = 10$, $m_0 = 2$, $m_1 = m_2 = \cdots = m_5 = 5$, and $V_j^J = 30$, $J = 1, 2, \ldots, 5$, $j = 1, 2, \ldots, 10$.
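Each objective in (5.1) is a sum of independent per-block terms $c_l^J x^J + (x^J)^T C_l^J x^J$; this additive separability across blocks is precisely what the decomposition procedures exploit. A plain-Python sketch of such a block-wise evaluation follows; the function name and data layout are ours, not the paper's.

```python
def block_objective(xs, cs, Cs):
    """Evaluate f(x) = sum over blocks J of  c^J . x^J + (x^J)^T C^J x^J.

    xs, cs are lists of per-block vectors (lists of numbers) and Cs is a
    list of per-block square matrices (lists of lists), so each block's
    contribution can be computed, and in principle parallelized or
    cached, independently of the others.
    """
    total = 0.0
    for x, c, C in zip(xs, cs, Cs):
        linear = sum(ci * xi for ci, xi in zip(c, x))
        quadratic = sum(x[i] * C[i][j] * x[j]
                        for i in range(len(x)) for j in range(len(x)))
        total += linear + quadratic
    return total
```

The block constraints $h_i^J$ have the same per-block shape, so a subpopulation for block $J$ can be evaluated against them without touching any other block's variables.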
The elements of $c_l^J$, $C_l^J$, $a_i^J$, $A_i^J$, $d_i^J$, and $D_i^J$ in the objectives and constraints of the above problem are determined by uniform random numbers on $[-100, 100]$, and those of $b$ in the constraints are determined so that the feasible region is not empty. Numerical experiments are performed on a personal computer (CPU: Intel Celeron processor, 900 MHz; memory: 256 MB; compiler: Microsoft Visual C++ 6.0).

Parameter values of GADPCRRSU are set as follows: population size $N = 100$, crossover rate $p_c = 0.9$, mutation rate $p_m = 0.05$, inversion rate $p_i = 0.05$, variances $\sigma = 2.0$ and $\tau = 3.0$, minimal search generation number $I_{\min} = 500$, and maximal search generation number $I_{\max} = 1000$.

In this numerical example, for the sake of simplicity, the linear membership function

$$\mu_l(f_l(x)) = \begin{cases} 1, & f_l(x) < f_{l,1}, \\ \dfrac{f_l(x) - f_{l,0}}{f_{l,1} - f_{l,0}}, & f_{l,1} \le f_l(x) \le f_{l,0}, \\ 0, & f_l(x) > f_{l,0} \end{cases} \tag{5.2}$$

is adopted, and the parameter values are determined as

$$f_{l,1} = f_{l,\min} = f_l(x_{\min}^l) = \min_{x \in X} f_l(x), \quad l = 1, 2, \ldots, k,$$
$$f_{l,0} = \max\left\{ f_l(x_{\min}^1), \ldots, f_l(x_{\min}^{l-1}), f_l(x_{\min}^{l+1}), \ldots, f_l(x_{\min}^k) \right\}, \quad l = 1, 2, \ldots, k. \tag{5.3}$$

For the initial reference membership levels $(1, 1, 1)$, the augmented minimax problem (3.3) is solved. The obtained solution is shown in the second column of Table 1. Assume that the hypothetical decision maker is not satisfied with the current solution and feels that $\mu_1(f_1(x))$ and $\mu_3(f_3(x))$ should be improved at the expense of $\mu_2(f_2(x))$. The decision maker then updates the reference membership levels to $(1, 0.900, 1)$. The result for the updated reference membership levels is shown in the third column of Table 1. Since the decision maker is still not satisfied with the current solution, he updates the reference membership levels to $(1, 0.900, 0.900)$ to obtain a better value of $\mu_1(f_1(x))$. A similar procedure continues in this way and, in this example, a satisficing solution for the decision maker is derived at the third interaction.
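The linear membership function (5.2) is 1 below the best objective value $f_{l,1}$, 0 above the worst value $f_{l,0}$, and linear in between. A small sketch, assuming minimization so that $f_{l,1} < f_{l,0}$; the function name is ours.

```python
def linear_membership(f, f1, f0):
    """Linear membership (5.2) for a minimized objective value f:
    returns 1 below the best value f1, 0 above the worst value f0,
    and interpolates linearly as (f - f0) / (f1 - f0) in between."""
    if f < f1:
        return 1.0
    if f > f0:
        return 0.0
    return (f - f0) / (f1 - f0)
```

For example, halfway between $f_{l,1}$ and $f_{l,0}$ the degree of satisfaction is 0.5, matching the interpretation of $\mu_l$ as a fuzzy goal.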
Table 1 shows that the proposed interactive method using GADPCRRSU, with decomposition procedures, can find an approximate optimal solution at each interaction in a shorter time than the method using GADSCRRSU, without decomposition procedures. Furthermore, in order to see how the computation time changes with the increasing size of block-angular nonlinear integer programming problems, typical problems with 10, 20, 30, 40, and 50 variables are solved by GADPCRRSU and GADSCRRSU. As depicted in Figure 9, the computation time of the proposed GADPCRRSU increases almost linearly with the size of the problem, while that of GADSCRRSU increases rapidly and nonlinearly.

6. Conclusions

In this paper, as a typical mathematical model of large-scale discrete systems optimization, we considered multiobjective nonlinear integer programming problems with block-angular structures. Taking into account the vague nature of the decision maker's judgments, fuzzy goals of the decision maker were introduced, and the problem was interpreted as maximizing an overall degree of satisfaction with the multiple fuzzy goals. An interactive fuzzy satisficing method was developed for deriving a satisficing solution for the decision maker. Realizing the block-angular structures that can be exploited, we also proposed genetic algorithms with decomposition procedures for solving nonlinear integer programming problems with block-angular structures. Illustrative numerical examples were provided to demonstrate the feasibility and efficiency of the proposed method. Extensions to multiobjective two-level integer programming problems with block-angular structures will be considered elsewhere. Also, extensions to stochastic multiobjective two-level integer programming problems with block-angular structures will be required in the near future.

References

[1] J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[2] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
[3] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Artificial Intelligence, Springer, Berlin, Germany, 1992.
[4] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 2nd edition, 1994.
[5] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 3rd edition, 1996.
[6] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, The Clarendon Press, Oxford University Press, New York, NY, USA, 1996.
[7] T. Bäck, D. B. Fogel, and Z. Michalewicz, Handbook of Evolutionary Computation, Institute of Physics, Bristol, UK, 1997.
[8] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley-Interscience Series in Systems and Optimization, John Wiley & Sons, Chichester, UK, 2001.
[9] M. Sakawa, Large Scale Interactive Fuzzy Multiobjective Programming, Physica, Heidelberg, Germany.
[10] M. Sakawa, Genetic Algorithms and Fuzzy Multiobjective Optimization, vol. 14 of Operations Research/Computer Science Interfaces Series, Kluwer Academic Publishers, Boston, Mass, USA, 2002.
[11] C. A. C. Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic Publishers, New York, NY, USA, 2002.
[12] A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing, Natural Computing Series, Springer, Berlin, Germany, 2003.
[13] Z. Hua and F. Huang, "A variable-grouping based genetic algorithm for large-scale integer programming," Information Sciences, vol. 176, no. 19, pp. 2869–2885, 2006.
[14] M. Sakawa, K. Kato, H. Sunada, and T. Shibano, "Fuzzy programming for multiobjective 0-1 programming problems through revised genetic algorithms," European Journal of Operational Research, vol. 97, pp. 149–158, 1997.
[15] M. Sakawa, K. Kato, S. Ushiro, and K. Ooura, "Fuzzy programming for general multiobjective 0-1 programming problems through genetic algorithms with double strings," in Proceedings of IEEE International Fuzzy Systems Conference, vol. 3, pp. 1522–1527, 1999.
[16] M. Sakawa, K. Kato, T. Shibano, and K. Hirose, "Fuzzy multiobjective integer programs through genetic algorithms using double string representation and information about solutions of continuous relaxation problems," in Proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 967–972, 1999.
[17] M. Sakawa and K. Kato, "Integer programming through genetic algorithms with double strings based on reference solution updating," in Proceedings of IEEE International Conference on Industrial Electronics, Control and Instrumentation, pp. 2915–2920, 2000.
[18] P. Hansen, "Quadratic zero-one programming by implicit enumeration," in Numerical Methods for Non-Linear Optimization (Conf., Univ. Dundee, Dundee, 1971), F. A. Lootsma, Ed., pp. 265–278, Academic Press, London, UK, 1972.
[19] J. Li, "A bound heuristic algorithm for solving reliability redundancy optimization," Microelectronics and Reliability, vol. 36, pp. 335–339, 1996.
[20] D. Li, J. Wang, and X. L. Sun, "Computing exact solution to nonlinear integer programming: convergent Lagrangian and objective level cut method," Journal of Global Optimization, vol. 39, no. 1, pp. 127–154, 2007.
[21] R. H. Nickel, I. Mikolic-Torreira, and J. W. Tolle, "Computing aviation sparing policies: solving a large nonlinear integer program," Computational Optimization and Applications, vol. 35, no. 1, pp. 109–126.
[22] M. S. Sabbagh, "A partial enumeration algorithm for pure nonlinear integer programming," Applied Mathematical Modelling, vol. 32, no. 12, pp. 2560–2569, 2008.
[23] W. Zhu and H. Fan, "A discrete dynamic convexized method for nonlinear integer programming," Journal of Computational and Applied Mathematics, vol. 223, no. 1, pp. 356–373, 2009.
[24] M. Sakawa, K. Kato, M. A. K. Azad, and R. Watanabe, "A genetic algorithm with double string for nonlinear integer programming problems," in Proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 3281–3286, 2005.
[25] L. S. Lasdon, Optimization Theory for Large Systems, Macmillan, New York, NY, USA, 1970.
[26] K. Kato and M. Sakawa, "Genetic algorithms with decomposition procedures for multidimensional 0-1 knapsack problems with block angular structures," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 33, pp. 410–419, 2003.
[27] K. M. Bretthauer, B. Shetty, and S. Syam, "A specially structured nonlinear integer resource allocation problem," Naval Research Logistics, vol. 50, no. 7, pp. 770–792, 2003.
[28] M. Sakawa, H. Yano, and T. Yumine, "An interactive fuzzy satisficing method for multiobjective linear-programming problems and its application," IEEE Transactions on Systems, Man, and Cybernetics, vol. 17, no. 4, pp. 654–661, 1987.
[29] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization, Applied Information Technology, Plenum Press, New York, NY, USA, 1993.
[30] H.-J. Zimmermann, "Fuzzy programming and linear programming with several objective functions," Fuzzy Sets and Systems, vol. 1, no. 1, pp. 45–55, 1978.
Published: Aug 16, 2009
