
Artificial Bee Colony (ABC) Algorithm for Constrained Optimization Improved with Genetic Operators

Nebojsa Bacanin
Megatrend University Belgrade, Faculty of Computer Science
Bul. Umetnosti 29, 11070 N. Belgrade, Serbia

Milan Tuba
Megatrend University Belgrade, Faculty of Computer Science
Bul. Umetnosti 29, 11070 N. Belgrade, Serbia

Abstract: Artificial bee colony (ABC) is a relatively new swarm-intelligence-based metaheuristic. It was successfully applied to unconstrained optimization problems and was later adjusted for constrained problems as well. In this paper we introduce modifications to the ABC algorithm for constrained optimization problems that improve the performance of the algorithm. The modifications are based on genetic algorithm (GA) operators and are applied to the creation of new candidate solutions. We implemented our modified algorithm and tested it on 13 standard benchmark functions. The results were compared to the results of the latest (2011) Karaboga and Akay's ABC algorithm and other state-of-the-art algorithms, where our modified algorithm showed improved performance with respect to best solutions and even more so with respect to mean solutions.

Keywords: Artificial bee colony (ABC), Constrained optimization, Swarm intelligence, Nature inspired metaheuristics.

CITE THIS PAPER AS:
Nebojsa BACANIN, Milan TUBA, Artificial Bee Colony (ABC) Algorithm for Constrained Optimization Improved with Genetic Operators, Studies in Informatics and Control, ISSN 1220-1766, vol. 21 (2), pp. 137-146, 2012. https://doi.org/10.24846/v21i2y201203

1. Introduction

Optimization Problems

Optimization is one of the most applicable areas in mathematics and computer science since most real-life problems can be described as some kind of optimization problem. The types of mathematical relationships between the objective function, potential constraints and decision variables determine how difficult the particular problem is. Hard optimization problems can be combinatorial (discrete) or continuous (global optimization), where continuous problems can be constrained or unconstrained (bound constrained).

The nonlinear constrained optimization problem in the continuous space can be formulated as follows:

$\min f(\mathbf{x}), \quad \mathbf{x} = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$   (1)

where $\mathbf{x} \in F \subseteq S$. The search space $S$ is an $n$-dimensional hyper-rectangular space in $\mathbb{R}^n$ defined by lower and upper bounds for the variables:

$l_i \le x_i \le u_i, \quad 1 \le i \le n$   (2)

and the feasible region $F \subseteq S$ is defined by a set of $m$ linear or nonlinear constraints:

$g_j(\mathbf{x}) \le 0, \quad \text{for } j = 1, \ldots, q$
$h_j(\mathbf{x}) = 0, \quad \text{for } j = q+1, \ldots, m$   (3)

where q is the number of inequality constraints and m-q is the number of equality constraints.
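To make the formulation concrete, the following minimal Python sketch encodes a small constrained problem in the form of Eqs. (1)-(3). The objective, constraints and bounds are illustrative assumptions chosen for readability; they are not one of the g benchmark functions used later in the paper.

```python
import numpy as np

# Illustrative problem in the form of Eqs. (1)-(3):
# minimize f(x) subject to q inequality constraints g_j(x) <= 0,
# m - q equality constraints h_j(x) = 0, and box bounds l_i <= x_i <= u_i.

def f(x):
    # toy objective (assumption, not a benchmark function)
    return float(np.sum(x ** 2))

def g(x):
    # q = 1 inequality constraint: x1 + x2 - 1 <= 0
    return np.array([x[0] + x[1] - 1.0])

def h(x):
    # m - q = 1 equality constraint: x1 - x2 = 0
    return np.array([x[0] - x[1]])

lower = np.array([-5.0, -5.0])   # l_i in Eq. (2)
upper = np.array([5.0, 5.0])     # u_i in Eq. (2)

def in_search_space(x):
    # membership in the hyper-rectangular search space S
    return bool(np.all((lower <= x) & (x <= upper)))
```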

Most optimization algorithms start with random, infeasible solutions in the initialization phase, with the expectation that after some number of iterations these solutions will reach the feasible region. However, equality constraints pose a difficult issue since their presence makes the feasible space very small compared to the entire search space. The equality constraints can be replaced by inequality constraints with some small violation limit $\varepsilon > 0$ [1]:

$|h_j(\mathbf{x})| - \varepsilon \le 0, \quad \text{for } j = q+1, \ldots, m$   (4)

The quality of the results depends on the choice of the violation limit $\varepsilon$. If it is selected too small, the algorithm may not find feasible solutions, and if the tolerance $\varepsilon$ is too large, the results may be far from the feasible region.
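As a minimal sketch of Eq. (4), an equality constraint can be checked through its $\varepsilon$-relaxed inequality form; the helper below (a hypothetical name, in plain Python) returns zero for a satisfied constraint and the remaining amount of violation otherwise.

```python
def relaxed_equality_violation(h_values, eps):
    # Eq. (4): |h_j(x)| - eps <= 0; positive entries measure how much
    # each relaxed equality constraint is still violated
    return [max(abs(h_j) - eps, 0.0) for h_j in h_values]
```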

Promising approaches for handling equality constraints include dynamic, self-adaptive tolerance adjustment [2]. The process should start with a large violation value $\varepsilon$, which is gradually decreased through the cycles of the algorithm. A dynamic setting of the violation value $\varepsilon$ can be defined as follows:

$\varepsilon(t+1) = \dfrac{\varepsilon(t)}{dec}$   (5)

where $t$ is the current cycle and $dec > 1$ is the decrease rate applied in each cycle.
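A minimal sketch of the dynamic tolerance of Eq. (5) is shown below; the initial tolerance, the decrease rate and the lower floor are illustrative assumptions, not values prescribed by this paper.

```python
def update_tolerance(eps_t, dec):
    # Eq. (5): eps(t+1) = eps(t) / dec, with dec > 1
    return eps_t / dec

# example schedule: start with a large tolerance and shrink it every cycle,
# keeping a small floor so the tolerance does not vanish numerically
eps, dec, floor = 1.0, 1.002, 1e-4
for cycle in range(5000):
    eps = max(update_tolerance(eps, dec), floor)
```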

Metaheuristics

Many metaheuristic algorithms have been developed recently for solving optimization problems from both domains, numerical and combinatorial. They include population-based, iterative, stochastic, deterministic and other approaches.

One of the oldest metaheuristics for the global optimization problem is simulated annealing (SA), which is recognized as a generic probabilistic method. SA can be applied to many practical industrial problems, such as the part type selection and operation allocation problem in a flexible manufacturing system (FMS) [3].

Population-based algorithms, which work with a set of solutions and iteratively try to improve them, have been very successful recently. They can be divided into two groups: evolutionary algorithms (EA) and swarm intelligence.

Prominent among EA are genetic algorithms (GA). GA and other EA have been applied to a wide variety of problems [4], [5].

Swarm intelligence, based on the collective behavior of social insect colonies and other animal societies, has recently become an important research topic. The key concept of swarm intelligence lies in a simple set of rules that controls each individual, while the group as a whole exhibits remarkable collective intelligence. The swarming concept can also be extended to the human group decision process [6].

Particle swarm optimization (PSO) is a swarm intelligence algorithm which simulates the social behavior of fish schooling or bird flocking. There are also other PSO approaches, such as the interactive particle swarm metaheuristic used for multi-objective optimization (MOO) [7].

Ant colony optimization (ACO) is a technique that is quite successful in solving many hard practical optimization problems. The foundation of ACO is the foraging behavior of real ants, which are able to find the shortest paths between their nest and food sources thanks to a substance called pheromone. ACO has been applied to the minimum weight vertex cover problem [8], power distribution problems [9], and many others.

Several metaheuristics have been developed to simulate the specific intelligent behavior of honey bee swarms. A bee colony is a highly dynamic system which collects information from its surroundings and adapts its behavior accordingly. The artificial bee colony (ABC) algorithm is one of the latest representatives of the honey bee swarm algorithms. Originally, the ABC algorithm was proposed by Karaboga for finding the global optimum over continuous space [10].

ABC was successfully applied to unconstrained [11] as well as to constrained function optimization problems [12]. Testing results show that the performance of the ABC algorithm is comparable to other state-of-the-art algorithms for high-dimensionality optimization [13]. ABC has recently become a very active research area and many modifications [14] and enhancements [15] of the original algorithm have been introduced.

Our Improvement

The ultimate goal of any metaheuristic algorithm is to find the optimal feasible solution. To achieve this goal, an appropriate balance between exploitation and exploration is required at each iteration of the algorithm. By studying the ABC algorithm, we noticed a deficiency in the solution search process. After a significant number of iterations, when the optimal solution is almost found, scout bees, which perform the exploration process, are no longer useful; quite the opposite. This problem can be treated by a better adjustment of the exploration and exploitation balance [16], [5].

In order to improve the exploitation process at later stages of the algorithm, we adopted the uniform crossover and mutation operators from GA in the replacement process for exhausted food sources. We found an appropriate empirical point at which some scout bees (selected according to a probability parameter) are transformed into a new class of guided onlookers for strong exploitation. This new mechanism for replacing exhausted food sources performs intensive exploitation around the current best solution using the uniform crossover operator. After crossover, the mutation operator is applied: each function parameter is mutated with a small probability, which prevents any variable from keeping a fixed value indefinitely.
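The following Python sketch illustrates this replacement step as described above: a guided onlooker builds a new candidate by uniform crossover between the abandoned food source and the current best solution, and then mutates each parameter with a small probability inside the variable bounds. The function name, the probability values and the re-sampling style of mutation are illustrative assumptions rather than the exact operators evaluated in Section 4.

```python
import numpy as np

def guided_replacement(abandoned, best, lower, upper,
                       cx_prob=0.5, mut_prob=0.1, rng=None):
    """Replace an exhausted food source by exploiting the current best solution."""
    rng = np.random.default_rng() if rng is None else rng

    # uniform crossover: each parameter is taken from the best solution
    # with probability cx_prob, otherwise kept from the abandoned source
    take_best = rng.random(abandoned.size) < cx_prob
    child = np.where(take_best, best, abandoned)

    # mutation: each parameter is re-sampled within its bounds with a small
    # probability, preventing any variable from keeping a fixed value indefinitely
    mutate = rng.random(child.size) < mut_prob
    child[mutate] = rng.uniform(lower[mutate], upper[mutate])

    return np.clip(child, lower, upper)
```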

In such a manner, by integrating GA operators with ABC, we derived a modified ABC algorithm for constrained optimization improved with genetic operators and named it the genetically inspired ABC algorithm (GI-ABC).

The rest of this paper is organized as follows. Section 2 explains the original ABC algorithm; Section 3 describes the principles of the crossover and mutation modifications and their adjustment for the GI-ABC algorithm. In Section 4, an analysis of the trade-off between exploration and exploitation is performed first, using various parameter sets. After that, a series of comparison experiments on the set of 13 well-known g benchmark functions is performed to verify the effectiveness of our proposed approach over the latest Karaboga and Akay's [14] ABC algorithm and other state-of-the-art algorithms [17], [18], [19].

References:

  1. MEZURA-MONTES, E., Constraint-Handling in Evolutionary Optimization, Studies in Computational Intelligence, vol. 198, Springer-Verlag, 2009, p. 264.
  2. MEZURA-MONTES, E., COELLO COELLO, A. C., Constraint-Handling in Nature-Inspired Numerical Optimization: Past, Present and Future, Swarm and Evolutionary Computation, vol. 1(4), 2011, pp. 173-194.
  3. TIWARI, M. K., S. KUMAR, S. PRAKASH, Solving Part-type Selection and Operation Allocation Problems in an FMS: An Approach using Constraints-based fast Simulated Annealing Algorithm, IEEE Trans. on Systems, Man and Cybernetics Part A – Systems and Humans, vol. 36(6), 2006, pp. 1170-1184.
  4. NICOARA, E. S., F. G. FILIP, N. PARASCHIV, Simulation-based Optimization Using Genetic Algorithms for Multi-objective Flexible JSSP, Studies in Informatics and Control, vol. 20(4), 2011, pp. 333-344.
  5. GZARA, M., A. ESSABRI, Balanced Explore-Exploit Clustering based Distributed Evolutionary Algorithm for Multi-objective Optimization, Studies in Informatics and Control, vol. 20(2), 2011, pp. 97-106.
  6. ZAMFIRESCU, C. B., F. G. FILIP, Swarming Models for Facilitating Collaborative Decisions, International Journal of Computers, Communications & Control, vol. V(1), 2011, pp. 125-137.
  7. AGRAWAL, S., Y. DASHORA, M. K. TIWARI, Interactive Particle Swarm: A Pareto-Adaptive Metaheuristic to Multiobjective Optimization, IEEE Trans. on Systems, Man and Cybernetics Part A – Systems and Humans, vol. 38(2), 2008, pp. 258-277.
  8. JOVANOVIC, R., M. TUBA, An Ant Colony Optimization Algorithm with Improved Pheromone Correction Strategy for the Minimum Weight Vertex Cover Problem, Applied Soft Computing, vol. 11(8), 2011, pp. 5360-5366.
  9. SECUI, C. D., S. DZITAC, G. V. BANDEA, An ACO Algorithm for Optimal Capacitor Banks Placement in Power Distribution Networks, Studies in Informatics and Control, vol. 18(4), 2009, pp. 305-314.
  10. KARABOGA, D., An Idea Based on Honey Bee Swarm for Numerical Optimization, Technical Report TR06, Computer Engineering Department, Erciyes University, Turkey, 2005.
  11. KARABOGA, D., B. BASTURK, A Powerful and Efficient Algorithm for Numerical Function Optimization: Artificial Bee Colony (ABC) Algorithm, Journal of Global Optimization, vol. 39(3), 2007, pp. 459-471.
  12. KARABOGA, D., B. BASTURK, Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems, Proc. IFSA 2007, LNAI 4529, 2007, pp. 789-798.
  13. KARABOGA, D., B. BASTURK, On the Performance of Artificial Bee Colony (ABC) Algorithm, Applied Soft Computing, vol. 8(1), 2007, pp. 687-697.
  14. KARABOGA, D., B. AKAY, A Modified ABC for Constrained Optimization Problems, Applied Soft Computing, vol. 11(3), 2011, pp. 3021-3031.
  15. BRAJEVIC, I., M. TUBA, An Upgraded Artificial Bee Colony Algorithm (ABC) for Constrained Optimization Problems, Journal of Intelligent Manufacturing, 2012, available Springer Online First, DOI: 10.1007/s10845-011-0621-6, p. 12.
  16. ZHU, G., S. KWONG, Gbest-guided Artificial Bee Colony Algorithm for Numerical Function Optimization, Applied Mathematics and Computation, vol. 217(7), 2010, pp. 3166-3173.
  17. MEZURA-MONTES, E., M. DAMIAN-ARAOZ, O. CETINA-DOMINGEZ, Smart Flight and Dynamic Tolerances in the Artificial Bee Colony for Constrained Optimization, Proc. IEEE Congress on Evolutionary Computation (CEC’2010), 2010, pp. 1-8.
  18. TESSEMA, B., G. G. YEN, A Self-adaptive Penalty Function based Algorithm for Constrained Optimization, IEEE Cong. on Evolutionary Computation 2006 (CEC’2006), 2006, pp. 950-957.
  19. WANG, Y., Z. CAI, Y. ZHOU, W. ZENG, An Adaptive Tradeoff Model for Constrained Evolutionary Optimization, IEEE Trans. on Evolutionary Computation, vol. 12(1), 2008, pp. 80-92.
  20. DEB, K., An Efficient Constraint-handling Method for Genetic Algorithms, Computer Methods in Applied Mechanics and Engineering, vol. 186(2-4), 2000, pp. 311-338.
  21. LIANG, J. J., T. P. RUNARSSON, E. MEZURA-MONTES, M. CLERC, P. N. SUGANTHAN, A. C. COELLO COELLO, K. DEB, Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-parameter Optimization, Technical Report, 2006.
  22. KOZIEL, S., Z. MICHALEWICZ, Evolutionary Algorithms, Homomorphous Mappings, and Constrained Parameter Optimization, Evolutionary Computation, vol. 7, issue 1, 1999, pp. 19-44.
  23. RUNARSSON, T. P., X. YAO, Stochastic Ranking for Constrained Evolutionary Optimization, IEEE Trans. on Evolutionary Computation, vol. 4(3), 2000, pp. 284-294.
  24. RUNARSSON, T. P., X. YAO, Search Biases in Constrained Evolutionary Optimization, IEEE Trans. on Systems, Man and Cybernetics, vol. 35(2), 2005, pp. 233-243.
  25. HAMIDA, S. B., M. SCHOENAUER, ASCHEA: New Results using Adaptive Segregational Constraint Handling, Proc. of the 2002 Congress on Evolutionary Computation, 2002, pp. 884-889.
  26. MEZURA-MONTES, E., A. C. COELLO COELLO, A Simple Multimembered Evolution Strategy to Solve Constrained Optimization Problems, IEEE Trans. on Evolutionary Computation, vol. 9(1), 2005, pp. 1-17.
  27. ZAVALA, A. E. M., A. H. AGUIRRE, E. R. V. DIHARCE, Constrained Optimization via Particle Evolutionary Swarm Algorithm (PESO), Proc. of the Conf. on Genetic and Evolutionary Computation GECCO 2005, 2005, pp. 209-216.