1 Introduction
Max-Min Ant System [1] is an optimization algorithm inspired by the behavior of
real ants. This evolutionary algorithm has become popular and has proved effective
for solving NP-hard combinatorial problems such as the travelling salesman problem
(TSP). However, as the number of cities grows, the algorithm often takes a very long
time to reach the optimal solution. Efficient parallel Ant Colony Optimization
(ACO) [2] algorithms and implementation techniques are therefore the key to
meeting the scalability and performance requirements of such cases. Several parallel
implementations of the ACO algorithm already exist [3,4]. In the PACS [3] method,
the artificial ants are first generated and separated into several groups; ACS is then
applied to each group, and communication between the groups takes place at fixed
cycles. The authors of [4] proposed two parallel strategies for the ant system: a
synchronous parallel algorithm and a partially asynchronous parallel algorithm. All
of the above methods require the programmer to design and implement the detailed
parallelization across different processors.
Y. Tan, Y. Shi, and Z. Ji (Eds.): ICSI 2012, Part I, LNCS 7331, pp. 182–189, 2012.
© Springer-Verlag Berlin Heidelberg 2012
2 Preliminary Knowledge
2.1 MapReduce Overview
2.2
The basic ACO procedure can be summarized as follows.
Procedure: ACO
Repeat
    Repeat
        Each ant applies a state transition rule to choose a next city to visit;
    Until all ants have built a complete solution
    Pheromone updating rule is applied;
Until end condition is satisfied, usually a given iteration number is reached
Max-Min Ant System [1] is one of the best-performing implementations of the ACO
algorithm. It combines an improved exploitation of the best solutions with an
effective mechanism for avoiding early search stagnation. It differs from Ant System
(AS) mainly in three aspects: (1) only a single ant adds pheromone after each
iteration; (2) the range of possible pheromone trails on each solution component is
limited to an interval [τ_min, τ_max]; (3) the initial pheromone trails are set to τ_max.
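As a minimal illustration of points (1)-(3), the trail-limit mechanism might be sketched as follows; the function name, parameter values, and helper structure are illustrative assumptions, not the paper's code.

```python
# Sketch of the MMAS trail-limit mechanism: after evaporation, only the
# best ant deposits pheromone, and every trail is clamped to
# [tau_min, tau_max]. All values here are illustrative.
def update_with_limits(tau, best_edges, delta, rho, tau_min, tau_max):
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)           # evaporation on all edges
    for (i, j) in best_edges:                  # deposit by the single best ant
        tau[i][j] += delta
    for i in range(n):
        for j in range(n):
            # clamp each trail into [tau_min, tau_max]
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
    return tau

# Initial trails are set to tau_max, as in MMAS:
tau_max, tau_min, rho = 1.0, 0.1, 0.02
tau = [[tau_max] * 4 for _ in range(4)]
tau = update_with_limits(tau, [(0, 1), (1, 2)], 0.5, rho, tau_min, tau_max)
```

After one update, edges on the best tour are pushed up but capped at τ_max, while all other edges decay toward τ_min.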
In this section, we present the main design of the MapReduce Max-Min Ant System
(MRMMAS). We first point out how MMAS can be naturally adapted to the
MapReduce programming model and present the general idea of MRMMAS; we
then explain in detail how the computations can be formalized as map and reduce
operations.
3.1
In an iteration of MMAS, each ant in the swarm is placed at a starting node, chooses
the next city to visit step by step, and evaluates its solution. All of these actions are
completed independently of the rest of the swarm. Following this analysis, the
MRMMAS algorithm needs only one kind of MapReduce job. The map function
performs the solution-construction procedure for one ant, so the map stage realizes
solution construction for all ants in parallel. The reduce function then performs the
pheromone update. For each iteration, one such job is carried out to implement the
whole MMAS process. The procedure of MRMMAS is shown below.
Procedure: MapReduce MMAS for static combinatorial problems
Repeat
    /* Map stage */
    The ant applies a state transition rule to choose a next city to visit until a
    complete solution has been built. /* solution construction */
    Calculate the fitness of the solution. /* solution evaluation */
    /* Reduce stage */
    Pheromone updating rule is realized by a reduce function;
Until end condition is satisfied, usually a given iteration number is reached
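The data flow of one such iteration can be mimicked locally with plain functions standing in for the Hadoop map and reduce stages. The TSP instance, ant count, and helper names below are illustrative assumptions, and the mapper builds a random tour rather than applying the pheromone-based transition rule.

```python
import random

# Toy stand-ins for one MRMMAS iteration: a real run submits a Hadoop job,
# with one input record per ant; here plain functions show the data flow only.

def construct_solution(n, rng):
    # "map" for one ant: build a tour (a real mapper would apply the
    # pheromone-based state transition rule instead of shuffling)
    tour = list(range(n))
    rng.shuffle(tour)
    return tour

def tour_length(tour, dist):
    # closed-tour length, wrapping from the last city back to the first
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def reduce_stage(solutions, dist):
    # "reduce": pick the iteration-best solution by path length
    return min(solutions, key=lambda t: tour_length(t, dist))

rng = random.Random(42)
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
ants = [construct_solution(4, rng) for _ in range(8)]  # map stage, one record per ant
best = reduce_stage(ants, dist)                        # reduce stage
```

The pheromone update that follows the reduce step is omitted here for brevity.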
Map Function: First, the pheromone values, heuristic information, and all of the
parameters used in the state transition rule are passed into the map function from the
main function of the MapReduce job. The MRMMAS map function, shown as
Function 1, is called once for each ant in the population. The input dataset is stored
on HDFS as a sequence file of <key, value> pairs, each of which represents a record
in the dataset. The number of records is set to the size of the ant population, so the
map function is carried out m times, where m is the size of the ant swarm. The
dataset is split and globally broadcast to all mappers. Consequently, solution
construction for the ants is executed in parallel: in each map task, one ant constructs
one solution according to the state transition rule. The solution is then evaluated and
expressed as an output <key, value> pair.
Function 1: MRMMAS Map
def mapper(key, value):
    /* get τ[n][n], η[n][n], α, β from the MapReduce job */
    /* initialize tabuList */
    for i=1 to n do
        tabuList[i] = false;
    /* randomly put the ant in a starting node */
    currentPosition = randomInit(n);
    solution[1] = currentPosition;
    tabuList[currentPosition] = true;
    /* construct the solution through the state transition rule */
    for i=2 to n do
        /* calculate the visiting probability of each unvisited city */
        for j=1 to n do
            if tabuList[j] == false then
                p[j] = (τ[currentPosition][j])^α * (η[currentPosition][j])^β;
        /* choose the next city by roulette-wheel selection */
        currentPosition = rouletteSelect(p);
        solution[i] = currentPosition;
        tabuList[currentPosition] = true;
    /* evaluate the solution and emit it */
    fitness = evaluate(solution);
    output <key, value> pair;
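The probability-then-selection step of the map function can be realized by roulette-wheel selection over the unnormalized desirabilities p[j]. A runnable sketch follows; the function and variable names are assumptions for illustration.

```python
import random

# Roulette-wheel selection over unnormalized weights, as used in the state
# transition step: city j is chosen with probability p[j] / sum(p).
def roulette_select(p, rng):
    total = sum(p.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for city, weight in p.items():
        acc += weight
        if r <= acc:
            return city
    return city  # fallback for floating-point round-off

rng = random.Random(7)
weights = {2: 0.1, 3: 0.7, 4: 0.2}   # desirabilities of the unvisited cities
picks = [roulette_select(weights, rng) for _ in range(1000)]
```

Over many draws, city 3 is selected roughly seven times as often as city 2, matching its larger weight.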
In the above procedure, the constructed solution and its fitness are output as a
<key, value> pair. All mappers use the same key, so all of the solutions are gathered
together in the reduce step, while the information of each solution is carried in its
value. Suppose a TSP solution is [1-2-3-4-5-6-7-1] and its path length is 123.45;
then value is the string "123.45+1,2,3,4,5,6,7".
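The `fitness+cities` string format described above can be packed and unpacked with two small helpers; the helper names are assumptions for illustration.

```python
# Encode/decode helpers for the "<fitness>+<city,city,...>" value string
# that ships a solution from a mapper to the reducer.
def encode_value(fitness, solution):
    return "%s+%s" % (fitness, ",".join(str(c) for c in solution))

def decode_value(value):
    fitness_part, cities_part = value.split("+", 1)
    return float(fitness_part), [int(c) for c in cities_part.split(",")]

value = encode_value(123.45, [1, 2, 3, 4, 5, 6, 7])
# value == "123.45+1,2,3,4,5,6,7", matching the example in the text
fitness, solution = decode_value(value)
```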
Reduce Function: The input of the reduce function is the intermediate <key, value>
pairs produced by the map function on each host. As described above, each pair
contains a solution and its fitness. In the reduce function, we gather all the solutions
constructed in the map step, obtain the best solution of the current iteration and the
best solution found so far, and then update the pheromone according to the
pheromone updating rule of MMAS. These results are output as a <key, value> pair
and are transmitted to all mappers in the following iteration. The pseudocode for the
MRMMAS reduce function is shown as Function 2.
Function 2: MRMMAS Reduce
def reducer(key, value_list):
    /* get τ[n][n], ρ, and the global best solution gBest from the MapReduce job */
    /* of all the solutions, find the best record in the current iteration */
    for value in value_list:
        fitness = getFitness(value); solution = getSolution(value);
        if (iBest is null) or (fitness > iBest) then
            iBest = fitness; iBestSolution = solution;
    /* update the global best solution */
    if (iBest > gBest) then gBest = iBest;
    /* pheromone updating */
    for all edges (i,j)
        τ[i][j] = (1 − ρ) * τ[i][j];
    for all edges (i,j) in iBestSolution
        τ[i][j] = τ[i][j] + Δτ[i][j]^best;
    /* limit pheromone trails to [τ_min, τ_max] */
    for all edges (i,j)
        if (τ[i][j] > τ_max) then τ[i][j] = τ_max;
        if (τ[i][j] < τ_min) then τ[i][j] = τ_min;
    output <key, value> pair;
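The core of Function 2, scanning the value list for the iteration best and then evaporating, depositing, and clamping the trails, can be sketched in runnable form. All names and parameter values below are assumptions; values arrive pre-decoded as (fitness, solution) pairs, and higher fitness is treated as better, as in Function 2.

```python
# Runnable sketch of the reduce-side logic: find the iteration-best
# solution, then evaporate, deposit on its edges, and clamp the trails.
def reduce_update(values, tau, rho, tau_min, tau_max, delta):
    i_best, i_best_solution = None, None
    for fitness, solution in values:
        if i_best is None or fitness > i_best:   # higher fitness = better
            i_best, i_best_solution = fitness, solution
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)             # evaporation on all edges
    # edges of the closed best tour, wrapping back to the start city
    edges = list(zip(i_best_solution, i_best_solution[1:] + i_best_solution[:1]))
    for (i, j) in edges:                         # deposit on the best tour only
        tau[i][j] += delta
    for i in range(n):
        for j in range(n):
            tau[i][j] = min(tau_max, max(tau_min, tau[i][j]))
    return i_best, i_best_solution, tau

tau = [[0.5] * 3 for _ in range(3)]
values = [(0.8, [0, 1, 2]), (1.2, [0, 2, 1])]
i_best, i_best_solution, tau = reduce_update(values, tau, 0.1, 0.1, 1.0, 0.3)
```

In a real job, the updated trails and the global best would then be emitted as the <key, value> pair read by the next iteration's mappers.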
4 Experiments
5 Conclusions
Although the ACO algorithm has been successfully applied to many problems, its
long running time remains an issue on large-scale problems. This paper presents a
parallel MMAS algorithm based on MapReduce, a programming model that has
been widely embraced by both academia and industry. In our implementation, the
solution-construction process is carried out on different processors, and the
MapReduce system balances the load dynamically and automatically. We have
shown that MMAS can be naturally adapted to the MapReduce programming model,
and the experimental results show that it scales well across the computer cluster.
Acknowledgments. This work was supported by the National Natural Science
Foundation of China (Nos. 60933004, 60975039, 61175052, 61035003, 61072085)
and the National High-tech R&D Program of China (863 Program)
(No. 2012AA011003).
References
1. Stützle, T., Hoos, H.: MAX-MIN Ant System. Future Generation Computer Systems 16(8),
889–914 (2000)
2. Dorigo, M., Stützle, T.: Ant Colony Optimization. The MIT Press, Cambridge (2004)
3. Chu, S.-C., Roddick, J., Pan, J.-S., Su, C.-J.: Parallel Ant Colony Systems. In: Zhong, N.,
Raś, Z.W., Tsumoto, S., Suzuki, E. (eds.) ISMIS 2003. LNCS (LNAI), vol. 2871, pp. 279–
284. Springer, Heidelberg (2003)
4. Bullnheimer, B., Kotsis, G., Strauss, C.: Parallel Strategies for the Ant System. University
of Vienna, Vienna (1997)
5. Dean, J., Ghemawat, S.: MapReduce: Simplified Data Processing on Large Clusters.
Communications of the ACM 51, 107–113 (2008)
6. Lämmel, R.: Google's MapReduce Programming Model Revisited. Science of Computer
Programming 70, 1–30 (2008)
7. Grama, A., Gupta, A., Karypis, G., Kumar, V.: Introduction to Parallel Computing, 2nd edn.
Addison-Wesley, Harlow (2003)