
CHAPTER 1

1.1 Introduction
1.1.1 Chemical Reactors
In chemical engineering, chemical reactors are vessels designed to contain chemical
reactions. One example is a pressure reactor. The design of a chemical reactor deals with
multiple aspects of chemical engineering. Chemical engineers design reactors to maximize
net present value for the given reaction. Designers ensure that the reaction proceeds with the
highest efficiency towards the desired output product, producing the highest yield of product
while requiring the least amount of money to purchase and operate. Normal operating
expenses include energy input, energy removal, raw material costs, labour, etc. Energy
changes can occur in the form of heating or cooling, pumping to increase pressure, frictional pressure loss (such as the pressure drop across a 90° elbow or an orifice plate), or agitation.

1.1.2 Types of Reactors

1.1.2.1 Tubular Reactor


The tubular reactor is a fixed-bed type resembling a vertical shell and tube heat exchanger.
The tubes are packed with catalyst pellets and boiling water is circulated on the shell side to
remove the heat of exothermic reactions. In this simple model, it is assumed that the gradients of temperature and concentration between the phases can be ignored, so the balance equations for the two phases are combined. The general fluid-phase balance accounts for accumulation, convection, and reaction. Axial dispersion of heat has been neglected, while heat removal by the coolant has been included to keep the reactor model realistic.

A tubular reactor is a vessel through which flow is continuous, usually at steady state, and
configured so that conversion of the chemicals and other dependent variables are functions of
position within the reactor rather than of time. In the ideal tubular reactor, the fluids flow as if
they were solid plugs or pistons, and reaction time is the same for all flowing material at any
given tube cross section. Tubular reactors resemble batch reactors in providing initially high
driving forces, which diminish as the reactions progress down the tubes. Figure 1.1 shows the production of phthalic anhydride using a tubular reactor in the presence of a V2O5 catalyst.

Flow in tubular reactors can be laminar, as with viscous fluids in small-diameter tubes, and
greatly deviate from ideal plug-flow behaviour, or turbulent, as with gases. Turbulent flow
generally is preferred to laminar flow, because mixing and heat transfer are improved. For
slow reactions, and especially in small laboratory and pilot-plant reactors, establishing turbulent flow can result in inconveniently long reactors or may require unacceptably high feed rates.

Figure 1.1 Production of Phthalic Anhydride Using a Tubular Reactor in the Presence of a V2O5 Catalyst.

Figure 1.2 Tubular Reactor

1.1.2.1(a) Description of Tubular Reactor
Figure 1.2 shows a tubular reactor. This reactor is used as an accessory to the CE 310 Chemical Reactors Trainer to study its behaviour with respect to reaction kinetics. The two liquid chemicals are pumped continuously through a plastic hose wound
into a coil to form the reaction tube where the chemicals react. The coiled tube is placed
inside a cylinder made of borosilicate glass that can be connected to the heating circuit on the
basic unit. All hose connections are self-sealing.

1.1.2.1(b) Specification of Tubular Reactor


1. Tubular reactor as accessory for CE 310 Chemical Reactors Trainer
2. Reaction tube formed by plastic hose, made of polyamide, wound into a coiled tube
3. Tank cylinder made of borosilicate glass
4. Hose connections for chemicals and hot water made using rapid action hose couplings

1.1.2.1(c) Technical Data of Tubular Reactor


Hose material: PA (polyamide)
Length of reaction section: 20 m
Hose inside diameter: 5.5 mm
Resulting reactor volume: approx. 0.475 L
Dimensions and Weight
L x W x H: 470 x 250 x 600 mm
Weight: approx. 25 kg
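The stated reactor volume follows from the hose dimensions, treating the 20 m coiled hose of 5.5 mm inside diameter as a cylinder. A quick check of the arithmetic:

```python
import math

# Reactor volume of the coiled hose, treated as a cylinder:
# V = pi * r^2 * L, with r = 5.5 mm / 2 and L = 20 m.
radius_cm = 0.55 / 2          # 5.5 mm inside diameter -> 0.275 cm radius
length_cm = 20 * 100          # 20 m reaction section, in cm
volume_cm3 = math.pi * radius_cm ** 2 * length_cm
volume_l = volume_cm3 / 1000  # 1 L = 1000 cm^3
print(round(volume_l, 3))     # 0.475
```

This reproduces the quoted value of approx. 0.475 L.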

1.1.2.2 Spherical Reactor

Although tubular packed-bed reactors are used extensively in industry, spherical packed-bed reactors are attracting more attention due to some disadvantages of the tubular type. Potential disadvantages of tubular reactors include the pressure drop along the reactor, high manufacturing costs and low production capacity. To avoid a serious pressure drop in the tubular reactor, the effective diameters of the catalyst particles are usually kept above 3 mm, which leads to a certain internal mass-transfer resistance. In this study, the spherical reactor is proposed for the naphtha reforming process.
In the spherical reactor, catalysts are situated between two perforated screens. As depicted in the figure below, the naphtha feed enters the top of the reactor and flows steadily to the bottom. Care should be taken to maintain a continuous flow without any channelling in the reactor. The goal is to achieve a uniform flow distribution through the catalytic bed, because the flow occurs mainly in the axial direction. The two screens in the upper and lower parts of the reactor hold the catalyst and act as a mechanical support. Since the cross-sectional area is smaller near the inlet and the outlet of the reactor, the presence of catalyst in these regions would cause a substantial pressure drop and consequently reduce the efficiency of the spherical reactor. The other advantage of these screens is to balance the free zones (catalyst-free zones) so as to obtain a desirable pressure drop during the process. The radial flow is assumed to be negligible in comparison with the axial flow; as a result, only the equations in the axial coordinate are taken into account.

CHAPTER 2
2.1 Literature Review
2.1.1 Mathematical Modelling and Optimization of DME Synthesis in a Two-Stage Spherical Reactor, Journal of Natural Gas Science and Engineering, by Fereshteh Samimi, Mahdi Bayat, M.R. Rahimpour and Peyman Keshavarz.

The authors developed a mathematical model for the DME production rate. DME stands for dimethyl ether and is synthesized by the dehydration of methanol; the reaction is exothermic. In order to maximize the DME mole fraction at the outlet of the reactors, the catalyst distributions and the inlet temperatures of each reactor are optimized using the differential evolution (DE) method.

The goal of their study was the reduction of the pressure drop and recompression costs as well as the enhancement of the production rate of the dimethyl ether (DME) synthesis reactor. For this reason, a novel configuration of two-stage spherical reactors is proposed. In this configuration, the catalyst distribution in the conventional reactor is divided into two parts to load the spherical reactors. The unreacted methanol from the first reactor passes through a heat exchanger to reach a desired temperature and then enters the top of the second reactor as the inlet feed. In order to maximize the DME production rate, the catalyst volume and the gas inlet temperature of each reactor were optimized using the differential evolution (DE) method. In addition to a significantly lower pressure drop, the DME production rate increases by 16.3% in the proposed configuration compared with the conventional reactor (CR).

2.1.2 Dynamic Optimization of a Multi-Stage Spherical, Radial-Flow Reactor for the Naphtha Reforming Process in the Presence of Catalyst Deactivation Using the Differential Evolution (DE) Method, an article published in the International Journal of Hydrogen Energy by M.R. Rahimpour and Davood Iranshahi.

They used differential evolution (DE) method to optimize the operating conditions of a radial
flow spherical reactor containing the naphtha reforming reactions. In this reactor
configuration, the space between the two concentric spheres is filled by catalyst. The dynamic
behaviour of the reactor has been taken into account in the optimization process. The
achieved mass and energy balance equations in the model are solved by orthogonal
collocation method. The goal of this optimization is to maximize the hydrogen and aromatic production, which leads to the maximum consumption of the paraffins and naphthenes. To this end, the inlet temperature of the gas at the entrance of each reactor, the total pressure of the process, as well as the catalyst distribution in each reactor have been optimized using the differential evolution (DE) method. The results for the optimized spherical reactor have been compared with the non-optimized spherical reactor; the comparison shows an acceptable enhancement in the performance of the reactor.

Decreasing the pressure drop in an industrial process is an important issue. Considering this
fact, utilizing a radial flow spherical reactor, which has a low pressure drop through the

catalytic bed, is a potentially interesting idea for industrial naphtha reforming. A one
dimensional model was used for optimization of the spherical reactor for catalytic naphtha
reforming process. Orthogonal collocation method was applied to solve the mass and energy
balance equations. The differential evolution (DE) method was used as the optimization
technique. The goal was to maximize the hydrogen and aromatic production rate. The
variability of the total molar flow rate was considered in this research which improved the
calculation results. The catalyst distribution in all three stages was optimized in a way to
maximize the production of the desired products. In this study, the maximum possible inlet temperature of the gas at the entrance of the reactors was defined as 777 K. However, by utilizing higher-thermal-capacity furnaces and raising the inlet temperature of the gas entering the reactors up to 840 K, more desirable results can be achieved. The effects of temperature
and time on the catalyst activity have been investigated in the results. Acceptable
enhancement in the performance of the reactor can be noticed. The results suggest that this
configuration can be a compelling way to boost hydrogen and aromatic production. However,
an investigation in relation to the environmental aspects, commercial viability and economic
feasibility of the proposed configuration is necessary in order to consider commercialization
of the process.

2.1.3 Dynamic Simulation and Optimization of a Dual-Type Methanol Reactor Using Genetic Algorithms, Chemical Engineering & Technology, by F. Askari, M.R. Rahimpour and A. Jahanmiri.

The methanol production reactions are strongly exothermic and the catalyst is deactivated
over time. Therefore, the development of an auto-thermal two stage methanol reactor could
pave the way to increasing the methanol production in the methanol synthesis process. One
potentially interesting idea for industrial methanol synthesis is using an optimal auto-thermal
dual-type reactor. In this investigation, an auto-thermal dual-type methanol synthesis reactor
was modelled and optimized dynamically to maximize methanol production rate. The
optimization method used is based on genetic algorithms (GAs). The overall production throughout 4 years of catalyst life was taken as the optimization criterion to be maximized, and three variables were tuned: the length ratio of the reactors and the feed and cooling-water temperatures. The optimization comprises two procedures. In the first approach, the ratio of reactor lengths and the temperature profile along the reactor were optimized, yielding optimal values for the temperatures and the reactor length ratio. In the second approach, building on these results, the optimal trajectories of the feed and cooling-water temperatures were determined. The optimal operating policies yield 4.7% and 5.8% additional methanol production during the operating time for the first and second approaches, respectively. A comparison of the calculated temperature profiles of the catalyst along the lengths of the reactors shows an extremely favourable temperature profile for the optimal auto-thermal dual-type reactor system, which results in an increased production rate.

The parameters affecting the production rate in an industrial methanol reactor include temperature and catalyst deactivation. In the case of reversible exothermic reactions
such as methanol synthesis, selecting a relatively low temperature permits higher conversion, but this must be balanced against a slower reaction rate, which requires a larger amount of catalyst. To the left of the point of maximum production rate, increasing the temperature improves the reaction rate, which leads to more methanol production.

2.2 Conclusion of Literature Survey


In all the above-mentioned papers, the main objectives are the optimization of the operating conditions of a radial-flow spherical reactor containing the naphtha reforming reactions, the maximization of the DME mole fraction at the outlet of the reactors, and the reduction of the pressure drop and recompression costs together with the enhancement of the dimethyl ether (DME) production rate.

The present study therefore aims to optimize the heat transfer from the surface of a spherical reactor using both a genetic algorithm (GA) and symbiotic organisms search (SOS).

CHAPTER 3

3.1 Description of Genetic Algorithm (GA) and Symbiotic
Organisms Search (SOS)
The application of optimization algorithms to real world problems has gained momentum in
the last decade. Dating back to the early 1940s, diverse traditional mathematical methods
such as linear programming (LP), nonlinear programming (NLP) or dynamic programming
(DP) were first employed for solving complex optimization problems by resorting to different
relaxation methods of the underlying formulation. These techniques can cost-efficiently obtain a globally optimal solution for problem models subject to certain particularities, but unfortunately their application range does not cover the whole class of NP-complete problems, where an exact solution cannot be found in polynomial time. In fact, the solution space of such problems grows exponentially with the number of inputs, which makes these methods unfeasible for practical applications.

3.1.1 Drawbacks of Traditional Optimization Techniques


Although traditional mathematical programming techniques have been employed to solve optimization problems in machining applications, these techniques have the following limitations:

Traditional techniques do not fare well over a broad spectrum of problem domains.
Traditional techniques are not suitable for solving multi-modal problems, as they tend to converge to a local optimum.
Traditional techniques are not ideal for solving multi-objective optimization problems.
Traditional techniques are not suitable for solving problems involving a large number of constraints.

Considering these drawbacks of traditional optimization techniques, attempts are being made to optimize systems using evolutionary optimization techniques.

3.2 Evolutionary Optimization Techniques


As the history of the field suggests, there are many different variants of evolutionary techniques, but the common underlying idea behind all of them is the same: given a population of individuals, environmental pressure causes natural selection (survival of the fittest), and this causes a rise in the fitness of the population. Given a quality function to be maximized, one can randomly create a set of candidate solutions, i.e. elements of the function's domain, and apply the quality function as an abstract fitness measure: the higher, the better. Based on this fitness, some of the better candidates are chosen to seed further iterations. These iterations are continued until a candidate of sufficient quality is found or a preset computational budget is reached.

In the past years, evolutionary multi-objective optimization (EMO) has become a popular and useful field of research and application. Evolutionary optimization (EO) algorithms use a population-based approach in which more than one solution participates in an iteration and

evolves a new population of solutions in each iteration. The reasons for their popularity are
many:

(i) EOs do not require any derivative information;

(ii) EOs are relatively simple to implement;

(iii) EOs are flexible and have wide-spread applicability.

For solving single-objective optimization problems, particularly in finding a single optimal solution, the use of a population of solutions may sound redundant, but in solving multi-objective optimization problems an EO procedure is a perfect choice. Multi-objective optimization problems, by nature, give rise to a set of Pareto-optimal solutions, which need further processing to arrive at a single preferred solution. For the first task, it is quite natural to use an EO, because the use of a population in each iteration helps an EO simultaneously find multiple non-dominated solutions, which portray a trade-off among objectives, in a single simulation run.

3.2.1 Components of Evolutionary Optimization Techniques


Evolutionary techniques have a number of components, procedures or operators that must be
specified in order to define a particular evolutionary algorithm (EA). The most important
components are indicated below.

Representation (definition of individuals)

Evaluation function (or fitness function)

Population

Parent selection mechanism

Variation operators, recombination and mutation

Survivor selection mechanism (replacement)

Each of these components must be specified in order to define a particular evolutionary algorithm. Furthermore, to obtain a running algorithm, an initialization procedure and a termination condition must also be provided.

3.2.1.1 Representation

The first step in defining an EA is to link the real world to the EA world, that is, to set up a bridge between the original problem context and the problem-solving space where evolution will take place. Objects forming possible solutions within the original problem context are referred to as phenotypes, while their encoding, i.e. the individuals within the EA, are called genotypes. The first design step is commonly called representation, as it amounts to specifying a mapping from the phenotypes onto a set of genotypes that are said to represent these
phenotypes. Given an optimization problem on integers, the given set of integers would form the set of phenotypes. If a binary code is used for the encoding, the integer 18 would be seen as a phenotype and 10010 as a genotype representing it. It is important to understand that the phenotype space can be very different from the genotype space, and that the whole evolutionary search takes place in the genotype space. A solution, i.e. a good phenotype, is obtained by decoding the best genotype after termination. To this end, it should hold that the optimal solution to the problem at hand, a phenotype, is representable in the given genotype space.
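The integer example above (phenotype 18, genotype 10010) can be made concrete with a small encoding/decoding pair. The function names and the 5-bit width are illustrative choices, not part of any standard:

```python
def encode(phenotype: int, width: int = 5) -> str:
    """Representation: map an integer phenotype to a binary-string genotype."""
    return format(phenotype, f"0{width}b")

def decode(genotype: str) -> int:
    """Inverse mapping (decoding): genotype back to the integer phenotype."""
    return int(genotype, 2)

print(encode(18))       # '10010'
print(decode("10010"))  # 18
```

Note that the mapping is invertible, as required of a representation: each genotype decodes to exactly one phenotype.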

The common EC terminology uses many synonyms for naming the elements of the two spaces. On the side of the original problem context, candidate solution, phenotype and individual are used to denote points of the space of possible solutions; this space itself is commonly called the phenotype space. On the side of the EA, genotype, chromosome and, again, individual can be used for points in the space where the evolutionary search actually takes place; this space is often termed the genotype space. For the elements of individuals there are also many synonymous terms: a placeholder is commonly called a variable, a locus, a position or, in biology-oriented terminology, a gene; an object in such a place can be called a value or an allele.

The word representation is used in two slightly different ways. Sometimes it stands for the mapping from the phenotype space to the genotype space; in this sense it is synonymous with encoding, e.g., one could speak of a binary representation or binary encoding of candidate solutions. The inverse mapping from genotypes to phenotypes is usually called decoding, and it is required that the representation be invertible: to each genotype there must correspond at most one phenotype. The word representation can also be used in a slightly different sense, where the emphasis is not on the mapping itself but on the data structure of the genotype space; this interpretation lies behind speaking of mutation operators for a binary representation.

3.2.1.2 Evaluation Function (Fitness Function)

The role of the evaluation function is to represent the requirements the population should adapt to. It forms the basis for selection, and thereby it facilitates improvement. More accurately, it defines what improvement means: from the problem-solving perspective, it represents the task to be solved in the evolutionary context. Technically, it is a function or procedure that assigns a quality measure to genotypes. Typically this function is composed of a quality measure in the phenotype space and the inverse representation.

The evaluation function is commonly called the fitness function in EAs. This might cause counterintuitive terminology if the original problem requires minimization, since fitness is usually associated with maximization; mathematically, however, it is trivial to change minimization into maximization and vice versa. Quite often the original problem to be solved by an EA is an optimization problem. In this case the name objective function is often used in the original problem context, and the evolutionary fitness function can be identical to, or a simple transformation of, the given objective function.

3.2.1.3 Population

The role of the population is to hold possible solutions: a population is a multiset of genotypes. The population forms the unit of evolution. Individuals are static objects that do not change or adapt; it is the population that does. Given a representation, defining a population can be as simple as specifying how many individuals are in it, i.e., setting the population size. In more sophisticated EAs a population has an additional spatial structure, with a distance measure or a neighbourhood relation; in such cases the additional structure has to be designed as well to fully specify the population. As opposed to variation operators, which act on one or two parent individuals, the selection operators work at the population level. In general, they take the whole current population into account, and choices are always made relative to what is currently present: for instance, the best individual of the given population is chosen to seed the next generation, or the worst individual of the given population is chosen to be replaced by a new one. In almost all EA applications the population size is constant, not changing during the evolutionary search.

The diversity of a population is a measure of the number of different solutions present. No single measure of diversity exists; typically, people refer to the number of different fitness values present, the number of different phenotypes present, the number of different genotypes, or statistical measures such as entropy. Note that the presence of only one fitness value does not necessarily imply that only one phenotype is present, and in turn one phenotype does not necessarily imply only one genotype; the reverse is, however, not true.
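These three diversity measures, and the asymmetry between them, can be illustrated with a small sketch. The function name and the choice of bit-string genotypes with a "count the ones" fitness are our own illustrative assumptions:

```python
def diversity_counts(population, fitness, phenotype):
    """Return (#distinct fitness values, #distinct phenotypes, #distinct genotypes)."""
    return (len({fitness(g) for g in population}),
            len({phenotype(g) for g in population}),
            len(set(population)))

# Bit-string genotypes, integer phenotypes, fitness = number of ones.
pop = ["00011", "00101", "00011"]
print(diversity_counts(pop, lambda g: g.count("1"), lambda g: int(g, 2)))
# (1, 2, 2): a single fitness value, yet two phenotypes and two genotypes
```

The example shows exactly the point made above: one fitness value (here, 2 ones) coexists with several distinct phenotypes and genotypes.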

3.2.1.4 Parent Selection Mechanism

The role of parent selection, or mating selection, is to distinguish among individuals based on their quality and, in particular, to allow the better individuals to become parents of the next generation. An individual is a parent if it has been selected to undergo variation in order to create offspring. Together with the survivor selection mechanism, parent selection is responsible for pushing quality improvements. In EAs, parent selection is typically probabilistic: high-quality individuals get a higher chance to become parents than those with low quality. Nevertheless, low-quality individuals are often given a small but positive chance; otherwise the whole search could become too greedy and get stuck in a local optimum.

3.2.1.5 Variation Operators

The role of variation operators is to create new individuals from old ones. In the corresponding phenotype space this amounts to generating new candidate solutions; from the generate-and-test search perspective, variation operators perform the "generate" step. Variation operators in EAs are divided into types based on their arity, i.e. the number of individuals they take as input.

3.2.1.6 Mutation

A unary variation is commonly called mutation. It is applied to one genotype and delivers a
modified mutant, the child or offspring of it. A mutation operator is always stochastic: its
output depends on the outcomes of a series of random choices. It should be noted that an

arbitrary unary operator is not necessarily seen as mutation. A problem-specific heuristic operator acting on one individual could be termed mutation for being unary; however, mutation is generally supposed to cause a random, unbiased change, so it might be more appropriate not to call such heuristic unary operators mutation. The role of mutation differs between the various EC dialects: in genetic programming, for instance, it is often not used at all; in genetic algorithms it has traditionally been seen as a background operator to fill the gene pool with fresh blood; while in evolutionary programming it is the one and only variation operator doing the whole search work. Generating a child amounts to stepping to a new point in the search space, so from this perspective mutation also has a theoretical role: it can guarantee that the space is connected. This is important, since theorems stating that an EA will discover the global optimum of a given problem often rely on the property that each genotype representing a possible solution can be reached by the variation operators. The simplest way to satisfy this condition is to allow the mutation operator to jump everywhere. However, it should be noted that many researchers feel such proofs have limited practical value, and many implementations of EAs do not in fact possess this property.

3.2.1.7 Survivor Selection Mechanism (Replacement)

The role of survivor selection, or environmental selection, is to distinguish among individuals based on their quality. In that respect it is similar to parent selection, but it is used at a different stage of the evolutionary cycle: the survivor selection mechanism is called after the offspring of the selected parents have been created. As mentioned above, in EC the population size is constant, so a choice has to be made as to which individuals will be allowed into the next generation. This decision is usually based on fitness values, favouring those with higher quality, although the concept of age is also frequently used. As opposed to parent selection, which is typically stochastic, survivor selection is often deterministic, for instance ranking the unified multiset of parents and offspring and selecting the top segment (fitness-biased), or selecting only from the offspring (age-biased).

Survivor selection is also often called replacement, or the replacement strategy; in many cases the two terms can be used interchangeably, and the choice between them is often arbitrary. A good reason to use the name survivor selection is to keep the terminology consistent; a preference for using replacement can be motivated by a skewed proportion between the number of individuals in the population and the number of newly created children, in particular if the number of children is very small with respect to the population size.

3.2.1.8 Initialization

Initialization is kept simple in most EA applications: the first population is seeded with randomly generated individuals. In principle, problem-specific heuristics can be used in this step, aiming at an initial population with higher fitness. Whether this is worth the extra computational effort depends very much on the application at hand. There are, however, some general observations concerning this issue based on the so-called anytime behaviour of EAs.

3.2.1.9 Termination Condition

As for a suitable termination condition, we can distinguish two cases. If the problem has a known optimal fitness level, probably coming from a known optimum of the given objective function, then reaching this level should be used as the stopping condition. However, EAs are stochastic and there is mostly no guarantee of reaching the optimum, so this condition alone may never be satisfied; it therefore needs to be extended with a condition that is certain to stop the algorithm. Commonly used options for this purpose are the following:

1. The maximally allowed CPU time elapses;

2. The total number of fitness evaluations reaches a given limit;

3. For a given period of time the fitness improvement remains under a given threshold
value.

4. The diversity of the population drops under a given threshold.

The actual termination criterion in such cases is a disjunction: the optimum value is hit, or one of the above conditions is satisfied. If the problem does not have a known optimum, no disjunction is needed: simply a condition from the above list, or a similar one that is guaranteed to stop the algorithm.
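The disjunctive termination criterion described above can be sketched as a single predicate. The function name, parameter names and default thresholds are all illustrative assumptions:

```python
import time

def should_stop(best_fitness, evaluations, start_time, stagnant_gens, diversity,
                optimum=None, max_seconds=60.0, max_evals=10_000,
                patience=50, min_diversity=2):
    """Disjunctive termination test: known optimum hit OR any guaranteed stop."""
    if optimum is not None and best_fitness >= optimum:
        return True                                  # known optimal fitness reached
    if time.monotonic() - start_time > max_seconds:  # 1. allowed time elapsed
        return True
    if evaluations >= max_evals:                     # 2. evaluation budget spent
        return True
    if stagnant_gens >= patience:                    # 3. fitness improvement stalled
        return True
    if diversity < min_diversity:                    # 4. population diversity collapsed
        return True
    return False
```

Because conditions 1 to 4 are each guaranteed to become true eventually, the algorithm always terminates, whether or not the optimum is known.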

On the basis of the above context of evolutionary optimization techniques, two such techniques for solving a simple non-linear single-objective optimization function are discussed. They are:

Genetic algorithm (GA)


Symbiotic organisms search (SOS)

A brief description of the above optimization techniques is presented below:

3.3 Genetic Algorithm: An Overview


It turns out that there is no rigorous definition of "genetic algorithm" accepted by all in the evolutionary computation community that differentiates GAs from other evolutionary
computation methods. However, it can be said that most methods called "GAs" have at least
the following elements in common: populations of chromosomes, selection according to
fitness, crossover to produce new offspring, and random mutation of new offspring.

The chromosomes in a GA population typically take the form of bit strings. Each locus in the
chromosome has two possible alleles: 0 and 1. Each chromosome can be thought of as a point
in the search space of candidate solutions. The GA processes populations of chromosomes,
successively replacing one such population with another. The GA most often requires a
fitness function that assigns a score (fitness) to each chromosome in the current population.
The fitness of a chromosome depends on how well that chromosome solves the problem at
hand.

3.3.1 GA Operators

The simplest form of genetic algorithm involves three types of operators: selection,
crossover (single point), and mutation.

3.3.1.2 Selection

This operator selects chromosomes in the population for reproduction. The fitter the
chromosome, the more times it is likely to be selected to reproduce.
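Selection proportional to fitness is often implemented as a "roulette wheel". The sketch below is one common way to do it; the function name and the linear scan are our own illustrative choices:

```python
import random

def roulette_select(population, fitnesses):
    """Pick one parent with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)   # spin the wheel
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit                # each individual owns a fitness-sized slice
        if running >= pick:
            return individual
    return population[-1]             # guard against floating-point round-off
```

In Python this is equivalent to the one-liner `random.choices(population, weights=fitnesses)[0]`; the explicit loop just makes the mechanism visible.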

3.3.1.3 Crossover

This operator randomly chooses a locus and exchanges the subsequences before and after that locus between two chromosomes to create two offspring. For example, the strings 10000100 and 11111111 could be crossed over after the third locus in each to produce the two offspring 10011111 and 11100100. The crossover operator roughly mimics biological recombination between two single-chromosome organisms.
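Single-point crossover is a few lines of string slicing; the sketch below reproduces the worked example from the text (the function name is our own):

```python
def single_point_crossover(parent1: str, parent2: str, locus: int):
    """Exchange the tails of two bit strings after the given locus."""
    child1 = parent1[:locus] + parent2[locus:]
    child2 = parent2[:locus] + parent1[locus:]
    return child1, child2

print(single_point_crossover("10000100", "11111111", 3))
# ('10011111', '11100100')
```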

3.3.1.4 Mutation

This operator randomly flips some of the bits in a chromosome. For example, the string
00000100 might be mutated in its second position to yield 01000100. Mutation can occur at
each bit position in a string with some probability, usually very small.
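Bit-flip mutation can be sketched as follows (the function name and default rate pm = 0.01 are illustrative). Because the flips are random, the text's example (00000100 mutating at its second position to 01000100) is one possible outcome rather than a guaranteed one:

```python
import random

def mutate(chromosome: str, pm: float = 0.01) -> str:
    """Flip each bit independently with (small) probability pm."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < pm else bit
        for bit in chromosome
    )
```

With pm = 0 the string is returned unchanged, and with pm = 1 every bit is flipped; typical GA settings keep pm very small.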

3.4 A Simple Genetic Algorithm


Given a clearly defined problem to be solved and a bit string representation for candidate
solutions, a simple GA works as follows:

1. Start with a randomly generated population of n l-bit chromosomes. Table 3.1 shows a randomly generated initial population.

Table 3.1 The Initial Randomly Generated Population

Chromosome label    Chromosome string    Fitness
A                   00000110             2
B                   11101110             6
C                   00100000             1
D                   00110100             3
2. Calculate the fitness f(x) of each chromosome x in the population.

3. Repeat the following steps until n offspring have been created:

a. Select a pair of parent chromosomes from the current population, the probability of
selection being an increasing function of fitness. Selection is done "with replacement,"
meaning that the same chromosome can be selected more than once to become a parent.
b. With probability pc (the "crossover probability" or "crossover rate"), cross over the pair
at a randomly chosen point (chosen with uniform probability) to form two offspring. If no
crossover takes place, form two offspring that are exact copies of their respective parents.
The crossover rate is defined to be the probability that two parents will cross over at a single
point. There are also "multipoint crossover" versions of the GA, in which the crossover rate
for a pair of parents is the number of points at which a crossover takes place.

c. Mutate the two offspring at each locus with probability pm (the mutation probability or
mutation rate), and place the resulting chromosomes in the new population. Figure 3.3 shows
the reproduction process of the initial population.

Figure 3.3 Reproduction Process of Initial Population

If n is odd, one new population member can be discarded at random.

4. Replace the current population with the new population.

5. Go to step 2, as shown in Figure 3.1.

Figure 3.1 Flowchart of Genetic Algorithm

Each iteration of this process is called a generation. A GA is typically iterated for anywhere
from 50 to 500 or more generations. The entire set of generations is called a run. At the end of
a run there are often one or more highly fit chromosomes in the population. Since
randomness plays a large role in each run, two runs with different random-number seeds will
generally produce different detailed behaviours.
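Steps 1 to 5 above can be condensed into a short Python sketch. The fitness function here is simply the number of 1s in the string, which matches the values in Table 3.1; the population size, rates, and generation count are illustrative assumptions, not values from the text.

```python
import random

def fitness(chrom):
    return chrom.count("1")   # matches Table 3.1, e.g. fitness("11101110") == 6

def run_ga(n=4, l=8, pc=0.7, pm=0.001, generations=50):
    # step 1: random initial population of n l-bit chromosomes
    pop = ["".join(random.choice("01") for _ in range(l)) for _ in range(n)]
    for _ in range(generations):
        fits = [fitness(c) for c in pop]                       # step 2
        weights = [f or 1e-6 for f in fits]                    # floor avoids zero-weight error
        new_pop = []
        while len(new_pop) < n:                                # step 3
            p1, p2 = random.choices(pop, weights=weights, k=2)  # 3a: with replacement
            if random.random() < pc:                           # 3b: single-point crossover
                pt = random.randint(1, l - 1)
                p1, p2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
            for child in (p1, p2):                             # 3c: per-locus mutation
                child = "".join(b if random.random() >= pm else "10"[int(b)]
                                for b in child)
                new_pop.append(child)
        pop = new_pop[:n]   # step 4 (one extra member is discarded if n is odd)
    return max(pop, key=fitness)
```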

A common selection method in GAs is fitness-proportionate selection, in which the number
of times an individual is expected to reproduce is equal to its fitness divided by the average
fitness of the population. A simple method of implementing fitness-proportionate selection is
"roulette-wheel sampling", which is conceptually equivalent to giving each individual a slice
of a circular roulette wheel equal in area to the individual's fitness. The roulette wheel is
spun, the ball comes to rest on one wedge-shaped slice, and the corresponding individual is
selected. Once a pair of parents is selected, with probability pc they cross over to form two
offspring. If they do not cross over, then the offspring are exact copies of each parent. Next,
each offspring is subject to mutation at each locus with probability pm.
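Roulette-wheel sampling as just described can be sketched directly; an illustrative implementation, not library code.

```python
import random

def roulette_wheel(population, fitnesses):
    """Return one individual, chosen with probability proportional to fitness."""
    spin = random.uniform(0, sum(fitnesses))  # where the ball lands
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit   # each slice's area equals the individual's fitness
        if spin <= cumulative:
            return individual
    return population[-1]   # guard against floating-point round-off
```

With the Table 3.1 fitnesses [2, 6, 1, 3], chromosome B occupies half the wheel and so is expected to be selected about half the time.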

3.5 Genetic Operators


The third decision to make in implementing a genetic algorithm is which genetic operators to
use. This decision depends greatly on the encoding strategy. The most commonly used
operators are crossover and mutation, and other techniques, such as inversion, are sometimes
involved as well.

3.5.1 Crossover
The main distinguishing feature of a GA is the use of crossover. Single-point crossover is the
simplest form: a single crossover position is chosen at random and the parts of the two parents
after the crossover position are exchanged to form two offspring. The idea here is, of course,
to recombine building blocks on different strings. Single-point crossover has some
shortcomings, though. For one thing, it cannot combine all possible schemas. For example, it
cannot in general combine instances of 11*****1 and ****11** to form an instance of
11**11*1. Likewise, schemas with long defining lengths are likely to be destroyed under
single-point crossover. The schemas that can be created or destroyed by a crossover depend
strongly on the location of the bits in the chromosome. Single-point crossover assumes that
short, low-order schemas are the functional building blocks of strings, but one generally does
not know in advance what ordering of bits will group functionally related bits together; this
was the purpose of the inversion operator and other adaptive operators described above.

Many people have also noted that single-point crossover treats some loci preferentially: the
segments exchanged between the two parents always contain the endpoints of the strings. To
reduce this positional bias and "endpoint" effect, two-point crossover can be used, in which
two positions are chosen at random and the segments between them are exchanged.
Two-point crossover is less likely to disrupt schemas with large defining lengths and can
combine more schemas than single-point crossover. In addition, the segments that are
exchanged do not necessarily contain the endpoints of the strings. Again, there are schemas
that two-point crossover cannot combine. GA practitioners have experimented with different
numbers of crossover points; in one method, the number of crossover points for each pair of
parents is chosen from a Poisson distribution whose mean is a function of the length of the
chromosome. Parameterized uniform crossover has no positional bias: any schemas
contained at different positions in the parents can potentially be recombined in the offspring.
However, this lack of positional bias can prevent co-adapted alleles from ever forming in the
population, since parameterized uniform crossover can be highly disruptive of any schema.

There is no simple answer to choosing a particular type of crossover; the success or failure of
a particular crossover operator depends in complicated ways on the particular fitness function,
encoding, and other details of the GA. Fully understanding these interactions remains an
important open problem, and it is hard to draw general conclusions about which type of
crossover suits a given situation. It is common in recent GA applications to use either
two-point crossover or parameterized uniform crossover. For the most part, the comments and
references above deal with crossover in the context of bit-string encodings, though some of
them apply to other types of encodings as well. Some types of encodings require specially
defined crossover and mutation operators. Most of the comments above also assume that
crossover's ability to recombine highly fit schemas is the reason it should be useful.
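Two-point crossover, in which only the middle segment is exchanged, is a small variation on the single-point idea. A sketch, with the cut positions made explicit for reproducibility:

```python
import random

def two_point_crossover(p1, p2, points=None):
    """Swap the segment between two cut positions; endpoints stay with the parents."""
    if points is None:
        points = sorted(random.sample(range(1, len(p1)), 2))
    a, b = points
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

# Crossing 00000000 and 11111111 between positions 2 and 5 gives
# 00111000 and 11000111: the exchanged segment never contains the endpoints.
```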

3.5.2 Mutation
A common view in the GA community, dating back to Holland's book Adaptation in Natural
and Artificial Systems, is that crossover is the major instrument of variation and innovation in
GAs, with mutation insuring the population against permanent fixation at any particular locus
and thus playing more of a background role. This differs from the traditional positions of
other evolutionary computation methods, such as evolutionary programming and early
versions of evolution strategies, in which random mutation is the only source of variation.
However, appreciation of the role of mutation is growing as the GA community attempts
to understand how GAs solve complex problems. For solving a complex problem, it is not a
choice between crossover and mutation, but rather the balance among crossover, mutation,
and selection that is all-important. The correct balance also depends on details of the fitness
function and the encoding. Furthermore, crossover and mutation vary in relative usefulness
over the course of a run. Precisely how all this happens still needs to be elucidated. The most
promising prospect for producing the right balance over the course of a run is to find ways
for the GA to adapt its own mutation and crossover rates during a search.

3.6 Other Operators and Mating Strategies


Though most GA applications use only crossover and mutation, many other operators and
strategies for applying them have been explored in the GA literature. These include inversion
and gene doubling, as well as several operators for preserving diversity in the population. In
one such diversity-preserving scheme, each individual's fitness was decreased by the
presence of other population members, where the amount of decrease due to each other
population member was an explicit increasing function of the similarity between the two
individuals. Thus, individuals that were similar to many other individuals were punished, and
individuals that were different were rewarded. Goldberg and Richardson showed that in some
cases this could induce appropriate "speciation," allowing the population members to
converge on several peaks in the fitness landscape rather than all converging to the same
peak.

3.7 Step by Step Procedure of GA

Holland introduced the formalism of genetic algorithms (GAs) by analogy with how
biological evolution occurs in nature. At bottom, a computer program is nothing but a
string of 1s and 0s, something like 110101010110000101001001. This is similar to how
genes are laid out along the length of a DNA molecule. Each binary digit can be thought of
as a gene, and a string of such genes as a digital chromosome.

The essence of evolution is that, in a population, the fittest have a larger likelihood of
survival and propagation. Figure 3.2 shows the evolution environment of GA. In
computational terms, it amounts to maximizing some mathematical function representing
'fitness'.

Figure 3.2 Evolution Environment of GA
It is important to remember that whereas Darwinian evolution is an open-ended and blind
process, GAs have a goal: they are meant to solve particular preconceived problems.
For solving a maximization problem, the steps involved are typically as follows:
1. The first step is to let the computer produce a population of, say, 1000 individuals, each
represented by a randomly generated digital chromosome.
2. The next step is to test the relative fitness of each individual (represented entirely by the
corresponding chromosome) regarding its effectiveness in maximizing the function under
consideration, i.e. the fitness function. A score is given for the fitness, say on a scale of 1 to
10. In biological terms, the fitness is a probabilistic measure of the reproductive success of
the individual. The higher the fitness, the greater is the chance that the individual will be
selected (by us) for the next cycle of reproduction.
3. Mutations are introduced occasionally in a digital chromosome by arbitrarily flipping a 1
to 0, or a 0 to 1.
4. The next step in the GA is to take (in a probabilistic manner) those individual digital
chromosomes that have high levels of fitness, and produce a new generation of individuals by
a process of reproduction or crossover, for which the GA chooses pairs of individuals.
5. The new generation of digital individuals produced is again subjected to the entire cycle of
gene expression, fitness testing, selection, mutation, and crossover.
6. These cycles are repeated a large number of times, until the desired optimization or
maximization problem has been solved.
Sexual crossover in reproductive biology, as in the artificial GA, serves two purposes. First,
it provides a chance for the appearance of new individuals in the population which may be
fitter than any earlier individual. Secondly, it provides a mechanism for the persistence of
clusters of genes which are particularly well-suited to occurring together, because they
result in higher-than-average fitness for any individual possessing them.
As the population can shuffle its genetic material in every generation through sexual
reproduction, new building blocks, as well as new combinations of existing building blocks,
can arise. Thus the GA quickly creates individuals with an ever-increasing number of good
building blocks (the 'bad' building blocks get gradually eliminated by natural selection). If
the good building blocks confer a survival advantage, the individuals that have them spread
rapidly through the population, and the GA converges to the solution rapidly (a case of
positive feedback).
In the presence of reproduction, crossover, and mutation, almost any compact cluster of genes
that provides above-average fitness will grow in the population exponentially. Schema was
the term used by Holland for any such specific pattern of genes.

3.8 Symbiotic Organisms Search


The proposed SOS algorithm simulates the interactive behaviour seen among organisms in
nature. Organisms rarely live in isolation, owing to their reliance on other species for
sustenance and even survival. This reliance-based relationship is known as symbiosis. The
following subsection clarifies the meaning of symbiosis, gives examples of symbiotic
relationship archetypes, and describes the role of symbiosis in ecosystems.

3.8.1 The Basic Concept of Symbiosis


Symbiosis is derived from the Greek word for living together. Today, symbiosis is used to
describe a relationship between any two distinct species. Symbiotic relationships may be
either obligate, meaning the two organisms depend on each other for survival, or facultative,
meaning the two organisms choose to cohabitate in a mutually beneficial but nonessential
relationship. The most common symbiotic relationships found in nature are mutualism,
commensalism, and parasitism. Mutualism denotes a symbiotic relationship between two
different species in which both benefit. Commensalism is a symbiotic relationship between
two different species in which one benefits and the other is unaffected or neutral. Parasitism
is a symbiotic relationship between two different species in which one benefits and the other
is actively harmed. Generally speaking, organisms develop symbiotic relationships as a
strategy to adapt to changes in their environment. Symbiotic relationships may also help
organisms increase fitness and survival advantage over the long term. Therefore, it is
reasonable to conclude that symbiosis has built, and continues to shape and sustain, all
modern ecosystems.

Figure 3.5 Symbiotic Organisms Living Together in an Ecosystem

Figure 3.5 illustrates a group of symbiotic organisms living together in an ecosystem, where
mutualism, commensalism, and parasitism are used for sustenance.

3.8.2 The Symbiotic Organisms Search (SOS) Algorithm


Current meta-heuristic algorithms imitate natural phenomena. For example, Artificial Bee
Colony (ABC) simulates the foraging behaviour of honeybee swarms, Particle Swarm
Optimization simulates animal flocking behaviour, and the Genetic Algorithm simulates the
process of natural evolution. SOS simulates the symbiotic interactions within a paired
organism relationship that are used to search for the fittest organism. The algorithm
was developed initially to solve numerical optimization over a continuous search space.

Similar to other population-based algorithms, SOS iteratively moves a population of
candidate solutions toward promising areas of the search space in the process of seeking the
global optimum. SOS begins with an initial population called the ecosystem. In the
initial ecosystem, a group of organisms is generated randomly within the search space. Each
organism represents one candidate solution to the corresponding problem, and each organism
in the ecosystem is associated with a certain fitness value, which reflects its degree of
adaptation to the desired objective. Almost all meta-heuristic algorithms apply a succession
of operations to solutions in each iteration in order to generate new solutions for the next
iteration. A standard GA has two operators, namely crossover and mutation. Harmony Search
proposes three rules to improvise a new harmony: memory considering, pitch adjusting, and
random choosing. Three phases were introduced in the ABC algorithm to find the best food
source: the employed bee, onlooker bee, and scout bee phases. In SOS, new solution
generation is governed by imitating the biological interaction between two organisms in the
ecosystem. Three phases that resemble the real-world biological interaction model are
introduced:

Mutualism phase
Commensalism phase
Parasitism phase

The character of the interaction defines the main principle of each phase. Interactions benefit
both sides in the mutualism phase; benefit one side and do not impact the other in the
commensalism phase; and benefit one side while actively harming the other in the parasitism
phase. Each organism interacts with another organism chosen randomly through all phases.
The process is repeated until the termination criteria are met. The following algorithm outline
reflects the above explanation:

1 Initialization

2 Repeat

3 Mutualism phase

4 Commensalism phase

5 Parasitism phase

6 Until (termination is achieved)
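The outline above can be sketched in Python. This is a minimal one-dimensional illustration for a minimisation problem, not the original SOS code; the objective function, bounds, population size, and iteration count are all illustrative assumptions.

```python
import random

def sos(f, lo, hi, n=10, max_iter=100):
    """Minimal 1-D Symbiotic Organisms Search sketch (minimisation)."""
    eco = [random.uniform(lo, hi) for _ in range(n)]   # initial ecosystem
    clamp = lambda v: min(max(v, lo), hi)              # keep organisms in bounds
    best = lambda: min(eco, key=f)
    for _ in range(max_iter):
        for i in range(n):
            others = [k for k in range(n) if k != i]
            xb = best()
            # mutualism phase: i and a random j both move toward the best organism
            j = random.choice(others)
            mv = (eco[i] + eco[j]) / 2                 # mutual vector
            bf1, bf2 = random.choice((1, 2)), random.choice((1, 2))
            xi = clamp(eco[i] + random.random() * (xb - mv * bf1))
            xj = clamp(eco[j] + random.random() * (xb - mv * bf2))
            if f(xi) < f(eco[i]): eco[i] = xi          # greedy selection
            if f(xj) < f(eco[j]): eco[j] = xj
            # commensalism phase: only i may benefit from a random j
            j = random.choice(others)
            xi = clamp(eco[i] + random.uniform(-1, 1) * (best() - eco[j]))
            if f(xi) < f(eco[i]): eco[i] = xi
            # parasitism phase: in 1-D, a randomised copy of organism i
            # reduces to a fresh random value; it tries to displace host j
            j = random.choice(others)
            parasite = random.uniform(lo, hi)
            if f(parasite) < f(eco[j]): eco[j] = parasite
    return best()
```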

3.8.2.1 Mutualism Phase


An example of mutualism, which benefits both participating organisms, is the relationship
between bees and flowers. Bees fly amongst flowers, gathering nectar to turn into honey, an
activity that benefits bees. This activity also benefits flowers, because bees distribute pollen in
the process, which facilitates pollination. This SOS phase mimics such mutualistic
relationships. In SOS, Xi is the organism matched to the ith member of the ecosystem.
Another organism Xj is then selected randomly from the ecosystem to interact with Xi. Both
organisms engage in a mutualistic relationship with the goal of increasing their mutual
survival advantage in the ecosystem. New candidate solutions for Xi and Xj are calculated
based on the mutualistic symbiosis between organisms Xi and Xj, which is modelled in the
following equations:

Xinew = Xi + rand(0, 1) × (Xbest − Mutual_Vector × BF1)

Xjnew = Xj + rand(0, 1) × (Xbest − Mutual_Vector × BF2)


The role of BF1 and BF2 is explained as follows. In nature, some mutualistic relationships
give a greater beneficial advantage to one organism than to the other: organism A might
receive a huge benefit when interacting with organism B, while organism B might only get an
adequate or insignificant benefit from interacting with organism A. Here, the benefit factors
(BF1 and BF2) are determined randomly as either 1 or 2. These factors represent the level of
benefit to each organism, i.e., whether an organism partially or fully benefits from the
interaction.

Mutual_Vector = (Xi + Xj) / 2

The equation above defines the Mutual Vector, which represents the relationship
characteristic between organisms Xi and Xj. The term (Xbest − Mutual_Vector × BF)
reflects the mutualistic effort to increase survival advantage. According to Darwin's theory of
evolution, only the fittest organisms prevail; all creatures are forced to increase their degree
of adaptation to their ecosystem, and some use symbiotic relationships with others to increase
their survival adaptation. Xbest is used here because it represents the highest degree of
adaptation; the best (global) solution therefore serves as the target point for the fitness
increment of both organisms. Finally, organisms are updated only if their new fitness is better
than their pre-interaction fitness. Figure 3.6 shows the flowchart of the mutualism phase in
SOS.
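The two update equations can be written as a small Python function. This is a one-dimensional sketch for a minimisation problem, with illustrative names; the greedy acceptance at the end follows the rule just stated.

```python
import random

def mutualism(eco, cost, i, x_best):
    """Mutualism update: organisms i and a random partner j both try to improve."""
    j = random.choice([k for k in range(len(eco)) if k != i])
    mutual_vector = (eco[i] + eco[j]) / 2
    bf1, bf2 = random.choice((1, 2)), random.choice((1, 2))  # benefit factors
    xi_new = eco[i] + random.random() * (x_best - mutual_vector * bf1)
    xj_new = eco[j] + random.random() * (x_best - mutual_vector * bf2)
    # update only if the new fitness is better than the pre-interaction fitness
    if cost(xi_new) < cost(eco[i]):
        eco[i] = xi_new
    if cost(xj_new) < cost(eco[j]):
        eco[j] = xj_new
```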

Figure 3.6 Flowchart of Mutualism Phase in SOS

3.8.2.2 Commensalism Phase
An example of commensalism is the relationship between remora fish and sharks. The remora
attaches itself to the shark and eats leftover food, thus receiving a benefit. The shark is
unaffected by the remora's activities and receives minimal, if any, benefit from the
relationship.

Similar to the mutualism phase, an organism Xj is selected randomly from the ecosystem to
interact with Xi. In this circumstance, organism Xi attempts to benefit from the interaction,
while organism Xj itself neither benefits nor suffers from the relationship. The new candidate
solution for Xi is calculated according to the commensal symbiosis between organisms Xi
and Xj, modelled in the equation below; following the rules, organism Xi is updated only if
its new fitness is better than its pre-interaction fitness.

Xinew = Xi + rand(−1, 1) × (Xbest − Xj)

The term (Xbest − Xj) reflects the beneficial advantage provided by Xj, helping Xi increase
its survival advantage in the ecosystem up to the highest degree in the current population
(represented by Xbest). Figure 3.7 shows the flowchart of the commensalism phase.

Figure 3.7 Flowchart of Commensalism Phase.
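The commensalism update can likewise be sketched as a small Python function (one-dimensional, minimisation, illustrative names):

```python
import random

def commensalism(eco, cost, i, x_best):
    """Commensal update: only organism i may benefit; partner j is unaffected."""
    j = random.choice([k for k in range(len(eco)) if k != i])
    xi_new = eco[i] + random.uniform(-1, 1) * (x_best - eco[j])
    if cost(xi_new) < cost(eco[i]):  # greedy: keep only an improvement
        eco[i] = xi_new
```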

3.8.2.3 Parasitism Phase
An example of parasitism is the plasmodium parasite, which uses its relationship with the
anopheles mosquito to pass between human hosts. While the parasite thrives and reproduces
inside the human body, its human host suffers malaria and may die as a result. In SOS,
organism Xi is given a role similar to the anopheles mosquito through the creation of an
artificial parasite called the Parasite Vector. The Parasite Vector is created in the search space
by duplicating organism Xi, then modifying randomly selected dimensions using a random
number. Organism Xj is selected randomly from the ecosystem and serves as a host to the
Parasite Vector, which tries to replace Xj in the ecosystem. Both organisms are then
evaluated to measure their fitness. If the Parasite Vector has a better fitness value, it will kill
organism Xj and assume its position in the ecosystem. If the fitness value of Xj is better, Xj
will have immunity from the parasite and the Parasite Vector will no longer be able to live in
that ecosystem. Figure 3.8 shows the flowchart of the parasitism phase.

Figure 3.8 Flowchart of Parasitism Phase.
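The parasitism step can be sketched in the same style. In one dimension, duplicating Xi and randomising its selected dimension reduces to drawing a fresh random value within the search bounds; this is an illustrative sketch only.

```python
import random

def parasitism(eco, cost, i, lo, hi):
    """A parasite vector (mutated copy of organism i) tries to displace a random host j."""
    j = random.choice([k for k in range(len(eco)) if k != i])
    parasite = random.uniform(lo, hi)  # mutated duplicate of eco[i]
    if cost(parasite) < cost(eco[j]):  # the parasite kills the host if fitter
        eco[j] = parasite
```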

3.8.3 Some Major Characteristics of SOS Compared to Other Techniques

- SOS shares characteristics with most population-based algorithms (GA, DE, PSO, MBA,
CS, etc.), performing specific operators iteratively on a group of candidate solutions to reach
a global solution.
- SOS does not reproduce or create children, unlike GA, DE, and numerous other
evolutionary algorithms.
- SOS adapts through individual interactions. Although similar in this respect to PSO and
DE, the SOS strategy differs significantly: SOS uses three interaction strategies, mutualism,
commensalism, and parasitism, to gradually improve candidate solutions.
- Mutualism is an SOS strategy not used in either PSO or DE. Mutualism modifies candidate
solutions by calculating the difference between the best solution and the average of two
organisms (Mutual_Vector). Mutual_Vector often produces a unique characteristic,
especially when the two organisms are located far from one another in the search space,
which is an advantage in exploring new regions. Further, the two interacting individuals are
updated concurrently rather than singly or separately.
- Commensalism is a strategy used by SOS as well as PSO, DE, and CS that alters a solution
by calculating the difference between other solutions. Commensalism in SOS differs slightly
from that in other algorithms because it uses the best solution as the reference point to exploit
promising regions near the best solution. This helps increase the convergence speed of the
search process.
- Parasitism is a mutation operator unique to SOS. The trial mutation vector
(Parasite_Vector) competes against other randomly selected individuals rather than its parent
or creator. Three points highlight the advantages of parasitism. First, the Parasite_Vector is
created by changing the solution of the host in randomly selected dimensions with a random
number, rather than by changing only a small part of the solution; the selected dimensions
may vary from one dimension to the entire dimension set. Second, a small number of
changed dimensions represents a local search characteristic, while changes to whole
dimensions can generate solutions that add a perturbation to the ecosystem, maintaining
diversity and preventing premature convergence. Third, as a mutation operator, parasitism
produces unique solutions that may be located in completely different regions due to the
highly random nature of this operator.
- SOS uses greedy selection at the end of each phase to decide whether to retain the old or
the modified solution in the ecosystem. DE also uses this selection mechanism.
- SOS uses only two parameters: the maximum evaluation number and the population size.
Other algorithms such as GA, DE, PSO, MBA, and CS require the tuning of at least one
algorithm-specific parameter in addition to these two. While simpler and more robust than
competing algorithms, SOS is able to solve a wide variety of problems, and it avoids the risk
of compromised performance due to improper parameter tuning.

3.8.4 Conclusion of Symbiotic Organisms Search


A new meta-heuristic algorithm called Symbiotic Organisms Search (SOS) is inspired by the
biological interactions between organisms in an ecosystem. SOS simulates this natural
pattern using the three strategies of mutualism, commensalism, and parasitism. Its application
to sample problems demonstrated the ability of SOS to generate solutions of significantly
better quality than other meta-heuristic algorithms. On mathematical benchmark functions,
SOS precisely identified 22 of 26 benchmark function solutions, surpassing the performance
of GA, DE, BA, PSO, and PBA. SOS was also tested on four practical structural design
problems. The three phases of the SOS algorithm are simple to operate, requiring only simple
mathematical operations to code. Further, unlike competing algorithms, SOS does not use
algorithm-specific tuning parameters, which enhances performance stability.

CHAPTER 4
4.1 RESULTS AND DISCUSSIONS
4.1.1 Solution by Genetic Algorithm

Consider the convective heat transfer from a spherical reactor of diameter D and surface
temperature Ts to a fluid at temperature Ta, with a convective heat transfer coefficient h.
Denoting (Ts − Ta) as θ, h is given by

h = 2 + 0.5 θ^0.2 / D ---------- (1)

subject to the constraint θD = 20.

It is required to minimize the heat transfer from the sphere. Set up the objective function in
terms of D and θ, subject to the single constraint. Employing the genetic algorithm and SOS,
obtain the optimum values of D and θ that minimize the heat transfer.

The heat transfer from the surface of the spherical reactor by convection is

Q = h A θ ------------ (2)

where h = 2 + 0.5 θ^0.2 / D, and the surface area through which heat transfer occurs is
A = 4πr² = πD².

Substituting h and A, equation (2) reduces to

Q = πθ (2D² + 0.5 D θ^0.2)

Converting this to a single-variable problem by substituting θ = 20/D, it reduces to

Q = 62.83 (2D + 0.91 D^−0.2)

Objective function: Y = 62.83 (2D + 0.91 D^−0.2)

Boundary conditions: 0 < D < 6.3

String length = 6
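The derivation above can be checked numerically; the sketch below evaluates Q(D) and scans for its unconstrained minimum (the step size is an arbitrary choice, not part of the original solution).

```python
def q(d):
    """Heat transfer Q(D) = 62.83 (2D + 0.91 D^-0.2) for 0 < D < 6.3."""
    return 62.83 * (2 * d + 0.91 * d ** -0.2)

# q(0.3) is about 110.44, matching the fitness of D = 0.3 in Table 4.1.
# A fine scan puts the minimum near D = 0.136 with Q close to 102.3,
# which is where both the GA and SOS solutions below converge.
best_d = min((i / 10000 for i in range(1, 63000)), key=q)
```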

4.1.1.1 Iteration 1
Table 4.1 First Iteration Values of Initial Population

D value   Initial population   Fitness value   Count   Mating pool   After crossover & mutation   New D value
0.3       000011               110.439         2       000011        000010                       0.2
2.0       010100               301.094         1       000011        000100                       0.4
3.0       011110               422.877         1       010100        010011                       1.9
5.0       110010               669.739         0       011110        011111                       3.3

Mating pairs are selected at random as (1, 4) and (2, 3). The crossover site for the first pair is
selected as 4 and for the second pair as 3. Table 4.1 shows the first iteration values of the
initial population.

4.1.1.2 Iteration 2
Table 4.2 Second Iteration Values of Initial Population

D value   Fitness value   Count   Mating pool   After crossover & mutation   New D value
0.2       104.0184        2       000010        000100                       0.4
0.4       118.9386        1       000010        000111                       0.7
1.9       289.04          1       000100        000010                       0.2
3.3       459.708         0       011111        011010                       2.6

Mating pairs are selected at random as (1, 3) and (2, 4). The crossover site for the first pair is
selected as 3 and for the second pair as 3. Table 4.2 shows the second iteration values of the
initial population.

4.1.1.3 Iteration 3
Table 4.3 Third Iteration Values of Initial Population

D value   Fitness value   Count   Mating pool   After crossover & mutation   New D value
0.4       118.936         1       000100        000010                       0.2
0.7       149.365         1       000111        000110                       0.6
0.2       104.0184        2       000010        000011                       0.3
2.6       373.9455        0       000010        000100                       0.8

Mating pairs are selected at random as (1, 4) and (2, 3). The crossover site for the first pair is
selected as 3 and for the second pair as 4. Table 4.3 shows the third iteration values of the
initial population.

4.1.1.4 Iteration 4
Table 4.4 Fourth Iteration Values of Initial Population

D value   Fitness value   Count   Mating pool   After crossover & mutation   New D value
0.2       104.0184        2       000010        000110                       0.6
0.6       138.172         1       000010        000011                       0.3
0.3       110.439         1       000110        000010                       0.2
0.8       160.312         0       000011        000010                       0.2

From Table 4.4, the best value of D is 0.2 m, with a fitness value of 104.0184 kW.

The iterations are repeated until the fitness values of the last two iterations are almost the same.

Therefore, by the genetic algorithm, the optimum solution is:

Minimum heat transfer through the surface of the spherical reactor is 102.305 kW, with the
diameter of the reactor being 0.1384 m.

4.2 Symbiotic Organism Search Method (SOS)


The problem is the same as that solved by the genetic algorithm in Section 4.1.1: minimize
the convective heat transfer from the spherical reactor,

Q = h A θ, with h = 2 + 0.5 θ^0.2 / D and A = 4πr² = πD²,

subject to the constraint θD = 20. Substituting h, A, and θ = 20/D as before reduces it to the
single-variable objective

Q = 62.83 (2D + 0.91 D^−0.2)

4.2.1 Solution by Symbiotic Organism Search

Iteration 1

Step 1: Initialize the ecosystem

Ecosystem (D values) = [0.3, 2, 3, 5], with D limits 0 to 6.3

Step 2: Find the fitness values

Fitness values = [110.43, 301.09, 422.87, 669.73]

Step 3: X_best = X1 = 0.3 (fitness 110.43)

Step 4: i = 0

Step 5: i = i + 1

Step 6: i = 0 + 1 = 1, so Xi = X1

4.2.1.1 Mutualism Phase

Xj = X3, where j ≠ i

Mutual_Vector = (Xi + Xj)/2 = (X1 + X3)/2 = (0.3 + 3)/2 = 1.65

X1new = X1 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 0.3 + 0.8516 × (0.3 − 1.65)
      = 0.3 − 1.149
      = −0.84

Due to the constraints, this is rounded off to X1new = 0.1, giving fitness = 103.18.

BF is the benefit factor, which is taken randomly as 1.

X3new = X3 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 3 + 0.623 × (0.3 − 1.65)
      = 2.1549

and the fitness value of X3new = 319.22

Updated ecosystem = [0.1, 2, 2.15, 5]

and the fitness values are [103.18, 301.09, 319.22, 669.73]

4.2.1.2 Commensalism Phase

Random j = 4, so Xj = X4

Xinew = Xi + rand(−1, 1) × (X_best − Xj)
      = 0.1 + (−0.0353) × (0.1 − 5)
      = 0.2729, giving fitness value = 108.424

Updated ecosystem = [0.2729, 2, 2.15, 5]
4.2.1.3 Parasitism Phase

Random number = 2, so Xj = X2

X2new = X2 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 2 + 0.626 × (0.2729 − 1.136)
      = 1.461

Updated ecosystem = [0.2729, 1.461, 2.15, 5]

and the fitness values are [108.424, 236.4711, 319.22, 669.73]

X_best = 0.2729, with fitness value = 108.424

4.2.2 Iteration 2
i = 1 + 1 = 2

Xi = X2, X_best = X1

4.2.2.1 Mutualism Phase

Select Xj = X4, j ≠ i

Mutual_Vector = (Xi + Xj)/2 = (1.46 + 5)/2 = 3.23

X2new = X2 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 1.461 + 0.8516 × (0.2729 − 3.23)
      = −1.05, rounded to 0.1

Fitness value of X2new = 103.18

Updated ecosystem = [0.2729, 0.1, 2.15, 5]

X4new = X4 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 5 + 0.623 × (0.2729 − 3.23)
      = 3.155

Updated ecosystem = [0.2729, 0.1, 2.15, 3.15]

and the fitness value matrix is [108.424, 103.18, 319.22, 441.28]

4.2.2.2 Commensalism Phase

X_best = X2. Let Xj = X3

X2new = X2 + rand(−1, 1) × (X_best − Xj)
      = 0.1 + (−0.0353 × (0.1 − 2.15))
      = 0.1723

Updated ecosystem = [0.2729, 0.1723, 2.15, 3.15]

Fitness value matrix = [108.424, 102.925, 319.22, 441.28]

X_best = 0.1723

4.2.2.3 Parasitism Phase

Select Xj randomly; let Xj = X3

Mutual_Vector = (Xi + Xj)/2 = (X2 + X3)/2 = (0.1723 + 2.15)/2 = 1.16

X3new = X3 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 2.15 + 0.623 × (0.1723 − 1.16)
      = 1.534

Updated ecosystem = [0.2729, 0.1723, 1.534, 3.15]

and the fitness matrix is [108.424, 102.925, 245.24, 441.28]

4.2.3 Iteration 3

4.2.3.1 Mutualism Phase

X_best = 0.1723, Xi = X3, Xj = X4

Mutual_Vector = (X3 + X4)/2 = (1.534 + 3.15)/2 = 2.34

X3new = X3 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 1.534 + 0.8516 × (0.1723 − 2.34)
      = −0.31, rounded to 0.1, giving fitness value = 103.18

Updated ecosystem = [0.2729, 0.1723, 0.1, 3.15]

X4new = X4 + rand(0, 1) × (X_best − Mutual_Vector × BF)
      = 3.15 + 0.6236 × (0.1723 − 2.34)
      = 1.79

Updated ecosystem = [0.2729, 0.1723, 0.1, 1.79]

Fitness values = [108.424, 102.925, 103.18, 275.82]

4.2.3.2 Commensalism Phase

X_j = X_4

X_3new = X_3 + rand(-1,1) × (X_best - X_j)
       = 0.1 + (-0.0353)(0.1723 - 1.79)
       = 0.1571, fitness value = 102.53

Updated ecosystem = [0.2729, 0.1723, 0.1571, 1.79] and the fitness values are [108.424, 102.925, 102.53, 275.82]

4.2.3.3 Parasitism Phase


X_best = 0.1571

Select X_j randomly: X_j = X_4

Mutual vector = (X_i + X_j)/2 = (X_3 + X_4)/2 = (0.1571 + 1.79)/2 = 0.97

X_4new = X_4 + rand(0,1) × (X_best - MV × BF)
       = 1.79 + 0.6236 × (0.1571 - 0.97)
       = 1.283, fitness value = 215.26

Updated ecosystem = [0.2729, 0.1723, 0.1571, 1.79] and the fitness matrix is [108.424, 102.925, 102.53, 215.26]

X_best = 0.1571, fitness value = 102.53

4.2.4 Iteration 4

4.2.4.1 Mutualism Phase


X_best = X_3, X_i = X_4

Select X_j randomly: X_j = X_2

Mutual vector = (X_i + X_j)/2 = (X_4 + X_2)/2 = (1.79 + 0.1723)/2 = 0.98

X_4new = X_4 + rand(0,1) × (X_best - MV × BF)
       = 1.79 + 0.6236 × (0.1571 - 0.98)
       = 1.09, fitness value = 193.16

X_jnew = X_2new = X_2 + rand(0,1) × (X_best - MV × BF)
       = 0.1723 + 0.6236 × (0.1571 - 0.98)
       = -0.34 → set to the lower bound 0.1, fitness value = 103.18

Updated ecosystem = [0.2729, 0.1, 0.1571, 1.09] and the updated fitness values are [108.424, 103.18, 102.53, 193.16]

4.2.4.2 Commensalism Phase


Select randomly j = 1, so X_j = X_1

X_4new = X_4 + rand(-1,1) × (X_best - X_j)
       = 1.09 + (-0.0353)(0.1571 - 0.2729)
       = 1.094 ≈ 1.09, fitness value = 193.16

Updated ecosystem = [0.2729, 0.1, 0.1571, 1.09] and the updated fitness values are [108.424, 103.18, 102.53, 193.16]

4.2.4.3 Parasitism Phase


Let X_j = X_2 = 0.1

Mutual vector = (0.1 + 1.09)/2 = 0.595

X_2new = X_2 + rand(0,1) × (X_best - MV × BF)
       = 0.1 + 0.6236 × (0.1571 - 0.595)
       = -0.176 → set to the lower bound 0.1

X_2 = 0.1, fitness value = 103.18

Therefore the optimum solution is:

Updated ecosystem = [0.2729, 0.1, 0.1571, 1.09] and the updated fitness values are [108.424, 103.18, 102.53, 193.16]

By the SOS method, the optimum value of heat transfer is 102.53 kW, with the diameter of the spherical reactor being 0.1571 m.
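Putting the three phases together, the procedure followed in the hand calculation above can be sketched as a short self-contained Python program. The objective function below is a hypothetical placeholder — the actual reactor heat-transfer model is not reproduced here — and the bounds 0.1 to 6.3 follow the clamping and the diameter range stated in the conclusions:

```python
import random

def sos(fitness, n=4, iters=50, lower=0.1, upper=6.3, seed=1):
    """Minimal SOS sketch: mutualism, commensalism, parasitism per organism."""
    rng = random.Random(seed)

    def clamp(x):
        return min(max(x, lower), upper)

    def partner(i):
        return rng.choice([k for k in range(n) if k != i])

    X = [rng.uniform(lower, upper) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            best = min(X, key=fitness)

            # Mutualism: X_i and a partner X_j both move toward the best
            j = partner(i)
            mutual = (X[i] + X[j]) / 2.0
            for k in (i, j):
                bf = rng.choice([1, 2])          # benefit factor
                cand = clamp(X[k] + rng.random() * (best - mutual * bf))
                if fitness(cand) < fitness(X[k]):
                    X[k] = cand

            # Commensalism: X_i benefits from a partner; X_j is unaffected
            best = min(X, key=fitness)
            j = partner(i)
            cand = clamp(X[i] + rng.uniform(-1, 1) * (best - X[j]))
            if fitness(cand) < fitness(X[i]):
                X[i] = cand

            # Parasitism: a random parasite tries to displace a partner
            j = partner(i)
            parasite = rng.uniform(lower, upper)
            if fitness(parasite) < fitness(X[j]):
                X[j] = parasite

    best = min(X, key=fitness)
    return best, fitness(best)
```

Each phase accepts a candidate only if it improves fitness, so the ecosystem never degrades; running the loop for a fixed iteration budget mirrors the hand iterations, which were stopped once the best value changed only marginally.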

Comparison of optimal values of SOS and the genetic algorithm:

                  SOS              Genetic algorithm
Diameter          D = 0.1571 m     D = 0.1384 m
Heat transfer     Q = 102.53 kW    Q = 102.305 kW

4.3 Advantages of Using GA and SOS


- The primary advantage of GA and SOS is that they provide a smooth methodology for problems with a large number of parameters, situations in which derivative-based methods face difficulties because a gradient cannot be defined.
- These methods use only fitness scores, obtained from the objective function, without derivative or other auxiliary information; hence even complex problems can be optimized.
- Evolutionary techniques are well suited to locating global minima or maxima, largely irrespective of the form of the objective function.
- The techniques efficiently explore new combinations of the available knowledge to produce a new generation suited to the situation.
- Multimodal objective functions can also be solved efficiently with any of these evolutionary techniques.

CHAPTER 5
5.1 Conclusions
The following can be concluded:

- For a spherical nuclear reactor whose diameter ranges from 0 to 6.3 m, the optimal heat removal rate was found to be 102.305 kW at a diameter of 0.138 m using the genetic algorithm, and 102.53 kW at a diameter of 0.1571 m using symbiotic organisms search, both starting from the same initial population.
- The problem of finding the global optimum in a space with many local optima is a classical problem for all systems that can adapt and learn. Both the genetic algorithm and symbiotic organisms search provide a comprehensive search methodology for optimization, applicable to both continuous and discrete problems. In global optimization scenarios these techniques often manifest their strengths: efficient, parallelizable search; the ability to evolve solutions with multiple objective criteria; and a characterizable and controllable process of innovation.

- The three phases of the symbiotic organisms search algorithm, namely mutualism, commensalism and parasitism, are simple to operate and require only simple mathematical operations to code. Further, unlike competing algorithms, SOS uses no tuning parameters, which enhances performance stability. The SOS algorithm is therefore robust, easy to implement, and able to solve various numerical optimization problems despite using fewer control parameters than competing algorithms.

- Solving a mathematical model with a genetic algorithm requires a large number of user-assumed process parameters, whereas symbiotic organisms search requires only a few. These parameters are not constant across problems, since they are randomly selected quantities, and a proper selection procedure (for example, using MATLAB software) must be followed for these algorithms to lead to an optimal solution.
- Both methods provide a close approximation of the optimal solution. The value changes only in very small amounts over subsequent iterations, so the search must be terminated on some basis such as computational time or rounding of the solution values.
- Compared to the genetic algorithm, symbiotic organisms search is a more modern and sophisticated approach, with a powerful way of analysing the population within its bounds. Hence the computational time for symbiotic organisms search is slightly lower than that of the genetic algorithm. Both methods can be applied to any optimization problem with any number of variables.

5.2 Scope for Future Study


- One limitation of GAs is that, in certain situations, they are overkill compared with more straightforward optimization methods such as hill climbing, feed-forward artificial neural networks using back-propagation, and even simulated annealing and deterministic global search.
- To make the genetic algorithm more effective and efficient, it can be combined with other techniques within its framework to produce a hybrid genetic algorithm that reaps the best of the combination.
- Hybrid genetic algorithms have received significant interest in recent years and are increasingly used to solve real-world problems quickly, reliably and accurately without the need for human intervention.
- A future development for both methods would be a generalized code in any programming language, such that a direct output is obtained once the optimization function and constraints are given as inputs.

CHAPTER 6
6.1 References
Jaluria, Y., Design and Optimization of Thermal Systems.

Rajasekaran, S. and Vijayalakshmi Pai, G.A., Neural Networks, Fuzzy Logic and Genetic Algorithms.

Askari, F., Rahimpour, M.R. and Jahanmiri, A., "Dynamic Simulation and Optimization of a Dual-Type Methanol Reactor Using Genetic Algorithms", Chemical Engineering and Technology.

Samimi, F., Bayat, M., Rahimpour, M.R. and Keshavarz, P., "Mathematical Modelling and Optimization of DME Synthesis in a Two-Stage Spherical Reactor", Journal of Natural Gas Science and Engineering.

Rahimpour, M.R. and Iranshahi, D., "Dynamic Optimization of a Multi-Stage Spherical, Radial-Flow Reactor for the Naphtha Reforming Process in the Presence of Catalyst Deactivation Using Differential Evolution (DE) Method", International Journal of Hydrogen Energy.
