Artificial Intelligence Applications
Particle Swarm Optimization
Introduction
·
Swarm Intelligence (SI) is
an innovative distributed intelligent paradigm for solving optimization
problems, originally inspired by biological examples such as
swarming and flocking.
·
Particle Swarm Optimization
(PSO) incorporates swarming behaviors observed in flocks of birds, schools of
fish, or swarms of bees, and even human social behavior, from which the idea
emerged.
·
·
PSO is a population-based
optimization tool, which can be implemented and applied easily to
various function optimization problems, or to problems that can be transformed
into function optimization problems.
[Figure: the flocking behavior of a group of birds]
·
As an algorithm, the main
strength of PSO is its fast convergence when compared with many global
optimization algorithms such as Genetic Algorithms (GA).
·
For applying PSO
successfully, one of the key issues is how to map the problem solution
onto the PSO particle, which directly affects its feasibility and performance.
Canonical Particle Swarm Optimization
·
The canonical PSO model
consists of a swarm of particles, which are initialized with a population of
random candidate solutions.
·
They move iteratively
through the d-dimensional problem space to search for new solutions, where the
fitness, f, can be calculated as a certain quality measure.
·
Each particle has a
position represented by a position-vector
xi (i is the index of the
particle), and a velocity represented by a velocity-vector vi.
Each particle remembers its own best position so far in a vector pi, and
the best position-vector among the swarm so far is stored in a vector pg.
·
During iteration
t, the velocity of each particle is updated from its previous value to a new
value as determined by Eq.(1).
·
vij(t + 1) = w·vij(t) + c1·r1·(pij(t) − xij(t)) + c2·r2·(pgj(t) − xij(t))    Eq.(1)
§
where
§
w is called the inertia
weight (or inertia factor),
§
r1 and r2 are random
numbers used to maintain the diversity of the population; they are
uniformly distributed in the interval [0, 1] for the j-th dimension of the i-th
particle.
§
c1 is a positive constant,
called the coefficient of the self-recognition (cognitive) component,
§
c2 is a positive constant,
called the coefficient of the social component.
§
From Eq.(1), a particle
decides where to move next by considering its own experience, which is the memory
of its best past position, and the experience of the most successful particle
in the swarm.
§
The new position is then
determined by the sum of the previous position and the new velocity, as in Eq.(2).
xij(t + 1) = xij(t) + vij(t + 1)    Eq.(2)
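The two update rules above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: the function name `update_particle` and the default parameter values are assumptions for the example, following the parameter discussion later in these notes.

```python
import random

def update_particle(x, v, p_best, g_best, w=0.7, c1=1.49, c2=1.49):
    """One PSO update step: Eq.(1) for the velocity, Eq.(2) for the position.

    x, v       -- current position and velocity vectors of particle i
    p_best     -- this particle's own best position so far (pi)
    g_best     -- best position found by the whole swarm (pg)
    w, c1, c2  -- inertia weight and acceleration coefficients (assumed defaults)
    """
    new_v, new_x = [], []
    for j in range(len(x)):
        # Fresh r1, r2 are drawn for each dimension j, uniform in [0, 1].
        r1, r2 = random.random(), random.random()
        vj = (w * v[j]
              + c1 * r1 * (p_best[j] - x[j])    # self-recognition component
              + c2 * r2 * (g_best[j] - x[j]))   # social component
        new_v.append(vj)            # Eq.(1)
        new_x.append(x[j] + vj)     # Eq.(2)
    return new_x, new_v
```

Note that when a particle already sits at both its own best and the global best with zero velocity, both update terms vanish and the particle stays put.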
The Parameters of PSO
§
For the inertia weight w in
Eq.(1), an initial value around 1.2 that is gradually reduced towards 0 can be
considered a good choice.
§
For the parameters c1 and c2
in Eq.(1), the default values c1 = c2 = 2 are usually used, but some
experimental results indicate that c1 = c2 = 1.49 might provide even better
results.
§
In the particle swarm
model, the particle searches for solutions in the problem space within a range
[−s, s]. (If the range is not symmetrical, it can be translated to the
corresponding symmetrical range.)
§
In order to guide the
particles effectively in the search space, the maximum moving distance during
one iteration is clamped to the maximum velocity range [−vmax, vmax],
as given in Eq.(3):
vij = sign(vij) · min(|vij|, vmax)    Eq.(3)
The value of vmax is p × s, with 0.1 ≤ p ≤ 1.0, and is usually chosen to be s, i.e. p = 1.
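The clamping rule of Eq.(3) can be sketched as a small helper; the name `clamp_velocity` is chosen for this example, and the default p = 1 follows the usual choice stated above.

```python
def clamp_velocity(v, s, p=1.0):
    """Clamp each velocity component to [-vmax, vmax], where vmax = p * s (Eq.(3))."""
    vmax = p * s
    return [max(-vmax, min(vmax, vj)) for vj in v]
```

For example, with s = 10 a component of −12 is pulled back to −10, while components already inside the range pass through unchanged.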
Particle Swarm Optimization Algorithm
The end criteria are usually one of the following:
§ Maximum number of iterations: the optimization
process is terminated after a fixed
number of iterations,
for example, 1000 iterations.
§ Number of iterations without improvement: the
optimization process is terminated after some fixed number of iterations
without any improvement.
§ Minimum objective function error: the error
between the obtained objective function value and the best known fitness value is
less than a pre-fixed anticipated threshold.
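Putting the pieces together, the full algorithm can be sketched as below. This is a minimal illustration combining Eqs.(1)–(3) with the three stopping criteria above; the function name `pso`, the sphere test function, and the parameter defaults (30 particles, w = 0.7, c1 = c2 = 1.49, p = 1) are assumptions made for this example.

```python
import random

def pso(f, dim, s, n_particles=30, max_iter=1000, patience=100, tol=1e-8,
        w=0.7, c1=1.49, c2=1.49):
    """Minimise f over [-s, s]^dim with a canonical PSO.

    Stops on any of the three end criteria: a maximum number of iterations
    (max_iter), a fixed number of iterations without improvement (patience),
    or the best fitness falling below a pre-fixed threshold (tol, assuming
    the optimum objective value is 0).
    """
    vmax = s  # p = 1, the usual choice
    xs = [[random.uniform(-s, s) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    p_best = [list(x) for x in xs]          # each particle's own best position
    p_fit = [f(x) for x in xs]
    g_idx = min(range(n_particles), key=lambda i: p_fit[i])
    g_best, g_fit = list(p_best[g_idx]), p_fit[g_idx]   # swarm's best so far

    stale = 0  # iterations without improvement of the global best
    for _ in range(max_iter):
        if g_fit < tol or stale >= patience:
            break
        improved = False
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                v = (w * vs[i][j]
                     + c1 * r1 * (p_best[i][j] - xs[i][j])
                     + c2 * r2 * (g_best[j] - xs[i][j]))   # Eq.(1)
                vs[i][j] = max(-vmax, min(vmax, v))         # Eq.(3) clamping
                xs[i][j] += vs[i][j]                        # Eq.(2)
            fit = f(xs[i])
            if fit < p_fit[i]:
                p_fit[i], p_best[i] = fit, list(xs[i])
                if fit < g_fit:
                    g_fit, g_best = fit, list(xs[i])
                    improved = True
        stale = 0 if improved else stale + 1
    return g_best, g_fit
```

A typical usage is minimising the sphere function, e.g. `pso(lambda x: sum(t * t for t in x), dim=2, s=5.0)`, which converges towards the origin.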
By: Mogtaba Altyib
Modified by: Abdelhamid Salih, Ph.D.








