Sunday, November 6, 2016

Leader and Follower (LF) Optimization Algorithm

One and a half years ago we invented a new method called the Leader and Follower (LF) Optimization Algorithm. It extends the main idea of the Fruit Fly Optimization Algorithm. LF can solve any kind of problem, including unconstrained and constrained examples. I will describe it in more detail in the future.

Saturday, November 5, 2016

Particle Swarm Optimization (PSO) and Traditional Gradient Methods:



The basic flow of the particle swarm optimization (PSO) algorithm is as follows:

1. Initialize the particle swarm: randomly set each particle's initial position and initial velocity.

2. Generate the new position of each particle from its initial position and initial velocity.

3. Calculate the fitness value for each particle.

4. For each particle, compare its fitness value with the fitness value of the best position Pid it has visited so far; if it is better, update Pid.

5. For each particle, compare its fitness value with the fitness value of the best position Pgd found by the whole population; if it is better than Pgd, update the global best position Pgd.

6. Velocity-position update: for each particle, adjust the velocity and position according to Equations (1) and (2).

Vid(t+1) = ω*Vid(t) + eta1*rand()*(Pid - Xid(t)) + eta2*rand()*(Pgd - Xid(t))    (1)
Xid(t+1) = Xid(t) + Vid(t+1)                                                     (2)

where Vid(t+1) denotes the velocity of the i-th particle in the d-th dimension at the (t+1)-th generation, ω is the inertia weight, eta1 and eta2 are acceleration constants, and rand() is a random number uniformly distributed between 0 and 1. In addition, to keep the particle velocity from becoming excessively large, an upper speed limit Vmax can be imposed.



7. If the termination condition is satisfied (the preset criterion is met or the maximum number of iterations is reached), stop; otherwise go back to step 3. The overall procedure is shown in Fig 1, and a minimal Matlab sketch of the loop is given below.
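To make the flow above concrete, here is a minimal Matlab sketch of the PSO loop, applied to the test function used later in this post. The swarm size, search range, and parameter values (w = 0.7, eta1 = eta2 = 1.5, Vmax = 1) are illustrative assumptions rather than tuned settings:

% Minimal PSO sketch; parameter values are illustrative assumptions
f = @(x) exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
n = 30; maxIter = 200;              % swarm size and iteration limit
w = 0.7; eta1 = 1.5; eta2 = 1.5;    % inertia weight, acceleration constants
Vmax = 1;                           % speed upper limit
X = -2 + 4*rand(n,2);               % step 1: random initial positions
V = -1 + 2*rand(n,2);               % step 1: random initial velocities
P = X;                              % personal best positions Pid
fP = zeros(n,1);
for i = 1:n, fP(i) = f(X(i,:)); end % fitness at the initial positions
[fg, g] = min(fP); G = P(g,:);      % global best position Pgd
for t = 1:maxIter
    for i = 1:n
        % step 6: velocity and position updates, Eqs. (1) and (2)
        V(i,:) = w*V(i,:) + eta1*rand()*(P(i,:)-X(i,:)) + eta2*rand()*(G-X(i,:));
        V(i,:) = max(min(V(i,:), Vmax), -Vmax);   % clamp speed to Vmax
        X(i,:) = X(i,:) + V(i,:);
        fx = f(X(i,:));             % step 3: fitness value
        if fx < fP(i)               % step 4: update personal best Pid
            fP(i) = fx; P(i,:) = X(i,:);
        end
        if fx < fg                  % step 5: update global best Pgd
            fg = fx; G = X(i,:);
        end
    end
end
G, fg                               % best position and fitness found

Because PSO is stochastic, each run returns slightly different best values.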



Traditional gradient methods:


(1) Unconstrained Example (traditional method: fminsearch)



Consider the problem of finding a set of values [x(1), x(2)] that solves



Minimize f(x) = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)

    

We use Matlab's fminsearch to solve this two-dimensional problem in the following two steps:



Step 1: Write an M-file fun.m:


function f=fun(x)

f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);


Step 2: Invoke the optimization routine:

x0=[-1,1]; % starting guess

tic
[x,fval]=fminsearch(@fun,x0)
toc


The solution and timing are shown below:


% Ans:

x =    0.5000   -1.0000

fval =   5.1425e-10

Elapsed time is 0.071092 seconds.
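
Strictly speaking, fminsearch uses the derivative-free Nelder-Mead simplex method rather than gradients. For a genuinely gradient-based traditional solver, fminunc (Optimization Toolbox) can be invoked in the same way; this alternative sketch assumes the same fun.m as above:

x0=[-1,1];                  % starting guess
[x,fval]=fminunc(@fun,x0)   % quasi-Newton, gradient-based solver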

%%%%%%%%%%%%%%

PSO for this Unconstrained Problem:


*************best PSO  ****************

zbest =    0.4959   -0.9961

fitnesszbest =   5.5972e-05

Elapsed time is 0.060479 seconds.

(2)   Constrained Example (Traditional gradient method: fmincon)


If inequality constraints are added, the resulting problem can be solved with the fmincon function. For example, suppose you want to find x that solves

Minimize f(x) = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)


subject to the following constraints:

x(1)*x(2) - x(1) - x(2) + 1.5 <= 0;    -x(1)*x(2) - 10 <= 0

The constraints are supplied through a separate M-file that returns both the nonlinear inequality and equality constraints.


We can use fmincon from the Matlab Optimization Toolbox to solve this constrained problem in the following two steps:



Step 1: Write an M-file mycon.m that returns the two nonlinear constraints (the objective function is the same fun.m written in the unconstrained example):


function [c,ceq]=mycon(x)

c=[x(1)*x(2)-x(1)-x(2)+1.5, -x(1)*x(2)-10];  % nonlinear inequalities c(x) <= 0

ceq=[];                                      % no nonlinear equality constraints



%%%%%%%%%%%%%%%

Step 2: Invoke the constrained optimization routine in the main program:

clear all

clc


tic


x0=[-1,1]; % starting guess

A=[]; b=[];      % no linear inequality constraints

Aeq=[]; beq=[];  % no linear equality constraints

lb=[]; ub=[];    % no lower or upper bounds


[x,fval]=fmincon(@fun,x0,A,b,Aeq,beq,lb,ub,@mycon)


toc


% Ans:


% Local minimum found that satisfies the constraints.

% Optimization completed because the objective function is non-decreasing in

% feasible directions, to within the default value of the function tolerance,

% and constraints are satisfied to within the default value of the constraint tolerance.

% <stopping criteria details>



% x =   -9.5473    1.0474

% fval =    0.0236

% Elapsed time is 0.559910 seconds.
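
As a quick sanity check, the reported solution can be substituted back into the constraint function. The expected values below are worked out by hand from the rounded output, so treat them as approximate:

x=[-9.5473, 1.0474];
c=mycon(x)   % approximately [-0.0000, -0.0002]: both non-positive, so both constraints hold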

%%%%%%%%%%%%%%%%%%%%%%%%%

LF for the Constrained Problem:






As mentioned above, the Leader and Follower (LF) Algorithm extends the main idea of the Fruit Fly Optimization Algorithm and can handle both unconstrained and constrained problems.


Readers who want to learn more about this algorithm should refer to the next page.
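
Purely as a generic illustration (this is not the LF algorithm), one common way for a population-based method such as PSO to handle these constraints is to add a static penalty to the objective; the penalty weight 1e3 below is an arbitrary assumption:

pen=@(x) max(x(1)*x(2)-x(1)-x(2)+1.5,0)^2 + max(-x(1)*x(2)-10,0)^2;  % squared constraint violations
fpen=@(x) exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + 1e3*pen(x);
% fpen can then be minimized with an unconstrained swarm method,
% for example the PSO loop sketched earlier, by replacing f with fpen.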




Output:

best =   -9.5038    1.0517
minSh =    0.0244