Sunday, April 26, 2015

Static Optimization problems solved by CI


The general nonlinear programming problem (NLP) is to find x so as to optimize an objective function f(x) subject to a set of constraints c(x).
There is no known method for determining the global maximum (or minimum) of the general nonlinear programming problem. Only if the objective function f(x) and the constraints c(x) satisfy certain properties can the global optimum sometimes be found. Several algorithms have been developed for unconstrained problems (e.g., direct search methods, gradient methods) and for constrained problems; the latter are usually classified into indirect and direct methods. An indirect method attacks the problem by extracting one or more linear problems from the original one, whereas a direct method tries to determine successive search points, usually by converting the original problem into an unconstrained one to which gradient methods are applied with some modifications. Despite the active research and progress in global optimization in recent years, it is probably fair to say that no efficient solution procedure is in sight for the general nonlinear programming problem.
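
For readers who want the formulation spelled out, the general NLP can be written in the usual textbook form below (a standard reconstruction on my part, not a quotation from Michalewicz):

\[
\begin{aligned}
\text{optimize}\quad & f(\mathbf{x}), \qquad \mathbf{x} = (x_1, \dots, x_n) \in \mathbb{R}^n, \\
\text{subject to}\quad & c_i(\mathbf{x}) \le 0, \quad i = 1, \dots, q, \\
& c_j(\mathbf{x}) = 0, \quad j = q+1, \dots, m.
\end{aligned}
\]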

There are many other problems connected with traditional optimization techniques. For example, most proposed methods are local in scope, they depend on the existence of derivatives, and they are insufficiently robust in discontinuous, vast multimodal, or noisy search spaces. It is therefore important to investigate other (heuristic) methods, which, for many real-world problems, may prove very useful.

(Ref.: Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Ch. 7 “Handling Constraints”, pp. 121-122)

Recently, a 2013 Springer book entitled “Innovative Computational Intelligence: A Rough Guide to 134 Clever Algorithms” by Bo Xing and Wen-Jing Gao introduced a number of new computational intelligence (CI) algorithms developed during the past decade. The mission of this book is really important, since those algorithms are likely to drive a new revolution in computer science. The authors hope it will stimulate readers to make novel contributions, or even to start a new paradigm based on natural phenomena. The book covers 134 innovative CI algorithms (biology-based, physics-based, chemistry-based, and mathematics-based algorithms). Our algorithm is included in their Chapter 11; however, only a simple idea is introduced in that chapter. I hope we can share more details in this blog in the future.

In the past, some readers asked me to share the complete source code of FOA to help them accomplish their work. But it indeed takes a lot of time and effort to write our FOA manual, so readers will have to wait. We also really hope that our source code will not be sold commercially (i.e., used for other than academic purposes) without our permission.

Many applications will be posted here step by step later.

Sunday, February 15, 2015

Foundation of Fruit Fly Optimization Algorithm and its Applications






Abstract
 
In 2011, our group developed the Fruit Fly Optimization Algorithm (FFOA). Nowadays, a few papers have referenced the FFOA; nevertheless, only a few economists and management experts are truly familiar with this method. In this paper, I am going to introduce its foundation and explain how it works. In addition, we will also explain how to apply our method to different fields, especially to economic problems.
 
PS: The source code for this paper will be posted on this blog if the paper is published in the future.
 

 
 
Jing Si Aphorism:
 
The more we give, the more grounded we will feel.

 

Monday, February 2, 2015

How to Use FOA to Construct the Optimal Investment Strategies






This is, I think, the first paper in the world to discuss optimal investment strategies constructed with our FOA. I can tell you how to do it if my strategy papers are published in the future.


Outputs:


 






Jing Si Aphorism:
 

While working, learn;

While learning, awaken

to the many truths of life
 


Soochow University EMA




 







A Rough Guide to our Fruit Fly Optimization Algorithm




 


Below, Bo Xing and I describe the general background of the foraging behavior behind our FOA.


1. Introduction


If you have been seeing small flies in your kitchen, they are probably fruit flies. In fact, the fruit fly can be viewed as the second smallest member among the model animals in the narrow sense, having only hundreds of neurons and no brain. During the summer, fruit flies are attracted to ripened or fermenting food through their sensing and perception abilities, especially osphresis (smell) and vision. Inspired by the behavior of real fruit flies, Pan (2012) recently proposed a new algorithm called the fruit fly optimization algorithm (FFOA).

 

2. Fruit Fly Optimization Algorithm


The Fruit Fly Optimization Algorithm is an evolutionary algorithm for global optimization based on the food-searching behavior of the fruit fly swarm. Fruit flies have keener senses and perception than many other species, particularly olfaction and vision. An individual fruit fly samples the different scents present in its surroundings and then uses its sensitive vision to fly in the general direction of the target. It transmits this information to the rest of the swarm and directs the 'flocking location' of the swarm closer to the location of the food. Through iterative evolution, the fly swarm comes closer and closer to the food source until the target is reached (see Figure 1). In Pan's original paper, the fly's direction and distance were defined in two dimensions. In our article, however, we make the fly's search area more flexible (three dimensions) and quantify the smell concentration of the food/target.

 


Fig. 1 Illustration of the group iterative food search of fruit flies


The procedure of my 3D-FOA algorithm is as follows:


Step 1: Generate the initial fruit fly swarm location.

Init X_axis; Init Y_axis; Init Z_axis
 

% p1=initial location parameter (default=20)
% X_axis= X_axis(p1)
% Y_axis= Y_axis(p1)
% Z_axis= Z_axis(p1)
 
X_axis=p1*(rand()-0.5);
Y_axis=p1*(rand()-0.5);
Z_axis=p1*(rand()-0.5);
 
Step 2: Calculate the search direction, distance, and smell concentration value of each fruit fly.
 
% p2= step parameter (default=2)
% X(i)= X(p1,p2);
% Y(i)= Y(p1,p2);
% Z(i)= Z(p1,p2);
% where i denotes the individual fly.
 
X(i)=X_axis+p2*(rand()-0.5);
Y(i)=Y_axis+p2*(rand()-0.5);
Z(i)=Z_axis+p2*(rand()-0.5);

The distance (D) of the food is defined from the origin and depends on the parameters p1 and p2, because the location of the food with respect to the fly swarm is unknown.
 
% p3=random interference parameter (default=2)
% D1=D1(p1,p2);
% D2=D2(p1,p2);
       
D(i,1)=(X(i)^2+Y(i)^2+Z(i)^2)^0.5;
D(i,2)=(X(i)^2+Y(i)^2+Z(i)^2)^0.5; 
The food's smell concentration values (olfactory values S, i.e., the decision variables x) are then calculated. Each is mainly defined as the reciprocal of the distance (D), perturbed by a small random fraction (gp) of that distance:

% gp1=gp1(p3);
% gp2=gp2(p3);
 
gp1=p3*(rand()-0.5);
gp2=p3*(rand()-0.5); 

Therefore, the variables (S1, S2) depend on three parameters (p1, p2, p3):
 
% S1(i)= S1(gp1, D1) = x1(i) (p1,p2,p3);
% S2(i)= S2(gp2, D2) = x2(i) (p1,p2,p3);
 
S1(i)=1/D(i,1)-gp1*D(i,1);
S2(i)=1/D(i,2)-gp2*D(i,2);
 
Substitute the smell concentration values S(i) into a smell judgment function (a.k.a. the objective function f(S)=Smell(i)), i.e., a function of D or S (g(D) or f(S)), where D=(D(1),D(2)). Here we let S=(S1,S2)=(x1,x2)=x.

% for example: two variables  
% Smell(i)= g(D(i,1),D(i,2))=f(S1(i),S2(i)) =f(x1(i),x2(i)). 
% as an example: the quadratic function of two variables

Smell(i)=(S1(i)+3)^2+(S2(i)+3)^2;   % i.e., f(x1(i),x2(i)) with x1=S1, x2=S2
 
Step 3: Find the best smell judgment value among the fruit flies (for the minimization example above, the best value is the lowest one).

[bestSmell,bestindex]=min(Smell);

Keep the best smell concentration value and the corresponding 3-D location (X_axis, Y_axis, Z_axis); at this moment, all the other fruit flies use their vision to fly towards that location. In other words, the fruit fly that senses the best concentration shares its location with all the other flies.

X_axis=X(bestindex);
Y_axis=Y(bestindex);
Z_axis=Z(bestindex);
Smellbest=bestSmell;
 
Step 4: Update the smell judgment value. If the optimal value or the maximum number of iterations (maxgen) is reached, stop. Otherwise, go to Step 5.
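
In the same MATLAB style as the listings above, this update check might be sketched as follows (Smellbest initialized to inf before the loop, and a '<' comparison matching the minimization example in Step 2, are my assumptions rather than part of the original post):

% Smellbest holds the best smell judgment value found so far (initialized to inf).
if bestSmell<Smellbest      % a better (lower) value was found in this generation
    Smellbest=bestSmell;    % keep it; Step 5 then moves the swarm to that location
end
% Stop once the iteration counter reaches maxgen (or the value is good enough).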

Step 5: Update the best location (X_axis, Y_axis, Z_axis) and return to Step 2.
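
Putting Steps 1 to 5 together, a minimal MATLAB/Octave sketch of the whole 3D-FOA loop might look like the listing below. The population size (popsize), the iteration budget (maxgen), the convergence-history vector (yy), and the choice of the quadratic example as the smell judgment function are illustrative assumptions on my part, not part of the original listings:

% Minimal 3D-FOA sketch; parameter defaults follow the steps above.
p1=20;                      % initial location parameter
p2=2;                       % step parameter
p3=2;                       % random interference parameter
popsize=30;                 % number of fruit flies per generation (assumed)
maxgen=100;                 % maximum number of iterations (assumed)

% Step 1: generate the initial fruit fly swarm location.
X_axis=p1*(rand()-0.5);
Y_axis=p1*(rand()-0.5);
Z_axis=p1*(rand()-0.5);

Smellbest=inf;              % best (lowest) smell judgment value found so far
yy=zeros(1,maxgen);         % convergence history (assumed bookkeeping)
X=zeros(1,popsize); Y=zeros(1,popsize); Z=zeros(1,popsize);
D=zeros(popsize,2); S1=zeros(1,popsize); S2=zeros(1,popsize);
Smell=zeros(1,popsize);

for gen=1:maxgen
    for i=1:popsize
        % Step 2: random search direction and distance around the swarm location.
        X(i)=X_axis+p2*(rand()-0.5);
        Y(i)=Y_axis+p2*(rand()-0.5);
        Z(i)=Z_axis+p2*(rand()-0.5);
        D(i,1)=(X(i)^2+Y(i)^2+Z(i)^2)^0.5;
        D(i,2)=(X(i)^2+Y(i)^2+Z(i)^2)^0.5;  % same distance for both variables, as above
        gp1=p3*(rand()-0.5);
        gp2=p3*(rand()-0.5);
        S1(i)=1/D(i,1)-gp1*D(i,1);          % smell concentration = decision variable x1
        S2(i)=1/D(i,2)-gp2*D(i,2);          % smell concentration = decision variable x2
        Smell(i)=(S1(i)+3)^2+(S2(i)+3)^2;   % smell judgment (objective) function
    end
    % Step 3: find the best (lowest) smell judgment value in this generation.
    [bestSmell,bestindex]=min(Smell);
    % Steps 4-5: keep any improvement and move the swarm to the best location.
    if bestSmell<Smellbest
        Smellbest=bestSmell;
        X_axis=X(bestindex);
        Y_axis=Y(bestindex);
        Z_axis=Z(bestindex);
    end
    yy(gen)=Smellbest;
end

Smellbest                   % best objective value found

Swapping in another smell judgment function only requires changing the single line that computes Smell(i); whether min or max is used in Step 3 (and '<' or '>' in the update) depends on whether that function is to be minimized or maximized.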

 
The complete flowchart of FOA is shown in Figure 2.
 


Fig. 2 Flowchart of the fruit fly optimization algorithm
 
 

3. Conclusions 


Nowadays, a number of algorithmic approaches based on animals' foraging behavior have been developed and applied to a variety of combinatorial optimization problems. Among them, FFOA is a new member. Two characteristics of fruit flies (osphresis and vision) are the building blocks of FFOA. The main advantages of FFOA include its simple computational process, its ease of understanding, and the ease and speed with which it can be applied.


Interested readers may refer to our blog www.flyfoa.com as a starting point for further exploration and exploitation of FFOA.

 
References  


1. Wei-Yuan Lin (2012), “3D-Novel Fruit Fly Optimization Algorithm and Its Applications in Economics”, working paper, Department of Economics, Soochow University, Taiwan.

2. Bo Xing and Wen-Jing Gao (2013), “Innovative Computational Intelligence: A Rough Guide to 134 Clever Algorithms”, Intelligent Systems Reference Library 62, Springer International Publishing, Switzerland, DOI: 10.1007/978-3-319-03404-1_11.

3. Nien Benjamin (2011), “Application of Data Mining and Fruit Fly Optimization Algorithm to Construct Financial Crisis Early Warning Model – A Case Study of Listed Companies in Taiwan”, Master's thesis, Department of Economics, Soochow University, Taiwan (in Chinese), Adviser: Wei-Yuan Lin.

4. W.-T. Pan (2012), “A New Fruit Fly Optimization Algorithm: Taking the Financial Distress Model as an Example”, Knowledge-Based Systems 26: 69–74.
 

Jing Si Aphorism:

To give is better than to receive. 

 

   Soochow University EMA