
Generalized Regression Networks
A generalized regression neural network (GRNN) is often used for function approximation. It has a radial basis layer and a special linear layer.
Network Architecture
The GRNN architecture is similar to that of the radial basis network, but its second layer is slightly different. The nprod box in that layer (code function normprod) produces S2 elements in vector n2. Each element is the dot product of a row of LW{2,1} and the input vector a1, normalized by the sum of the elements of a1. For instance, suppose that
LW{2,1} = [1 -2;3 4;5 6];
a{1} = [0.7;0.3];
Then
aout = normprod(LW{2,1},a{1})
aout =
0.1000
3.3000
5.3000
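For a single input vector, normprod amounts to an ordinary matrix product divided by the sum of the input's elements. A minimal check against the result above (here sum(a{1}) happens to be 1):
aout2 = (LW{2,1}*a{1})/sum(a{1})   % returns [0.1; 3.3; 5.3], matching aout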
The first layer is just like that for newrbe networks. It has as many neurons as there are input/target vectors in P. Specifically, the first-layer weights are set to P'. The bias b1 is set to a column vector whose elements are all 0.8326/SPREAD. The user chooses SPREAD, the distance an input vector must be from a neuron's weight vector for that neuron's output to be 0.5.
Again, the first layer operates just like the newrbe radial basis layer described previously. Each neuron's weighted input is the distance between the input vector and its weight vector, calculated with dist. Each neuron's net input is the product of its weighted input and its bias, calculated with netprod. Each neuron's output is its net input passed through radbas. If a neuron's weight vector is equal to the input vector (transposed), its weighted input will be 0, its net input will be 0, and its output will be 1. If a neuron's weight vector is at a distance of spread from the input vector, its weighted input will be spread and its net input will be sqrt(-log(.5)) (or 0.8326); its output will therefore be 0.5.
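As a minimal numeric sketch of this chain (with assumed values; the input sits exactly a distance of spread from the weight vector, so the output lands on 0.5):
spread = 1;              % user-chosen SPREAD (assumed value)
w = [0.5; 0.5];          % one first-layer weight vector (a stored input point)
p = [0.5; 1.5];          % input vector, at distance 1 from w
z = norm(p - w);         % weighted input: Euclidean distance (dist)
n = z*0.8326/spread;     % net input: weighted input times bias (netprod)
a = exp(-n^2)            % radbas output: exp(-0.8326^2) = 0.5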
The second layer also has as many neurons as input/target vectors, but here LW{2,1} is set to T.
Suppose you have an input vector p close to p_i, one of the input vectors among the input vector/target pairs used in designing the layer 1 weights. This input p produces a layer 1 output a1_i close to 1. This leads to a layer 2 output close to t_i, one of the targets used to form the layer 2 weights.
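Putting the two layers together, here is a hedged sketch of the whole forward pass on toy scalar data (values assumed for illustration):
P = [1 2 3];  T = [2 4 6];  spread = 1;   % design pairs (assumed)
p = 2.05;                                 % test input close to P(2)
z = abs(P' - p);                          % layer 1 weighted inputs: distances to P'
a1 = exp(-(z*0.8326/spread).^2);          % layer 1 outputs; a1(2) is close to 1
a2 = (T*a1)/sum(a1)                       % layer 2 normprod with LW{2,1} = T: about 4.07, near T(2)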
A larger spread leads to a large area around the input vector where layer 1 neurons will respond with significant outputs. Therefore if spread is small the radial basis function is very steep, so that the neuron with the weight vector closest to the input will have a much larger output than other neurons. The network tends to respond with the target vector associated with the nearest design input vector.
As spread becomes larger the radial basis function's slope becomes smoother and several neurons can respond to an input vector. The network then acts as if it is taking a weighted average between target vectors whose design input vectors are closest to the new input vector. As spread becomes larger more and more neurons contribute to the average, with the result that the network function becomes smoother.
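A quick way to see this trade-off is to design the same GRNN with a small and a large spread (illustrative values):
P = [1 2 3 4];  T = [0 1 0 1];
netSharp  = newgrnn(P,T,0.1);   % small spread: nearly a nearest-neighbor lookup
netSmooth = newgrnn(P,T,2.0);   % large spread: a smooth weighted average
sim(netSharp,2.4)               % about 1.00, the target of the nearest input (2)
sim(netSmooth,2.4)              % about 0.49, an average over all four targets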
Design (newgrnn)
You can use the function newgrnn to create a GRNN. For instance, suppose that three input and three target vectors are defined as
P = [4 5 6];
T = [1.5 3.6 6.7];
You can now obtain a GRNN with
net = newgrnn(P,T);
and simulate it with
P = 4.5;
v = sim(net,P);
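The spread can also be passed to newgrnn as an optional third argument (its default is 1.0); this is exactly the parameter that the FOA program below searches over:
net = newgrnn([4 5 6],T,0.7);   % same design data, but with SPREAD = 0.7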
You might want to try demogrn1. It shows how to approximate a function with a GRNN.
(Reference: adapted from the MATLAB Help documentation.)
%%***********************************************
%% FOAGRNN Program: FOA + GRNN
%%************************************************
%% EMA, Department of Economics, Soochow University, Taipei, Taiwan
%%
%% Pan's Original 2D-FOAGRNN
%% Use the FOA to adjust the spread of GRNN
%%
% Copyright by W-T Pan (2011)
% Revised by W-Y Lin (2011)
%*************************************
% Begin program
% Set parameters
% Clear the operating environment
clc;
clear all;
load TXY.txt;
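% TXY is assumed here to be an N-by-7 matrix: columns 1 to 6 are the input
% features and column 7 (col) is the target. The hard-coded 1:7 below then
% selects all columns, so the target is normalized along with the features.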
% for testing length of TXY
LengthofInputdata=length(TXY);
% TXY;
% Input No. of Normalized Data
% Or use mapminmax;
TrainOb=228 % No. of training data
% LengthofTrain=length(OP)
P = TXY(1:TrainOb,1:7);
LengthofTrain=length(P)
P=P'
% Normalize the data to [0,1] (min-max, per feature row)
for i9=1:length(P(:,1))
P(i9,:)=(P(i9,:)-min(P(i9,:)))/(max(P(i9,:))-min(P(i9,:)));
end
NP=P
LtofTrNormal=length(NP);
Ltr=length(NP);
[row,col]=size(TXY);
set=row/5;    % hold out one fifth of the rows as the test set
row=row-set;  % remaining rows are the training set
row1=row/2;   % half the training rows = one cross-validation fold
%***************************
Lth=length(TXY)
OP = TXY(1:TrainOb,1:7);
LengthofTrain=length(OP)
NP=NP'
% for testing length of traindata1
traindata1=NP(1:row1,1:col-1);
% length(traindata1);
% for testing length of traindata2
traindata2=NP(row1+1:row,1:col-1);
%length(traindata2);
% target of traindata1
t1=NP(1:row1,col);
% target of traindata2
t2=NP(row1+1:row,col);
t1=t1'
t2=t2'
tr1=traindata1'
tr2=traindata2'
la=1;             % fold toggle for the two-fold cross-validation below
X_axis=rand();    % initial location of the fruit fly swarm
Y_axis=rand();
maxgen=100;       % number of FOA iterations
% maxgen=50;
sizepop=10;       % population size (number of fruit flies)
%*********
for i=1:sizepop
X(i)=X_axis+20*rand()-10;    % random direction and distance of flight
Y(i)=Y_axis+20*rand()-10;
D(i)=(X(i)^2+Y(i)^2)^0.5;    % distance of the fly from the origin
S(i)=1/D(i);                 % smell concentration judgment value (candidate spread)
%***
g=0;
p=S(i);      % candidate spread for the GRNN
if 0.001>p   % guard against a near-zero spread
p=1;
end
% Cross validation
if la == 1
net=newgrnn(tr1,t1,p);
yc=sim(net,tr2);
y=yc-t2;
for ii=1:row1
g=g+y(ii)^2;
end
Smell(i)=(g/row1)^0.5; % RMSE
la=2;
else
net=newgrnn(tr2,t2,p);
yc=sim(net,tr1);
y=yc-t1;
for ii=1:row1
g=g+y(ii)^2;
end
Smell(i)=(g/row1)^0.5; % RMSE
la=1;
end
end
%***
[bestSmell bestindex]=min(Smell);   % find the fly with the lowest RMSE
%%
X_axis=X(bestindex);
Y_axis=Y(bestindex);
bestS=S(bestindex);
Smellbest=bestSmell;
%
for gen=1:maxgen
gen
bestS
for i=1:sizepop
%
g=0;
X(i)=X_axis+20*rand()-10;
Y(i)=Y_axis+20*rand()-10;
%
D(i)=(X(i)^2+Y(i)^2)^0.5;
%
S(i)=1/D(i);
%
p=S(i);      % candidate spread for the GRNN
if 0.001>p   % guard against a near-zero spread
p=1;
end
% Cross validation
if la == 1
net=newgrnn(tr1,t1,p);
yc=sim(net,tr2);
y=yc-t2;
for ii=1:row1
g=g+y(ii)^2;
end
Smell(i)=(g/row1)^0.5; % RMSE
la=2;
else
net=newgrnn(tr2,t2,p);
yc=sim(net,tr1);
y=yc-t1;
for ii=1:row1
g=g+y(ii)^2;
end
Smell(i)=(g/row1)^0.5;
la=1;
end
end
%***
[bestSmell bestindex]=min(Smell); % find the min of RMSE
%***
if bestSmell<Smellbest
X_axis=X(bestindex);
Y_axis=Y(bestindex);
bestS=S(bestindex);
Smellbest=bestSmell;
end
%
yy(gen)=Smellbest;
Xbest(gen)=X_axis;
Ybest(gen)=Y_axis;
end
%
figure(1)
plot(yy)
title('Optimization process','fontsize',12)
xlabel('Iteration Number','fontsize',12);ylabel('RMSE','fontsize',12);
bestS
Xbest
Ybest
figure(2)
plot(Xbest,Ybest,'b.');
title('Fruit fly flying route','fontsize',14)
xlabel('X-axis','fontsize',12);ylabel('Y-axis','fontsize',12);
%*******Begin to Predict
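% The best spread found by the swarm (bestS) is reused below: the final
% GRNN is designed on all training rows and then simulated on the test rows.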
% TestData
LengthofInputdata=length(TXY)
% Input No. of Normalized Testing Data
% LengthofAll=length(OP)
P = TXY(1:LengthofInputdata,1:7);
% LengthofAllData=length(P);
% Length of testing data (All Data Normalized)
% Changed Non-normalized Data into Normalized Data
P=P';
for i9=1:length(P(:,1))
P(i9,:)=(P(i9,:)-min(P(i9,:)))/(max(P(i9,:))-min(P(i9,:)));
end
Nt=P';
% Training Data
TrainData=Nt(1:row,1:col-1);
tr=TrainData';
% tr=[tr1 tr2]
% LTr=length(tr)
% Testing Data
TestData=Nt(row+1:LengthofInputdata,1:col-1);
% predict value of testdata
% No target Y
test3=TestData';
LengthofTestData=length(TestData)
t3=TXY(row+1:LengthofInputdata,col);
% length_tr3=length(tr3);
% tt=Nt(1:row,col);
tt=[t1 t2];
% Ltt=length(tt)
% bestS for parameter p;
p=bestS;
% Put TrainData into the GRNN
net=newgrnn(tr,tt,p);
%% predict value of testdata
ytest=sim(net,test3);
Y_hat=ytest'
% length_Y_hat=length(Y_hat)
% Predicted output Y_hat normalized
Lny=length(Y_hat);
P = Y_hat(1:Lny,1);
P=P';
LengthofTrain=length(P)
% Change non-normalized data into normalized data
for i9=1:length(P(:,1))
P(i9,:)=(P(i9,:)-min(P(i9,:)))/(max(P(i9,:))-min(P(i9,:)));
end
NPP=P';
% target of testdata
Target3=t3;
save Y_hat
% End of Program
Test it!
Good Luck!
References:
1. Pan, W.-T. (2011). Fruit Fly Optimization Algorithm. Taiwan: Tsang Hai Book Publishing Co., ISBN 978-986-6184-70-3 (in Chinese).
2. Nien, Benjamin (2011). Application of Data Mining and Fruit Fly Optimization Algorithm to Construct Financial Crisis Early Warning Model – A Case Study of Listed Companies in Taiwan. Master Thesis, Department of Economics, Soochow University, Taiwan (in Chinese). Adviser: Wei-Yuan Lin.
3. Lin, Wei-Yuan (2012). "A Hybrid Approach of 3D Fruit Fly Optimization Algorithm and General Regression Neural Network for Financial Distress Forecasting." Working paper, Soochow University, Taiwan, Jan. 2012.
Jing Si Aphorism:
The greater our generosity, the greater our blessings.
Soochow University EMA