QUÂN HỒNG
2013-11-14 17:42:09 UTC
Dear all,
I have a question and would like to request your kind help.
I am studying ANNs for predicting the water level in an area.
My data are quite simple: 6 columns
(Date | Rainfall | Min temperature | Max temperature | Water level | Solar radiation) and 365 rows (one year of observations).
My question: I need to predict the data one or more years ahead based on the given historical data (actually I need data covering about 20 years).
I tried some MATLAB code such as Time Series Prediction, but MATLAB could not recognize some of the terms. I am using MATLAB 7.8 (R2009a). Anyway, I have written the code below.
- I don't understand which values are the inputs and which are the targets in my data set.
- How do I show a graph that presents the predicted data?
- Which parameters should be set to get better results?
- When the regression graph appears after training, how do I know whether the result is good?
Here is my amateur code; please help me fix it, thanks:
-----------------------------
%NEURAL NETWORK
%Feed-forward network
%Read the data from an Excel file with the function 'xlsread'
%For MLP training
%Import the input data
Data_Inputs=xlsread('data.xlsx');  % training data (365 rows x 6 columns)
Testing_Data=xlsread('data.xlsx'); % testing data (365 rows x 6 columns)
%Randomize the order of the training samples with the function 'randperm'
Shuffling_Inputs=Data_Inputs(randperm(365),:); % shuffle the 365 rows, keeping all 6 columns
Training_Set=Shuffling_Inputs(1:365,2:5);      % training inputs, columns 2-5
Target_Set=Shuffling_Inputs(1:365,6);          % training targets, column 6 (the last column)
Testing_Set=Testing_Data(1:365,2:5);           % testing inputs, columns 2-5
Testing_Target_Set=Testing_Data(1:365,6);      % testing targets, column 6
%Note: with the column layout described above, column 6 is solar radiation;
%if water level (column 5) is the intended target, swap the indices accordingly.
%Transpose so that samples are columns, then normalize with 'mapstd'
[pn,ps]=mapstd(Training_Set');
[tn,ts]=mapstd(Target_Set');
%pn and tn contain the normalized values of the inputs and targets respectively
%ps and ts contain the means and standard deviations of the original inputs
%and targets
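%Added example (assumption, not in the original post): the stored settings
%ps and ts can be re-applied to new data and reversed on network outputs, e.g.
%   pn_new = mapstd('apply',New_Inputs',ps);  % normalize new inputs with the training statistics
%   y      = mapstd('reverse',yn,ts);         % map normalized outputs back to the original units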
%Create a feed-forward back-propagation network with the function 'newff'
net=newff(pn,tn,[4,100],{'logsig','logsig','purelin'});
%pn and tn are the normalized inputs and targets.
%[4] would give a single hidden layer with 4 neurons; [4,100] gives two
%hidden layers with 4 and 100 neurons respectively.
%The cell array {tf} gives the transfer function of each layer, one entry
%per layer, e.g. 'tansig' or 'logsig' for hidden layers and 'purelin' for the output layer
%Configure the network
net.trainFcn='trainlm';
net.trainParam.min_grad=1e-8;
net.trainParam.epochs=1000;
%net.trainParam.lr=0.4; % learning rate; used by gradient-descent training functions such as 'traingd', not by 'trainlm'
net.trainParam.max_fail=20;
%'trainFcn': defines the function used to train the network. It can be set
%to the name of any training function, e.g. 'trainlm' for
%Levenberg-Marquardt back-propagation
%'trainParam.min_grad': the minimum performance gradient
%'trainParam.epochs': the maximum number of epochs to train
%'trainParam.lr': the learning rate (for gradient-descent training)
%'trainParam.max_fail': the maximum number of validation failures
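%Added note (assumption, not in the original post): 'max_fail' acts on the
%validation subset, whose size comes from the data division settings that
%'newff' creates by default; they can be set explicitly, e.g.:
%net.divideFcn='dividerand';       % random division into training/validation/test subsets
%net.divideParam.trainRatio=0.70;  % example split ratios
%net.divideParam.valRatio=0.15;
%net.divideParam.testRatio=0.15;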
%-------------------------------------------------
%For RBF training
%Use the function 'newrb' to create an RBF network
%net=newrb(pn,tn,0.01,50,200,50);
%where
%pn and tn are the inputs and targets, respectively.
%'goal': denotes the mean squared error goal, set here to 0.01
%'spread': the spread of the radial basis functions, varied between
%1 and 60 and denoted here by 'i'
%'mn': the maximum number of neurons to add, varied between 5 and 600 and
%denoted here by 'j'
%'df': the number of neurons to add between displays, set here to 50
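%Added sketch (assumption, not in the original post): the sweep over 'i'
%(spread) and 'j' (maximum neurons) mentioned above could be written as a
%loop that keeps the RBF network with the lowest training error:
%best_perf=Inf;
%for i=1:60
%    for j=5:5:600
%        rbf_net=newrb(pn,tn,0.01,i,j,50);
%        perf=mse(tn-sim(rbf_net,pn));
%        if perf<best_perf, best_perf=perf; best_rbf=rbf_net; end
%    end
%end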
%-------------------------------------------------
%Train the MLP network using the 'trainFcn' and 'trainParam' settings above
[net,tr]=train(net,pn,tn);
%save('NetworkName','net'); % save the trained network; the file name can be chosen by the designer
%Simulate the network on the training inputs
Outputs=sim(net,pn);
%Plot the training progress and the fit on the training data
plotperf(tr)
%plotfit(net,pn,tn) % plotfit is intended for single-input fitting problems, so it is commented out here
plotregression(tn,Outputs) % regression of training targets against network outputs
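%Added sketch (assumption, not part of the original post): to plot the
%prediction on the held-out testing set, normalize its inputs with the
%stored training statistics, simulate, and map the outputs back to the
%original units before comparing them with the observed values.
pn_test=mapstd('apply',Testing_Set',ps);             % normalize testing inputs with the training settings
Test_Outputs=mapstd('reverse',sim(net,pn_test),ts);  % de-normalized predictions (1 x 365 row vector)
figure
plot(1:365,Testing_Target_Set,'b',1:365,Test_Outputs,'r')
legend('Observed','Predicted')
xlabel('Day of year')
ylabel('Target value')
plotregression(Testing_Target_Set',Test_Outputs)     % regression of observed vs. predicted testing values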