Santa-Fe regression with matlab (nmse = 0.042)
%% Kernel Recursive Least Squares demo on the Santa Fe laser series
% using the krls resources on this page: http://web.mit.edu/~wingated/www/resources.html
%% load data and prepare inputs
ydat = [urlread('http://www-psych.stanford.edu/~andreas/Time-Series/SantaFe/A.dat') urlread('http://www-psych.stanford.edu/~andreas/Time-Series/SantaFe/A.cont')];
y = textscan(ydat, '%f');
y = y{1}(1:1100)/256; % keep 1100 samples, scaled to [0,1]
% y = (sin((1:115)*pi/5)+0.05*randn(1,115))'; % toy alternative series
% build an auto-regressive design matrix: column i holds y delayed by i samples
X = zeros(length(y), 40);
for i = 1:size(X,2)
    X(:,i) = [nan(i,1); y(1:end-i)]; % NaN-pad the first i rows
end
%% kernel-rls regression
% parameters
% - here we test them directly; normally we should hold out a
%   validation set and search a grid for the optimal parameters.
%   Once the best parameters are found, the model can be evaluated
%   on the training+validation set
kernel_func = @rbf4nn;  % the Gaussian kernel
kparam = 0.9;           % the variance of the Gaussian
ald_thresh = 0.01;      % the (approximate) linear dependence threshold
n = 40;                 % auto-regressive window size
ltest = 100;
Ytest = y(end-ltest+1:end);
X_ = X(n+1:end-ltest, 1:n);
Y_ = y(n+1:end-ltest);
% we could try to train with missing values, starting at 2 instead of n+1
ltraining = size(X_,1);
lvalidation = 0;
kp = krls_init(kernel_func, kparam, ald_thresh, X_(1,:)', Y_(1));
for i = 2:ltraining
    kp = krls(kp, X_(i,:)', Y_(i));
end
% iterated multi-step forecast: feed each prediction back in as the newest lag
v = [Y_(ltraining) X_(ltraining, 1:n-1)];
prediction = zeros(ltest, 1);
for j = 1:ltest
    prediction(j) = krls_query(kp, v');
    v = [prediction(j) v(1:n-1)];
end
% goodnessOfFit returns a fit measure (1 = perfect), so 1 - fit is the error
testNMSE = 1-goodnessOfFit(Ytest, prediction, 'NMSE')
%plot(1:length(Y_),Y_,'b-',1:length(Y_),r.f,'r--')
%figure,plot(1:length(Yvalidation),Yvalidation,'b-',1:length(Yvalidation),r_.f,'r--')
%figure
plot(1:length(Ytest),Ytest,'b-',1:length(Ytest),prediction,'r--')
%% reinforce training
% detailed p.28 of http://webee.technion.ac.il/people/rmeir/Publications/KrlsReport.pdf
ireinforce = 10;
nmses = repmat(testNMSE, 1, ireinforce);
predictions = repmat(prediction, 1, ireinforce);
for i = 2:ireinforce
    % make new inputs by injecting the fitted values
    fitted = zeros(ltraining, 1);
    for j = i:ltraining
        fitted(j) = krls_query(kp, X_(j,:)');
    end
    X_(2:end,2:end) = X_(1:end-1,1:end-1);
    X_(i:end,1) = fitted(1:end-i+1);
    for j = i:ltraining
        kp = krls(kp, X_(j,:)', Y_(j));
    end
    v = [Y_(ltraining) X_(ltraining, 1:n-1)];
    prediction = zeros(ltest, 1);
    for j = 1:ltest
        prediction(j) = krls_query(kp, v');
        v = [prediction(j) v(1:n-1)];
    end
    nmses(i) = 1-goodnessOfFit(Ytest, prediction, 'NMSE');
    predictions(:,i) = prediction;
    %plot(1:length(Ytest),Ytest,'b-',1:length(Ytest),predictions,'r--')
end
%% plot best reinforcement
[minNMSE, iNMSE] = min(nmses);
figure
plot(1:length(Ytest),Ytest,'b-',1:length(Ytest),predictions(:,iNMSE),'r--')
fprintf('nmse = %f\n', minNMSE);
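The script above relies on the `krls_init` / `krls` / `krls_query` routines from wingated's page, which are not included in the gist. For readers without MATLAB, a minimal NumPy sketch of the same algorithm (kernel RLS with the approximate-linear-dependence test of Engel, Mannor & Meir 2004) might look like the following; the class and names here are illustrative, not the originals:

```python
import numpy as np

def rbf(a, b, sigma2=0.9):
    """Gaussian kernel; sigma2 plays the role of kparam above."""
    d = a - b
    return np.exp(-np.dot(d, d) / (2.0 * sigma2))

class KRLS:
    """Kernel RLS with ALD sparsification (sketch of Engel et al. 2004)."""
    def __init__(self, x, y, kernel=rbf, ald_thresh=0.01):
        self.kernel = kernel
        self.nu = ald_thresh
        self.dict = [np.asarray(x, float)]          # dictionary of kept inputs
        ktt = kernel(self.dict[0], self.dict[0])
        self.Kinv = np.array([[1.0 / ktt]])         # inverse kernel matrix
        self.alpha = np.array([y / ktt])            # expansion coefficients
        self.P = np.array([[1.0]])

    def _kvec(self, x):
        return np.array([self.kernel(d, x) for d in self.dict])

    def update(self, x, y):
        x = np.asarray(x, float)
        k = self._kvec(x)
        a = self.Kinv @ k
        delta = self.kernel(x, x) - k @ a           # ALD test value
        err = y - k @ self.alpha
        if delta > self.nu:                         # grow the dictionary
            self.dict.append(x)
            m = len(self.alpha)
            Kinv = np.empty((m + 1, m + 1))
            Kinv[:m, :m] = delta * self.Kinv + np.outer(a, a)
            Kinv[:m, m] = -a
            Kinv[m, :m] = -a
            Kinv[m, m] = 1.0
            self.Kinv = Kinv / delta
            P = np.zeros((m + 1, m + 1))
            P[:m, :m] = self.P
            P[m, m] = 1.0
            self.P = P
            self.alpha = np.append(self.alpha - a * err / delta, err / delta)
        else:                                       # dictionary unchanged
            q = self.P @ a / (1.0 + a @ self.P @ a)
            self.P = self.P - np.outer(q, a @ self.P)
            self.alpha = self.alpha + (self.Kinv @ q) * err

    def predict(self, x):
        return self._kvec(np.asarray(x, float)) @ self.alpha
```

Feeding it the 40-lag rows of `X_` as inputs, exactly as the MATLAB loop does, should reproduce the same kind of sparse online fit; the dictionary growth is controlled by `ald_thresh` just like the `ald_thresh` parameter above.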
Thank you for your response.
Actually, I tried to rewrite the code line by line to understand what is happening at each line, and I was finally able to reproduce the results you got. However, I am not sure why the "KRLS paper" was able to achieve a lower error (NMSE = 0.026)
me too, that's the best I could achieve
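For reference, the NMSE figure quoted from the KRLS paper is the mean squared prediction error normalized by the variance of the target series: 0 is a perfect fit and 1 is no better than predicting the mean. A small Python helper (illustrative, not part of the gist) makes the definition explicit:

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE / Var(y_true).
    0 is a perfect fit; 1 means no better than predicting the mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)
```

The MATLAB script computes `1 - goodnessOfFit(...)` because `goodnessOfFit` with the `'NMSE'` cost returns a fit measure where 1 is perfect, so subtracting from 1 recovers the error defined above.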
I don't remember at all about this (it's 10 years old), done with matlab I think. I can see `nmses` is defined, but not `nmse`, which is strange; try replacing `nmse` with `nmses` or something