Discussion:
fmincon error: Supplied objective function must return a scalar value.
Alberto Grassi
2017-06-26 17:29:04 UTC
Permalink
Hi! I was minimizing a negative likelihood with 6 parameters and everything was going well, but when I added a 7th parameter, I obtained this error:
Error using fmincon (line 609)
Supplied objective function must return a scalar
value.
This is what I am trying to minimize:
function [neg_likelihood,nlh_grad]=nlh_garchmidas2(parameter,g1,K,r,X)

% this is the negative likelihood function

miu=parameter(1);
alpha=parameter(2);
beta=parameter(3);
theta=parameter(4);
w1=parameter(5);
w2=parameter(6);
m=parameter(7);


[T,C]=size(r);


% tau series construction -----------------------------------------------
tau = (m^2) + (theta^2) * ( X * betapolyn2(K,[(K-1):-1:1]',w1,w2) ); % function of theta, w1 and w2
% -----------------------------------------------------------------------
tau_zero = find(tau<=0);
if ~isempty(tau_zero)
    tau(tau_zero)   % display the non-positive tau values
    parameter       % display the current parameter vector
    error('tau has non-positive entries');
end

% g series construction --------------------------------------------------
g=zeros(T,1);
g(1)=g1;
for i=2:T
    g(i) = (1-alpha-beta)...
         + alpha * ( (r(i-1)-miu)^2 ) / tau(i-1)...
         + beta * g(i-1);
end
% ------------------------------------------------------------------------


% negative likelihood function --------------------------------------
neg_likelihood= T/2*log(2*pi) + (1/2)*sum( ((r-miu).^2)./(tau.*g) ) + (1/2)*sum( log(tau.*g) );
% -------------------------------------------------------------------

dg_dmiu=zeros(T,1);
dg_dalpha=zeros(T,1);
dg_dbeta=zeros(T,1);
dg_dtheta=zeros(T,1);
dg_dw1=zeros(T,1);
dg_dw2=zeros(T,1);
dg_dm=zeros(T,1);

k_vec=[(K-1):-1:1]';

dtau_dtheta = (2*theta) * ( X * betapolyn2(K,k_vec,w1,w2) );

Nf=((k_vec/K)'.^(w1-1))*((1-k_vec/K).^(w2-1));
mf1=(((1-k_vec/K)'.^(w2-1))* ((k_vec/K).^(w1-1)).*log(k_vec/K) )/(Nf.^2);
mf2=(((k_vec/K)'.^(w1-1))* ((1-k_vec/K).^(w2-1)).*log(1-k_vec/K) )/(Nf.^2);

dtau_dw1 = (theta^2) * X * ( betapolyn2(K,k_vec,w1,w2).*log(k_vec/K) )/Nf...
- (theta^2) * X * betapolyn2(K,k_vec,w1,w2) * mf1;

dtau_dw2 = (theta^2) * X * ( betapolyn2(K,k_vec,w1,w2).*log(1-k_vec/K) )/Nf...
- (theta^2) * X * betapolyn2(K,k_vec,w1,w2) * mf2;
dtau_dm = (2*m);

for i=2:T
dg_dmiu(i) = -2*alpha*(r(i-1)-miu)/tau(i-1) + beta*dg_dmiu(i-1);
dg_dalpha(i) = -1 + ((r(i-1)-miu)^2)/tau(i-1) + beta*dg_dalpha(i-1);
dg_dbeta(i) = -1 + g(i-1) + beta*dg_dbeta(i-1);
dg_dtheta(i) = -alpha*((r(i-1)-miu)^2)/(tau(i-1)^2)*dtau_dtheta(i-1) + beta*dg_dtheta(i-1);
dg_dw1(i) = -alpha*((r(i-1)-miu)^2)/(tau(i-1)^2)*dtau_dw1(i-1) + beta*dg_dw1(i-1);
dg_dw2(i) = -alpha*((r(i-1)-miu)^2)/(tau(i-1)^2)*dtau_dw2(i-1) + beta*dg_dw2(i-1);
dg_dm(i) = -alpha*((r(i-1)-miu)^2)/(tau(i-1)^2)*dtau_dm + beta*dg_dm(i-1);
end

dL_dmiu =(1/2)*( sum( -2*(r-miu)./tau./g ) - sum( ((r-miu).^2)./tau./(g.^2).*dg_dmiu ) + sum( dg_dmiu./g ) );
dL_dalpha =(1/2)*( -sum( ((r-miu).^2)./tau./(g.^2).*dg_dalpha ) + sum( dg_dalpha./g ) );
dL_dbeta =(1/2)*( -sum( ((r-miu).^2)./tau./(g.^2).*dg_dbeta ) + sum( dg_dbeta./g ) );
dL_dtheta =(1/2)*( -sum( ((r-miu).^2)./(tau.^2)./(g.^2).*(dtau_dtheta.*g + tau.*dg_dtheta) ) + sum( (dtau_dtheta.*g + tau.*dg_dtheta)./tau./g ) );
dL_dw1 =(1/2)*( -sum( ((r-miu).^2)./(tau.^2)./(g.^2).*(dtau_dw1.*g + tau.*dg_dw1) ) + sum( (dtau_dw1.*g + tau.*dg_dw1)./tau./g ) );
dL_dw2 =(1/2)*( -sum( ((r-miu).^2)./(tau.^2)./(g.^2).*(dtau_dw2.*g + tau.*dg_dw2) ) + sum( (dtau_dw2.*g + tau.*dg_dw2)./tau./g ) );
dL_dm =(1/2)*( -sum( ((r-miu).^2)./(tau.^2)./(g.^2).*(dtau_dm.*g + tau.*dg_dm) ) + sum( (dtau_dm.*g + tau.*dg_dm)./tau./g ) );

% gradient ---------------------------------------------------------
nlh_grad=[dL_dmiu;dL_dalpha;dL_dbeta;dL_dtheta;dL_dw1;dL_dw2;dL_dm];
% ------------------------------------------------------------------

Everything went wrong when I used w1 and w2 instead of only w. Can someone explain why?
Thanks in advance
Alan Weiss
2017-06-27 17:09:15 UTC
Permalink
Post by Alberto Grassi
Hi! I was minimizing a negative likelihood with 6 parameters and
everything was going well, but when I added a 7th parameter, I
obtained this error:
Error using fmincon (line 609)
Supplied objective function must return a scalar
value.
*SNIP*
Post by Alberto Grassi
Everything went wrong when I used w1 and w2 instead of only w. Can someone explain why?
Thanks in advance
I cannot debug your function, because I do not see code for betapolyn2.
I also do not see how you are calling fmincon, nor do I see your options.

There are several possibilities. One is that your function is written
inefficiently, because it always returns two outputs. It should return a
second output only when it is called with the option that asks for the
objective gradient (nargout > 1), as documented:
https://www.mathworks.com/help/optim/ug/writing-scalar-objective-functions.html#bsj1e55
Because of this coding, it is possible that you get an error when you
pass options unexpectedly.
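The pattern Alan describes looks roughly like this (a minimal sketch only; the function and variable names are taken from the code above):

```matlab
function [neg_likelihood,nlh_grad] = nlh_garchmidas2(parameter,g1,K,r,X)
% ... compute neg_likelihood exactly as before ...
if nargout > 1
    % Compute the gradient only when the caller asks for a second output,
    % i.e. when fmincon is run with the GradObj option set to 'on'.
    nlh_grad = [dL_dmiu;dL_dalpha;dL_dbeta;dL_dtheta;dL_dw1;dL_dw2;dL_dm];
end
```

This way the (expensive) gradient code is skipped entirely on the many evaluations where fmincon only needs the function value.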

The more likely possibility is that when you use w1 and w2, your
objective function is returning a vector value rather than a scalar
value. Use the debugger to see what kind of value is being returned.
https://www.mathworks.com/help/matlab/debugging-code.html

Sorry that I can't be more specific,

Alan Weiss
MATLAB mathematical toolbox documentation
Alberto Grassi
2017-06-29 16:33:05 UTC
Permalink
Post by Alan Weiss
*SNIP*
Thank you, Alan. Here is what was missing:
miu_i = 0.00064;
alpha_i = 0.08;
beta_i = 0.90;
theta_i = 0.0001;
w1_i = 5;
w2_i = 9;
m_i = -3;

parameter_i=[miu_i;alpha_i;beta_i;theta_i;w1_i;w2_i;m_i];

g1=1;

options=optimset('Display','iter','MaxFunEvals',3000,'MaxIter',3000,'TolFun',1e-6,'TolX',1e-6);
options =optimset(options,'GradObj','off');
options=optimset(options,'DerivativeCheck','off');
options=optimset(options,'Diagnostics','off');

A=[
0 1 1 0 0 0 0;
0 -1 0 0 0 0 0;
0 0 -1 0 0 0 0;
0 0 0 0 -1 0 0;
0 0 0 0 0 -1 0;
0 0 0 0 1 1 0;
0 0 0 0 0 0 -1;
];

b=[
.999999999;
0;
0;
-1;
-1;
2000;
-.00000001
];


disp('Optimization Results ------------------')
[parameter,fval,exitflag,output,lambda,grad,hessian] = fmincon('nlh_garchmidas2',parameter_i,A,b,[],[],[],[],[],options,g1,K,r,X);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
And the betapolyn2 function:
function [s]=betapolyn2(K,j,w1,w2)
%
% Beta weighting function in the MIDAS filter
%
j_vec=[(K-1):-1:1]';
N=((j_vec/K).^(w1-1))*(((1-j_vec/K))'.^(w2-1));
s=((j/K).^(w1-1))*((1-j/K)'.^(w2-1))/N;


If you could help me debug it, that would be great. Thanks
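For reference, one frequent cause of a non-scalar objective in MATLAB is using the matrix product * where the elementwise product .* was intended. In betapolyn2 above, N is a column vector times a row vector, i.e. an outer product (a (K-1)-by-(K-1) matrix) rather than a scalar normalizer, so s and everything built from it become non-scalar. A possible elementwise version, assuming the usual Beta-lag weighting scheme is what was intended:

```matlab
function s = betapolyn2(K,j,w1,w2)
% Beta weighting function in the MIDAS filter (elementwise sketch)
j_vec = ((K-1):-1:1)';
% scalar normalizer: sum of the unnormalized weights over all lags
N = sum( ((j_vec/K).^(w1-1)) .* ((1-j_vec/K).^(w2-1)) );
% normalized weights for the requested lags j (sum to 1 when j = j_vec)
s = ((j/K).^(w1-1)) .* ((1-j/K).^(w2-1)) / N;
```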
Alan Weiss
2017-06-29 17:36:13 UTC
Permalink
Post by Alberto Grassi
*SNIP*
If you could help me debug it, that would be great. Thanks
You set the GradObj option to off, meaning your objective function does
not make use of gradient information. Yet you take the trouble to
calculate the gradient. This is inconsistent. Either set the GradObj
option to on, or do not return the gradient in the objective function.

Also, you set some bounds by setting linear inequalities. Don't do that.
Set your bounds explicitly, and leave the linear inequalities for
expressions involving at least two variables.

You are passing extra parameters using an older, undocumented syntax. I
suggest that you use the documented syntax:
https://www.mathworks.com/help/optim/ug/passing-extra-parameters.html
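Putting both suggestions together, the call might look like this (a sketch only; the lb values simply restate the single-variable rows of the A and b shown above, and the anonymous function is the documented way to pass g1, K, r, X):

```matlab
% bounds replacing the single-variable rows of A*x <= b
lb = [-Inf; 0; 0; -Inf; 1; 1; 1e-8];   % alpha,beta >= 0; w1,w2 >= 1; m >= 1e-8
ub = [];                               % no upper bounds
% keep only the genuinely linear constraints:
% alpha + beta <= 0.999999999 and w1 + w2 <= 2000
A = [0 1 1 0 0 0 0;
     0 0 0 0 1 1 0];
b = [0.999999999; 2000];
% documented syntax for passing the extra parameters
fun = @(p) nlh_garchmidas2(p,g1,K,r,X);
[parameter,fval,exitflag,output,lambda,grad,hessian] = ...
    fmincon(fun,parameter_i,A,b,[],[],lb,ub,[],options);
```

With explicit bounds, fmincon never evaluates the objective outside them, which also protects the likelihood from invalid regions such as w1 < 1.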

As for debugging, I once again suggest that you learn to use the debugger.

Good luck,

Alan Weiss
MATLAB mathematical toolbox documentation
Alberto Grassi
2017-07-07 07:21:07 UTC
Permalink
Post by Alan Weiss
*SNIP*
Thank you, Alan! I'm still working on the 6-parameter model; I've improved it thanks to your advice, and it's running faster.
I need to ask one last question: when fmincon returns the Hessian, I take the diagonal of its inverse to obtain the variances of my parameters, but sometimes I end up with negative variances (the diagonal elements of the Hessian are positive, but some off-diagonal elements are not, so its inverse has negative elements on the main diagonal). Would you be so kind as to tell me how I can fix this last problem?

Thanks in advance,
Alberto
Alan Weiss
2017-07-07 12:24:57 UTC
Permalink
Post by Alberto Grassi
*SNIP*
I am sorry to say that you should not count on the returned fmincon
Hessian being accurate. See
https://www.mathworks.com/help/optim/ug/hessian.html

If your returned solution is not at a constraint boundary, then you can
use fminunc, starting from the final fmincon solution, to get an
accurate Hessian. Or there are tools on the File Exchange that can
return an accurate Hessian, such as
https://www.mathworks.com/matlabcentral/fileexchange/13490-adaptive-robust-numerical-differentiation
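The fminunc refinement Alan suggests can be sketched as follows (assuming the fmincon solution, parameter, is strictly inside the feasible region):

```matlab
% re-run unconstrained from the constrained solution to obtain
% a more trustworthy Hessian at the optimum
uopts = optimset('GradObj','off','Display','off');
[parameter2,fval2,exitflag2,output2,grad2,hessian2] = ...
    fminunc(@(p) nlh_garchmidas2(p,g1,K,r,X), parameter, uopts);
param_var = diag(inv(hessian2));   % approximate parameter variances
```

If the refined Hessian is positive definite, all entries of param_var will be positive, and their square roots give the usual standard-error estimates.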

If your returned solution is at a constraint boundary, then I don't know
how you can get accurate confidence intervals, even theoretically.

Good luck,

Alan Weiss
MATLAB mathematical toolbox documentation
