Octave: logistic regression: difference between fmincg and fminunc
Posted: 2012-08-20 08:29:12

[Question]

I regularly use fminunc to solve logistic regression problems. I've read online that Andrew Ng uses fmincg instead of fminunc, with the same arguments. The results differ, and fmincg is usually a little more accurate, though not by much. (I am comparing the results of fmincg and fminunc on the same data.)

So my question is: what is the difference between these two functions? What algorithm does each of them implement? (Right now I just use these functions without knowing how they work.)

Thanks :)
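For concreteness, here is a minimal sketch of the comparison I am making, assuming a costFunction(theta, X, y) that returns both the cost and its gradient (the variable names are illustrative, and fmincg.m must be on the path since it is not part of core Octave):

options = optimset('GradObj', 'on', 'MaxIter', 50);
f = @(t) costFunction(t, X, y);          % same cost function for both solvers
[theta_u, cost_u] = fminunc(f, initial_theta, options);  % built-in solver
[theta_c, cost_c] = fmincg(f, initial_theta, options);   % the Coursera function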
[Question comments]

Where is fmincg? It is not in Octave core, nor in any Forge package.
[Answer 1]

You have to look at the code of fmincg, because it is not part of Octave. After some searching, I found that it is a function file provided by the Machine Learning class of Coursera as part of the homework. See the comments and answers on this question for a discussion of the algorithms.
[Comments]

Oh, thanks :) I hadn't noticed there was an fmincg file in my folder. Thanks a lot.

The comments on that question do not discuss fmincg; they only discuss fminsearch, which is irrelevant here. @emrea's answer shows that fmincg implements the conjugate gradient method.

[Answer 2]
In contrast to other answers here suggesting that the primary difference between fmincg and fminunc is accuracy or speed, for some applications the most important difference may be memory efficiency. In Programming Exercise 4 (i.e., neural network training) of Andrew Ng's Machine Learning class at Coursera, the comment in ex4.m about fmincg is:
%% =================== Part 8: Training NN ===================
%  You have now implemented all the code necessary to train a neural
%  network. To train your neural network, we will now use "fmincg", which
%  is a function which works similarly to "fminunc". Recall that these
%  advanced optimizers are able to train our cost functions efficiently as
%  long as we provide them with the gradient computations.
Like the original poster, I was also curious how the ex4.m results might differ when using fminunc instead of fmincg. So I tried replacing the fmincg call
options = optimset('MaxIter', 50);
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);
with the following call to fminunc
options = optimset('GradObj', 'on', 'MaxIter', 50);
[nn_params, cost, exit_flag] = fminunc(costFunction, initial_nn_params, options);
but got the following error message from a 32-bit Octave build running on Windows:
error: memory exhausted or requested size too large for range of Octave's index type -- trying to return to prompt
A 32-bit MATLAB build running on Windows gave a more detailed error message:
Error using find
Out of memory. Type HELP MEMORY for your options.

Error in spones (line 14)
[i,j] = find(S);

Error in color (line 26)
J = spones(J);

Error in sfminbx (line 155)
group = color(Hstr,p);

Error in fminunc (line 408)
[x,FVAL,~,EXITFLAG,OUTPUT,GRAD,HESSIAN] = sfminbx(funfcn,x,l,u, ...

Error in ex4 (line 205)
[nn_params, cost, exit_flag] = fminunc(costFunction, initial_nn_params, options);
The MATLAB memory command on my laptop reports:
Maximum possible array:             2046 MB (2.146e+09 bytes) *
Memory available for all arrays:    3402 MB (3.568e+09 bytes) **
Memory used by MATLAB:               373 MB (3.910e+08 bytes)
Physical Memory (RAM):              3561 MB (3.734e+09 bytes)

*  Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I had previously thought that Professor Ng chose fmincg to train the ex4.m neural network (which has 400 input features, 401 including the bias input) in order to increase training speed. Now, however, I believe his reason for using fmincg was memory efficiency: enough of it to allow the training to run on 32-bit builds of Octave/MATLAB. A short discussion of the work needed to get a 64-bit Octave build running on Windows is here.
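As a back-of-the-envelope illustration (my own arithmetic, assuming ex4's layer sizes of 400 inputs plus bias, 25 hidden units, and 10 outputs), the unrolled parameter vector has 10,285 entries; any method that materializes an n-by-n Hessian needs on the order of n^2 doubles, while a conjugate-gradient method like fmincg keeps only a handful of length-n vectors:

n = 25*401 + 10*26        % unrolled NN parameters in ex4 -> n = 10285
n^2 * 8 / 2^20            % dense n-by-n Hessian in doubles -> about 807 MB
6*n*8 / 2^10              % a few length-n vectors -> about 482 KB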
[Comments]
[Answer 3]

According to Andrew Ng himself, fmincg is used not to get a more accurate result (remember, your cost function is the same in either case, and your hypothesis is no simpler or more complex), but because it is more efficient at doing gradient descent for especially complex hypotheses. He himself appears to use fminunc when the hypothesis has few features, but fmincg when it has hundreds.
[Comments]
What do you mean by fewer terms: the number of features and parameters? fmincg works similarly to fminunc, but is more efficient when dealing with a large number of parameters.

[Answer 4]
Why does fmincg work?

Here is a copy of the source code, with comments explaining the various algorithms used. It is quite something, since it does the same thing a child's brain does when learning to discriminate between a dog and a chair.

This is the Octave source for fmincg.m:
function [X, fX, i] = fmincg(f, X, options, P1, P2, P3, P4, P5)
% Minimize a continuous differentiable multivariate function. Starting point
% is given by "X" (D by 1), and the function named in the string "f", must
% return a function value and a vector of partial derivatives. The Polack-
% Ribiere flavour of conjugate gradients is used to compute search directions,
% and a line search using quadratic and cubic polynomial approximations and the
% Wolfe-Powell stopping criteria is used together with the slope ratio method
% for guessing initial step sizes. Additionally a bunch of checks are made to
% make sure that exploration is taking place and that extrapolation will not
% be unboundedly large. The "length" gives the length of the run: if it is
% positive, it gives the maximum number of line searches, if negative its
% absolute gives the maximum allowed number of function evaluations. You can
% (optionally) give "length" a second component, which will indicate the
% reduction in function value to be expected in the first line-search (defaults
% to 1.0). The function returns when either its length is up, or if no further
% progress can be made (ie, we are at a minimum, or so close that due to
% numerical problems, we cannot get any closer). If the function terminates
% within a few iterations, it could be an indication that the function value
% and derivatives are not consistent (ie, there may be a bug in the
% implementation of your "f" function). The function returns the found
% solution "X", a vector of function values "fX" indicating the progress made
% and "i" the number of iterations (line searches or function evaluations,
% depending on the sign of "length") used.
%
% Usage: [X, fX, i] = fmincg(f, X, options, P1, P2, P3, P4, P5)
%
% See also: checkgrad
%
% Copyright (C) 2001 and 2002 by Carl Edward Rasmussen. Date 2002-02-13
%
%
% (C) Copyright 1999, 2000 & 2001, Carl Edward Rasmussen
%
% Permission is granted for anyone to copy, use, or modify these
% programs and accompanying documents for purposes of research or
% education, provided this copyright notice is retained, and note is
% made of any changes that have been made.
%
% These programs and documents are distributed without any warranty,
% express or implied. As the programs were written for research
% purposes only, they have not been tested to the degree that would be
% advisable in any important application. All use of these programs is
% entirely at the user's own risk.
%
% [ml-class] Changes Made:
% 1) Function name and argument specifications
% 2) Output display
%
% Read options
if exist('options', 'var') && ~isempty(options) && isfield(options, 'MaxIter')
  length = options.MaxIter;
else
  length = 100;
end

RHO = 0.01;    % a bunch of constants for line searches
SIG = 0.5;     % RHO and SIG are the constants in the Wolfe-Powell conditions
INT = 0.1;     % don't reevaluate within 0.1 of the limit of the current bracket
EXT = 3.0;     % extrapolate maximum 3 times the current bracket
MAX = 20;      % max 20 function evaluations per line search
RATIO = 100;   % maximum allowed slope ratio

argstr = ['feval(f, X'];    % compose string used to call function
for i = 1:(nargin - 3)
  argstr = [argstr, ',P', int2str(i)];
end
argstr = [argstr, ')'];

if max(size(length)) == 2, red=length(2); length=length(1); else red=1; end
S=['Iteration '];

i = 0;                       % zero the run length counter
ls_failed = 0;               % no previous line search has failed
fX = [];
[f1 df1] = eval(argstr);     % get function value and gradient
i = i + (length<0);          % count epochs?!
s = -df1;                    % search direction is steepest
d1 = -s'*s;                  % this is the slope
z1 = red/(1-d1);             % initial step is red/(|s|+1)

while i < abs(length)        % while not finished
  i = i + (length>0);        % count iterations?!

  X0 = X; f0 = f1; df0 = df1;      % make a copy of current values
  X = X + z1*s;                    % begin line search
  [f2 df2] = eval(argstr);
  i = i + (length<0);              % count epochs?!
  d2 = df2'*s;
  f3 = f1; d3 = d1; z3 = -z1;      % initialize point 3 equal to point 1
  if length>0, M = MAX; else M = min(MAX, -length-i); end
  success = 0; limit = -1;         % initialize quantities
  while 1
    while ((f2 > f1+z1*RHO*d1) | (d2 > -SIG*d1)) & (M > 0)
      limit = z1;                  % tighten the bracket
      if f2 > f1
        z2 = z3 - (0.5*d3*z3*z3)/(d3*z3+f2-f3);     % quadratic fit
      else
        A = 6*(f2-f3)/z3+3*(d2+d3);                 % cubic fit
        B = 3*(f3-f2)-z3*(d3+2*d2);
        z2 = (sqrt(B*B-A*d2*z3*z3)-B)/A;            % numerical error possible - ok!
      end
      if isnan(z2) | isinf(z2)
        z2 = z3/2;                 % if we had a numerical problem then bisect
      end
      z2 = max(min(z2, INT*z3),(1-INT)*z3);  % don't accept too close to limits
      z1 = z1 + z2;                % update the step
      X = X + z2*s;
      [f2 df2] = eval(argstr);
      M = M - 1; i = i + (length<0);         % count epochs?!
      d2 = df2'*s;
      z3 = z3-z2;                  % z3 is now relative to the location of z2
    end
    if f2 > f1+z1*RHO*d1 | d2 > -SIG*d1
      break;                       % this is a failure
    elseif d2 > SIG*d1
      success = 1; break;          % success
    elseif M == 0
      break;                       % failure
    end
    A = 6*(f2-f3)/z3+3*(d2+d3);                 % make cubic extrapolation
    B = 3*(f3-f2)-z3*(d3+2*d2);
    z2 = -d2*z3*z3/(B+sqrt(B*B-A*d2*z3*z3));    % num. error possible - ok!
    if ~isreal(z2) | isnan(z2) | isinf(z2) | z2 < 0  % num prob or wrong sign?
      if limit < -0.5              % if we have no upper limit
        z2 = z1 * (EXT-1);         % then extrapolate the maximum amount
      else
        z2 = (limit-z1)/2;         % otherwise bisect
      end
    elseif (limit > -0.5) & (z2+z1 > limit)     % extrapolation beyond max?
      z2 = (limit-z1)/2;           % bisect
    elseif (limit < -0.5) & (z2+z1 > z1*EXT)    % extrapolation beyond limit
      z2 = z1*(EXT-1.0);           % set to extrapolation limit
    elseif z2 < -z3*INT
      z2 = -z3*INT;
    elseif (limit > -0.5) & (z2 < (limit-z1)*(1.0-INT))  % too close to limit?
      z2 = (limit-z1)*(1.0-INT);
    end
    f3 = f2; d3 = d2; z3 = -z2;                 % set point 3 equal to point 2
    z1 = z1 + z2; X = X + z2*s;                 % update current estimates
    [f2 df2] = eval(argstr);
    M = M - 1; i = i + (length<0);              % count epochs?!
    d2 = df2'*s;
  end                              % end of line search

  if success                       % if line search succeeded
    f1 = f2; fX = [fX' f1]';
    fprintf('%s %4i | Cost: %4.6e\r', S, i, f1);
    s = (df2'*df2-df1'*df2)/(df1'*df1)*s - df2; % Polack-Ribiere direction
    tmp = df1; df1 = df2; df2 = tmp;            % swap derivatives
    d2 = df1'*s;
    if d2 > 0                      % new slope must be negative
      s = -df1;                    % otherwise use steepest direction
      d2 = -s'*s;
    end
    z1 = z1 * min(RATIO, d1/(d2-realmin));      % slope ratio but max RATIO
    d1 = d2;
    ls_failed = 0;                 % this line search did not fail
  else
    X = X0; f1 = f0; df1 = df0;    % restore point from before failed line search
    if ls_failed | i > abs(length) % line search failed twice in a row
      break;                       % or we ran out of time, so we give up
    end
    tmp = df1; df1 = df2; df2 = tmp;  % swap derivatives
    s = -df1;                      % try steepest
    d1 = -s'*s;
    z1 = 1/(1-d1);
    ls_failed = 1;                 % this line search failed
  end
  if exist('OCTAVE_VERSION')
    fflush(stdout);
  end
end
fprintf('\n');
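For completeness, a minimal sketch of how the function above is typically driven; logisticCost here is a hypothetical example, and the only contract fmincg relies on is that f returns both a value and a gradient:

% Hypothetical cost function: unregularized logistic regression.
function [J, grad] = logisticCost(theta, X, y)
  h = 1 ./ (1 + exp(-X*theta));                    % sigmoid hypothesis
  J = -mean(y .* log(h) + (1 - y) .* log(1 - h));  % cross-entropy cost
  grad = X' * (h - y) / rows(X);                   % its gradient
end

options = optimset('MaxIter', 50);                 % fmincg only reads MaxIter
[theta, J_history] = fmincg(@(t) logisticCost(t, X, y), zeros(columns(X), 1), options);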
[Comments]
[Answer 5]

fmincg uses the conjugate gradient method.

If you look at the picture at this link, you will see that the CG method converges much faster than fminunc's method, but it assumes a number of constraints which, I think, the fminunc method (BFGS) does not require (conjugate vectors versus non-conjugate vectors).

In other words, the fmincg method is faster but rougher than fminunc, so it is better suited to large matrices (many features, e.g. thousands) than to smaller ones with up to hundreds of features. Hope this helps.
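To make the conjugate-direction idea concrete, here is a toy sketch of my own (using the Fletcher-Reeves coefficient rather than fmincg's Polack-Ribiere variant): on an n-dimensional quadratic, conjugate gradient reaches the minimum in n exact line searches, which is what makes it attractive when n is large.

% Toy sketch: minimize f(x) = 0.5*x'*A*x - b'*x by conjugate gradient.
A = [3 1; 1 2]; b = [1; 1];
g = @(x) A*x - b;                   % gradient of the quadratic
x = zeros(2, 1); r = -g(x); d = r;  % residual doubles as first direction
for k = 1:2                         % n = 2, so two steps reach the minimum
  alpha = (r'*r) / (d'*A*d);        % exact line search on a quadratic
  x = x + alpha*d;                  % step along the search direction
  r_new = -g(x);                    % new residual
  beta = (r_new'*r_new) / (r'*r);   % Fletcher-Reeves coefficient
  d = r_new + beta*d;               % next direction, A-conjugate to d
  r = r_new;
end
disp(x - A\b)                       % essentially zero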
[Comments]
[Answer 6]

fmincg is more accurate than fminunc, and they take almost the same time. In neural networks, or in general whenever there are many weights, fminunc can give an out-of-memory error, so fmincg is more memory-efficient.

Using fminunc, the accuracy is 93.10 and the time taken is 11.3794 seconds:
Testing lrCostFunction() with regularization
Cost: 2.534819
Expected cost: 2.534819
Gradients:
0.146561
-0.548558
0.724722
1.398003
Expected gradients:
0.146561
-0.548558
0.724722
1.398003
Program paused. Press enter to continue.
Training One-vs-All Logistic Regression...
id = 1512324857357
Elapsed time is 11.3794 seconds.
Program paused. Press enter to continue.
Training Set Accuracy: 93.100000
Using fmincg, the accuracy is 95.12 and the time taken is 11.7978 seconds:
Testing lrCostFunction() with regularization
Cost: 2.534819
Expected cost: 2.534819
Gradients:
0.146561
-0.548558
0.724722
1.398003
Expected gradients:
0.146561
-0.548558
0.724722
1.398003
Program paused. Press enter to continue.
Training One-vs-All Logistic Regression...
id = 1512325280047
Elapsed time is 11.7978 seconds.
Training Set Accuracy: 95.120000
[Comments]
[Answer 7]

Use fmincg to optimize the cost function. fmincg works similarly to fminunc but is more efficient when dealing with a large number of parameters: the cost function is the same in either case, and the hypothesis is no simpler or more complex, but fmincg performs gradient descent more efficiently for particularly complex hypotheses.

It is used for memory efficiency.
[Comments]

Congratulations on your first SO answer! To further improve it, you could link to some evidence or documentation supporting what you describe.

[Answer 8]

fmincg is an internal function developed for the course on Coursera, unlike fminunc, which is a built-in Octave function. Since both are used for logistic regression, they differ in only one respect: when the number of parameters under consideration is quite large (relative to the size of the training set), fmincg works faster and handles them more accurately than fminunc. Also, fminunc is preferred when the parameters passed to it are unbounded (unconstrained).
[Comments]