Machine Learning Week 1

Posted by Barry


1. Gradient Descent

2. Normal Equation

Let me first introduce what the normal equation is. Suppose a dataset X has m samples and n features, and the hypothesis function is

$$h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 + \dots + \theta_n x_n = \theta^T x .$$

The feature (design) matrix of X is written as

$$X = \begin{bmatrix} 1 & x_1^{(1)} & \cdots & x_n^{(1)} \\ 1 & x_1^{(2)} & \cdots & x_n^{(2)} \\ \vdots & \vdots & & \vdots \\ 1 & x_1^{(m)} & \cdots & x_n^{(m)} \end{bmatrix},$$

where $x^{(i)}$ denotes the i-th training sample and $x_j^{(i)}$ denotes the j-th feature of the i-th training sample. The first column of X is all 1s so that $x_0^{(i)} = 1$ and the intercept term $\theta_0$ is absorbed into the matrix product.

If we want the hypothesis function to fit y, then $X\theta = y$, which suggests $\theta = X^{-1} y$, so the parameters $\theta$ could be found by matrix operations. Anyone familiar with linear algebra knows how to compute $\theta$ this way, but the premise is that the matrix X has an inverse $X^{-1}$.

 

However, only a square matrix can possibly have an inverse (if this theorem is unfamiliar, it is worth reviewing some linear algebra), and X is in general m × (n+1) rather than square. We can therefore left-multiply both sides by $X^T$, which turns the equation into $X^T X \theta = X^T y$, and hence

$$\theta = (X^T X)^{-1} X^T y .$$

Some readers may object that $(X^T X)^{-1}$ does not necessarily exist. That is true, but it is rare for it not to exist; how to handle a non-invertible $X^T X$ is covered below, so don't worry about it yet. For now you only need to understand why $\theta = (X^T X)^{-1} X^T y$, and remember it.
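As a concrete illustration, here is a minimal NumPy sketch of the normal equation on a made-up toy dataset (the data and variable names are assumptions for illustration, not from the lecture):

```python
import numpy as np

# Toy data (illustrative): m = 4 samples, n = 1 feature.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.2, 3.9, 6.2, 8.1])

# Build the design matrix by prepending a column of ones (x_0 = 1),
# so theta_0 plays the role of the intercept.
X = np.column_stack([np.ones_like(x), x])

# Normal equation: theta = (X^T X)^{-1} X^T y.
# Solving the linear system X^T X theta = X^T y is numerically preferable
# to forming the inverse explicitly.
theta = np.linalg.solve(X.T @ X, X.T @ y)

print(theta)  # ≈ [0.1, 2.0], i.e. h(x) ≈ 0.1 + 2.0 x
```

No learning rate and no iterations are involved; the fit comes out of a single linear solve.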

 
Having seen how the normal equation solves for the parameters $\theta$, we now know two ways to find $\theta$: the normal equation and gradient descent. Let's compare the advantages and disadvantages of the two methods and which scenarios call for which method.

 

The comparison is summarized below:

Gradient descent: you need to choose the learning rate $\alpha$; it needs many iterations; it works well even when the number of features n is large.

Normal equation: no need to choose $\alpha$; no iterations; but it must compute $(X^T X)^{-1}$, which costs roughly $O(n^3)$, so it becomes slow when n is very large.
 
 
Returning to the point above that $(X^T X)^{-1}$ may not exist: this situation is rare. If $X^T X$ does turn out to be non-invertible, there are generally two cases to consider:
(1) Redundant features: some features are linearly dependent on others; remove the redundant ones.
(2) Too many features (e.g., m ≤ n): delete some features, or use regularization for the small-sample case. A sketch of a practical fallback follows below.
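As a hedged illustration (NumPy assumed; the duplicated feature is contrived purely to force singularity), the Moore-Penrose pseudo-inverse still produces a usable solution when $X^T X$ is singular:

```python
import numpy as np

# Contrived example: the third column duplicates the second, so the
# columns of X are linearly dependent and X^T X is singular.
x = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones_like(x), x, x])   # redundant feature
y = np.array([2.2, 3.9, 6.2, 8.1])

# np.linalg.inv(X.T @ X) would fail (or be numerically meaningless) here.
# The pseudo-inverse returns the minimum-norm least-squares solution instead.
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)
```

The cleaner fix is still to drop the redundant column or regularize, but the pseudo-inverse keeps the computation from breaking.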
 

Gradient Descent For Linear Regression

Note: [At 6:15 "h(x) = -900 - 0.1x" should be "h(x) = 900 - 0.1x"]

When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. We can substitute our actual cost function and our actual hypothesis function and modify the equation to:

repeat until convergence: {

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x_i) - y_i\right)$$

$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left(\left(h_\theta(x_i) - y_i\right) x_i\right)$$

}

where m is the size of the training set, $\theta_0$ is a constant that will be changing simultaneously with $\theta_1$, and $x_i$, $y_i$ are values of the given training set (data).

Note that we have separated out the two cases for $\theta_j$ into separate equations for $\theta_0$ and $\theta_1$; and that for $\theta_1$ we are multiplying $x_i$ at the end due to the derivative. The following is a derivation of $\frac{\partial}{\partial \theta_j} J(\theta)$ for a single example:

$$\begin{aligned}
\frac{\partial}{\partial \theta_j} J(\theta) &= \frac{\partial}{\partial \theta_j} \, \frac{1}{2}\big(h_\theta(x) - y\big)^2 \\
&= 2 \cdot \frac{1}{2}\big(h_\theta(x) - y\big) \cdot \frac{\partial}{\partial \theta_j}\big(h_\theta(x) - y\big) \\
&= \big(h_\theta(x) - y\big) \cdot \frac{\partial}{\partial \theta_j}\Big(\sum_{k=0}^{n} \theta_k x_k - y\Big) \\
&= \big(h_\theta(x) - y\big)\, x_j
\end{aligned}$$

The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis will become more and more accurate.

So, this is simply gradient descent on the original cost function J. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global, and no other local, optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.
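Here is a minimal NumPy sketch of these batch updates on a made-up single-feature dataset (the data, the learning rate, and the fixed iteration count are assumptions for illustration):

```python
import numpy as np

# Toy single-feature data (illustrative).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.2, 3.9, 6.2, 8.1])
m = len(x)

alpha = 0.01              # learning rate (assumed)
theta0, theta1 = 0.0, 0.0

for _ in range(5000):     # "repeat until convergence", approximated by a fixed count
    h = theta0 + theta1 * x               # h_theta(x_i) for every training sample
    error = h - y
    # Simultaneous update: compute both gradients before changing either parameter.
    grad0 = (1.0 / m) * error.sum()
    grad1 = (1.0 / m) * (error * x).sum()
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(theta0, theta1)     # approaches the normal-equation solution (≈ 0.1, 2.0)
```

Because every iteration sums the error over all m examples, this is exactly the batch flavor of gradient descent described above.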

[Figure: contours of a quadratic cost function, with the trajectory of gradient descent initialized at (48, 30) and converging to the minimum.]

The ellipses shown above are the contours of a quadratic function. Also shown is the trajectory taken by gradient descent, which was initialized at (48,30). The x’s in the figure (joined by straight lines) mark the successive values of θ that gradient descent went through as it converged to its minimum.
