Python Machine Learning: Univariate and Bivariate Polynomial Regression with Gradient Descent
Posted by 哈伦裤DOCTOR
Python Machine Learning: Univariate and Bivariate Polynomial Regression
Univariate polynomials
A univariate polynomial can be written as:
$$Y=W^{T}X=\left[ w_0,\ w_1,\ \cdots,\ w_n \right] \cdot \left[ 1,\ x,\ \cdots,\ x^{n} \right]^{T}$$
Each higher-order term is simply a power of the first-order term, so the expression can be rewritten in multivariate (linear) form:
$$Y=W^{T}X=\left[ w_0,\ w_1,\ \cdots,\ w_n \right] \cdot \left[ x_0,\ x_1,\ \cdots,\ x_n \right]^{T}$$
where $x_k = x^k$ for $k = 0, 1, \dots, n$: each power of $x$ is treated as an independent input variable, which turns the polynomial fit into a linear one.
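For example, a minimal sketch of this feature expansion in NumPy (the array values and degree here are illustrative; np.vander is a standard NumPy helper whose columns are successive powers of the input):
import numpy as np

x = np.array([0.5, 1.0, 2.0])
# each column is one "variable": x^0, x^1, x^2 -- a degree-2 polynomial in x
# becomes a linear model in these three features
X = np.vander(x, 3, increasing=True)
print(X)  # [[1.  0.5  0.25], [1.  1.  1.], [1.  2.  4.]]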
Below, gradient descent is used to fit a univariate quadratic polynomial.
Assume the mapping between X and Y is:
$$h_{\theta}\left( x \right) =\theta_0+\theta_1\times x+\theta_2\times x^2$$
The loss function is the mean squared error (MSE):
$$J\left( \theta \right) =\frac{1}{2m}\sum_{i=1}^{m}{\left( h_{\theta}\left( x^{\left( i \right)} \right) -y^{\left( i \right)} \right)}^2$$
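As a side note, this loss can be written in vectorized NumPy form. A minimal sketch, assuming a design matrix X of shape (m, 3) whose rows are [1, x, x²] and a length-3 parameter vector theta (all names here are illustrative, not from the original script):
import numpy as np

def mse_loss(X, y, theta):
    # residuals: h_theta(x^(i)) - y^(i) for all samples at once
    residuals = X @ theta - y
    # J(theta) = 1/(2m) * sum of squared residuals
    return (residuals ** 2).sum() / (2 * len(y))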
The gradient of J(θ) with respect to the parameters θ is:
$$\frac{\partial J}{\partial \theta_j}=\frac{1}{m}\sum_{i=1}^{m}{\left( h_{\theta}\left( x^{\left( i \right)} \right) -y^{\left( i \right)} \right)}x_j^{\left( i \right)}$$
So the parameter update rule is:
$$\theta_j=\theta_j-\alpha \frac{\partial J}{\partial \theta_j}=\theta_j-\frac{\alpha}{m}\sum_{i=1}^{m}{\left( h_{\theta}\left( x^{\left( i \right)} \right) -y^{\left( i \right)} \right)}x_j^{\left( i \right)}$$
where α is the learning rate.
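A minimal vectorized sketch of one such update step, under the same assumptions as above (X is the (m, 3) design matrix, y the targets, lr the learning rate; the function name is illustrative):
import numpy as np

def gradient_step(X, y, theta, lr):
    m = len(y)
    # gradient: (1/m) * X^T (X theta - y), i.e. all partial derivatives at once
    grad = X.T @ (X @ theta - y) / m
    # simultaneous update of every theta_j
    return theta - lr * grad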
Generating the data
The target function to fit is:
$$y=2+3\times x+2\times x^2$$
Gaussian noise is added to the data with numpy.random.normal():
y_noise=np.random.normal(loc=0,scale=1,size=len(x))
The scatter plot of the generated data is shown below.
The full code:
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(-2, 2, 0.2)

def Y():
    return 2 + 3*x + 2*x**2  # target function to fit

y = Y()
# noise
# x_noise = np.random.normal(loc=0, scale=0, size=len(x))  # optional random perturbation of x
y_noise = np.random.normal(loc=0, scale=1, size=len(x))
# x = x + x_noise
y = y + y_noise
x_train = np.stack((np.linspace(1, 1, len(x)), x, x**2), axis=1)  # use np.stack to combine the columns x0, x1, x2 into the training matrix
y_train = y
plt.scatter(x, y_train)
plt.show()
How x_train is constructed:
$$x\_train=[x_0\ \ x_1\ \ x_2]=[1\ \ x\ \ x^2]$$
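An equivalent and slightly more idiomatic construction (a sketch; np.ones gives the bias column directly, replacing the np.linspace(1, 1, ...) trick used above):
import numpy as np

x = np.arange(-2, 2, 0.2)
# columns are [1, x, x^2], same result as np.stack with np.linspace(1, 1, len(x))
x_train = np.column_stack((np.ones(len(x)), x, x**2))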
Finally, the Python code that implements the gradient-descent parameter updates:
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation

x = np.arange(-2, 2, 0.2)

def Y():
    return 2 + 3*x + 2*x**2  # target function

y = Y()
x_noise = np.random.normal(loc=0, scale=0, size=len(x))  # scale=0: no perturbation of x
y_noise = np.random.normal(loc=0, scale=1, size=len(x))
x = x + x_noise
y = y + y_noise
x_train = np.stack((np.linspace(1, 1, len(x)), x, x**2), axis=1)  # rows are [1, x, x^2]
y_train = y
plt.scatter(x, y_train)

m = len(x_train)
theta = np.array([0.0, 0.0, 0.0])
lr = 0.009

def Y_pred(x, a):
    # hypothesis: a0*1 + a1*x + a2*x^2
    return a[0]*x[0] + a[1]*x[1] + a[2]*x[2]

def partial_theta(x, y, a):
    # gradient of J: (1/m) * sum of (prediction - target) * features
    cost_all = np.array([0.0, 0.0, 0.0])
    for i in range(m):
        cost_all = cost_all + (Y_pred(x[i], a) - y[i]) * x[i]
    return 1.0/m * cost_all

def J(x, y, a):
    # MSE loss: 1/(2m) * sum of squared errors
    cost = 0
    for i in range(m):
        cost = cost + (Y_pred(x[i], a) - y[i])**2
    return 1/(2*m) * cost

iterations = 0
theta_list = np.array([0.0, 0.0, 0.0])
while True:
    theta = theta - lr * partial_theta(x_train, y_train, theta)
    theta_list = np.vstack((theta_list, theta))  # record every theta for the animation
    iterations = iterations + 1
    # stop when the loss change between two iterations falls below the threshold
    if np.abs(J(x_train, y_train, theta_list[-1]) - J(x_train, y_train, theta_list[-2])) < 0.001:
        break
print(theta_list[-1], theta_list.shape)

x_t = np.linspace(-2, 2, 20)
x_test = np.stack((np.linspace(1, 1, 20), x_t, x_t**2), axis=1)
# plt.plot(x_t, theta[0] + theta[1]*x_t + theta[2]*x_t**2)  # static plot of the final fit

fig, ax = plt.subplots()
atext_anti = plt.text(0.2, 2, '', fontsize=15)
btext_anti = plt.text(1.5, 2, '', fontsize=15)
ctext_anti = plt.text(3, 2, '', fontsize=15)
ln, = plt.plot([], [], 'red')

def init():
    ax.set_xlim(np.min(x_train), np.max(x_train))
    ax.set_ylim(np.min(y_train), np.max(y_train))
    return ln,

def update(frame):
    # one frame per recorded theta: redraw the fitted curve and coefficient labels
    x = x_t
    y = frame[0] + frame[1]*x + frame[2]*x**2
    ln.set_data(x, y)
    atext_anti.set_text('a=%.3f' % frame[0])
    btext_anti.set_text('b=%.3f' % frame[1])
    ctext_anti.set_text('c=%.3f' % frame[2])
    return ln,

ax.scatter(x, y_train)
ani = FuncAnimation(fig, update, frames=theta_list, init_func=init)
plt.show()
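As an optional sanity check (not part of the original script; it reuses x and y_train from the code above), the gradient-descent result can be compared against NumPy's closed-form least-squares fit:
coeffs = np.polyfit(x, y_train, 2)    # least-squares fit; returns [c2, c1, c0], highest degree first
print('least squares:', coeffs[::-1])  # reversed to [c0, c1, c2] to compare with theta_list[-1]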
An animated plot of the gradient-descent fitting process is shown below.
The final fitted result is:
$$y=1.86+3.07\times x+1.92\times x^2$$