Logistic Regression - numpy.float64
Posted: 2021-11-11 17:08:09

Problem description: I'm trying to implement logistic regression from scratch using a squared loss function, but I've run into an error I can't figure out: can't multiply sequence by non-int of type 'numpy.float64'. Any help is appreciated! (Yes, I know I'm being lazy with the coefficients.)
def logisticReg(data):
    X_train = [(d[0], d[1], d[2]) for d, _ in data]
    Y_train = [y for _, y in data]
    LogReg = LogisticRegression(random_state=42, solver='sag', penalty='none', max_iter=10000, fit_intercept=False)
    LogReg.fit(X_train, Y_train)
    w = [round(c, 2) for c in LogReg.coef_[0]]
    sigmoid = lambda y: 1/(1+np.exp(-y))
    classify = lambda y: 1 if y > 0.5 else 0
    F = lambda W, X: sum([w*x for w, x in zip(W, X)])
    for i in range(len(X_train)):
        Function = F(w, X_train)
        y_pred = sigmoid(Fucntion)
        Data_m = (-2) * sum(x*(y-y_pred))
        Data_b = (-2) * sum(y - y_pred)
        m = m - L*Data_m  # update weights
        b = b - L*Data_b
    weights = zip(m, b)
    print(weights)

data = [((1, 0, 0), 1), ((1, 1, 7), 0), ((1, -3, -2), 0), ((1, 8, 9), 1), ((1, 4, 3), 1), ((1, 5, -2), 1), ((1, 0, 0), 1), ((1, 6, 9), 1), ((1, 4, 2), 1), ((1, 1, -9), 1), ((1, -7, 7), 0), ((1, 0, -1), 1), ((1, 9, -4), 1), ((1, 1, 0), 1), ((1, -2, -5), 1), ((1, 2, 3), 1), ((1, -7, 2), 0), ((1, -3, 0), 0), ((1, 5, 0), 1), ((1, 0, -3), 1), ((1, -2, 3), 0), ((1, 9, 6), 1), ((1, 0, -8), 1), ((1, 0, 2), 0), ((1, -8, 6), 0), ((1, 1, 9), 0), ((1, 0, 5), 0), ((1, -4, 9), 0), ((1, 8, 2), 1), ((1, 2, 6), 0)]
logisticReg(data)
Comments on the question:
Whenever you report a Python error, please include the complete error message (i.e. the full traceback) in the question. It contains useful information, including the line that raised the exception.

Answer 1: I found that the error occurs when the function named F is called.
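To illustrate (a minimal sketch, not the poster's actual traceback): the loop calls F(w, X_train) with the whole training set, so each x produced by zip(W, X) is a tuple such as (1, 4, 3), while the coefficients in w come from LogReg.coef_ and are numpy.float64 values. Multiplying a tuple by a numpy.float64 raises exactly this error:

import numpy as np

coef = np.float64(0.5)   # like one of the rounded coefficients in w
row = (1, 4, 3)          # like one tuple taken from X_train
coef * row               # TypeError: can't multiply sequence by non-int of type 'numpy.float64'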
I fixed it by slightly modifying your code.
import numpy as np
import sklearn.linear_model as sk

def logisticReg(data):
    X_train = [(d[0], d[1], d[2]) for d, _ in data]
    Y_train = [y for _, y in data]
    LogReg = sk.LogisticRegression(random_state=42, solver='sag', penalty='l2', max_iter=10000, fit_intercept=False)
    LogReg.fit(X_train, Y_train)
    w = [round(c, 2) for c in LogReg.coef_[0]]
    sigmoid = lambda y: 1/(1+np.exp(-y))
    thresh = lambda y: 1 if y > 0.5 else 0
    F = lambda W, X: sum([w*x for w, x in zip(np.array(W), np.array(X))])
    for i in range(len(X_train)):
        res = F(w, X_train)
        y_pred = sigmoid(res)
        # Data_m = (-2) * sum(x*(y - y_pred))
        # Data_b = (-2) * sum(y - y_pred)
        # m = m - L*Data_m
        # b = b - L*Data_b
        # weights = zip(m,b)
        # print(weights)
    return None

data = [((1, 0, 0), 1), ((1, 1, 7), 0), ((1, -3, -2), 0), ((1, 8, 9), 1), ((1, 4, 3), 1), ((1, 5, -2), 1), ((1, 0, 0), 1), ((1, 6, 9), 1), ((1, 4, 2), 1), ((1, 1, -9), 1), ((1, -7, 7), 0), ((1, 0, -1), 1), ((1, 9, -4), 1), ((1, 1, 0), 1), ((1, -2, -5), 1), ((1, 2, 3), 1), ((1, -7, 2), 0), ((1, -3, 0), 0), ((1, 5, 0), 1), ((1, 0, -3), 1), ((1, -2, 3), 0), ((1, 9, 6), 1), ((1, 0, -8), 1), ((1, 0, 2), 0), ((1, -8, 6), 0), ((1, 1, 9), 0), ((1, 0, 5), 0), ((1, -4, 9), 0), ((1, 8, 2), 1), ((1, 2, 6), 0)]
logisticReg(data)
However, you should reconsider what your algorithm is supposed to do, because it isn't clear: you want to write a logistic regression algorithm from scratch, yet you use sklearn's implementation to fit the model on the given dataset and then appear to reuse those coefficients inside the for loop...

What your logistic regression function should do is compute the coefficients of a logistic regression model for the given dataset without using sklearn at all. You can then run your own logistic regression on the dataset to obtain the coefficients and compare them with the ones computed by sklearn's LogisticRegression; a minimal sketch of that idea follows below.

A useful link on fitting a logistic regression model: https://en.wikipedia.org/wiki/Logistic_regression#Model_fitting
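Purely to illustrate that suggestion (a sketch under assumptions, not the original poster's code: fit_logreg, the learning rate and the iteration count are made up, and it minimizes the usual log loss rather than the squared loss mentioned in the question):

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_logreg(X, y, lr=0.05, n_iter=5000):
    # plain batch gradient descent on the average log loss
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid of the linear scores
        grad = X.T @ (p - y) / len(y)      # gradient of the average log loss
        w -= lr * grad
    return w

# reuse the data list from the question
X_train = [d for d, _ in data]
Y_train = [y for _, y in data]

w_own = fit_logreg(X_train, Y_train)
# penalty='none' as in the question; newer sklearn versions spell it penalty=None
w_sk = LogisticRegression(penalty='none', fit_intercept=False, max_iter=10000).fit(X_train, Y_train).coef_[0]
print(np.round(w_own, 2))
print(np.round(w_sk, 2))

Without regularization the two coefficient vectors can still differ in scale if the data are close to separable, but the decision boundaries they describe should be similar.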