RandomForestRegressor in sklearn giving negative scores
Posted: 2020-10-08 05:40:57

Question: I'm surprised that my predictions with RandomForestRegressor score negative; I'm using the default scorer (the coefficient of determination, R^2). Any help would be appreciated. My dataset looks like this: dataset screenshot here
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score,RandomizedSearchCV,train_test_split
import numpy as np,pandas as pd,pickle
dataframe = pd.read_csv("../../notebook/car-sales.csv")
y = dataframe["Price"].str.replace(r"[\$\.\,]", "", regex=True).astype(int)
x = dataframe.drop("Price" , axis = 1)
cat_features = [
"Make",
"Colour",
"Doors",
]
oneencoder = OneHotEncoder()
transformer = ColumnTransformer([
("onehot" ,oneencoder, cat_features)
],remainder="passthrough")
transformered_x = transformer.fit_transform(x)
transformered_x = pd.get_dummies(dataframe[cat_features])
x_train , x_test , y_train,y_test = train_test_split(transformered_x , y , test_size = .2)
regressor = RandomForestRegressor(n_estimators=100)
regressor.fit(x_train , y_train)
regressor.score(x_test , y_test)
Comments:
I'm curious why you overwrite the transformered_x value from transformer.fit_transform(x) with transformered_x = pd.get_dummies(dataframe[cat_features])? Also, I don't think you need both; perhaps this (***.com/questions/36631163/…) may be useful.
Please clarify - are you getting negative scores (as you say in the title) or negative predictions (as you say in the body)? If the latter, please explain why negative predictions are a problem (regression can produce both positive and negative outputs).
Sorry @desertnaut, the score is negative.
@IvanWiryadi I was using get_dummies to test whether the transformer was the source of the problem. But assume I never wrote the get_dummies line.
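To illustrate the point raised in the comments: a minimal sketch (using a made-up toy frame in place of car-sales.csv, so the example values here are assumptions) showing that the ColumnTransformer alone already one-hot encodes the categorical columns, which makes the later pd.get_dummies line redundant:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in for car-sales.csv (same column names as in the question)
df = pd.DataFrame({
    "Make": ["Toyota", "Honda", "Toyota"],
    "Colour": ["Red", "Blue", "Blue"],
    "Doors": [4, 4, 2],
    "Odometer": [35000, 87000, 15000],
})

transformer = ColumnTransformer(
    [("onehot", OneHotEncoder(), ["Make", "Colour", "Doors"])],
    remainder="passthrough",  # keeps Odometer as a plain numeric column
)
encoded = transformer.fit_transform(df)

# 2 makes + 2 colours + 2 door values, one-hot encoded, plus Odometer
print(encoded.shape)  # (3, 7)
```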
Answer 1:
I modified your code slightly and was able to reach a score of 89%. You were so close! You did a good job - not shabby!
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
import pandas as pd
dataframe = pd.read_csv("car-sales.csv")
dataframe.head()
y = dataframe["Price"].str.replace(r"[\$\.\,]", "", regex=True).astype(int)
x = dataframe.drop("Price", axis=1)
cat_features = ["Make", "Colour", "Odometer", "Doors", ]
oneencoder = OneHotEncoder()
transformer = ColumnTransformer([("onehot", oneencoder, cat_features)], remainder="passthrough")
transformered_x = transformer.fit_transform(x)
# Note: the next line overwrites the ColumnTransformer output, so only the
# get_dummies encoding is actually used for training
transformered_x = pd.get_dummies(dataframe[cat_features])
x_train, x_test, y_train, y_test = train_test_split(transformered_x, y, test_size=.2, random_state=3)
# criterion="mse" was renamed to "squared_error" in scikit-learn 1.0
forest = RandomForestRegressor(n_estimators=200, criterion="mse", min_samples_leaf=3, min_samples_split=3, max_depth=10)
forest.fit(x_train, y_train)
# R^2 score: 1 is perfect prediction
print('Score: %.2f' % forest.score(x_test, y_test))
I think the negative result came from overfitting due to the extremely small amount of data. The following is quoted directly from the sklearn documentation:
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of
squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares
((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it
can be negative (because the model can be arbitrarily worse). A constant model
that always predicts the expected value of y, disregarding the input features,
would get a R^2 score of 0.0.
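To make the quoted definition concrete, here is a small illustration (not from the original post; the numbers are made up) of how R^2 goes negative when a model does worse than always predicting the mean:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([10.0, 20.0, 30.0])

# Constant model that always predicts the mean of y_true -> R^2 == 0.0
y_pred_mean = np.full_like(y_true, y_true.mean())

# Predictions worse than the mean: u = 900, v = 200, so R^2 = 1 - 900/200
y_pred_bad = np.array([30.0, 10.0, 50.0])

print(r2_score(y_true, y_pred_mean))  # 0.0
print(r2_score(y_true, y_pred_bad))   # -3.5
```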
I expanded the dataset to 100 rows and removed the surrogate key (the first column, an int id from 0-99), and here it is:
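One more hedged suggestion for a dataset this small: a single train/test split is high-variance, so cross_val_score over several folds gives a steadier R^2 estimate. A sketch with synthetic data standing in for the expanded car-sales.csv (the feature matrix here is an assumption, not the real dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 100 rows, 5 features, with a learnable signal plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X[:, 0] * 3 + rng.normal(scale=0.5, size=100)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(forest, X, y, cv=5)  # default scorer is R^2
print(scores.mean())  # average R^2 across the 5 folds
```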
Comments: