Can we save the result of Hyperopt Trials with SparkTrials?
Posted 2020-08-26 14:29:34
【Question】I am currently trying to optimize the hyperparameters of a gradient boosting method with the hyperopt library. When I was working on my own computer, I used the Trials class and was able to save and reload my results with the pickle library. This let me keep every set of parameters I tested. My code looks something like this:
import os
import pickle as pkl

import xgboost as xgb
from hyperopt import SparkTrials, Trials, STATUS_OK, tpe, fmin
from sklearn.model_selection import cross_val_score

from LearningUtils.LearningUtils import build_train_test, get_train_test, mean_error, rmse, mae
from LearningUtils.constants import MAX_EVALS, CV, XGBOOST_OPTIM_SPACE, PARALELISM
if os.path.isfile(PATH_TO_TRIALS):  # reload the past results
    with open(PATH_TO_TRIALS, 'rb') as trials_file:
        trials = pkl.load(trials_file)
else:  # start from a fresh Trials object
    trials = Trials()
# Classic hyperparameter optimization
def objective(space):
    regressor = xgb.XGBRegressor(n_estimators=space['n_estimators'],
                                 max_depth=int(space['max_depth']),
                                 learning_rate=space['learning_rate'],
                                 gamma=space['gamma'],
                                 min_child_weight=space['min_child_weight'],
                                 subsample=space['subsample'],
                                 colsample_bytree=space['colsample_bytree'],
                                 verbosity=0)
    regressor.fit(X_train, Y_train)
    # Applying k-fold cross-validation
    accuracies = cross_val_score(estimator=regressor, X=X_train, y=Y_train, cv=5)
    CrossValMean = accuracies.mean()
    return {'loss': 1 - CrossValMean, 'status': STATUS_OK}
best = fmin(fn=objective,
            space=XGBOOST_OPTIM_SPACE,
            algo=tpe.suggest,
            max_evals=MAX_EVALS,
            trials=trials,
            return_argmin=False)

# Save the trials
pkl.dump(trials, open(PATH_TO_TRIALS, "wb"))
Now, I would like to run this code on a remote server with more CPUs in order to parallelize the optimization and save time. I found that I could do this simply by using hyperopt's SparkTrials class instead of Trials. However, a SparkTrials object cannot be saved with pickle. Do you know how to save and reload the trial results stored in a SparkTrials object?
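For reference, this is roughly how the parallel version would plug into the same fmin call; the parallelism value here is only an illustrative assumption:

from hyperopt import SparkTrials

# SparkTrials farms each evaluation of objective() out to Spark workers.
# parallelism=4 is an arbitrary example value, not a recommendation.
spark_trials = SparkTrials(parallelism=4)

best = fmin(fn=objective,
            space=XGBOOST_OPTIM_SPACE,
            algo=tpe.suggest,
            max_evals=MAX_EVALS,
            trials=spark_trials,
            return_argmin=False)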
【Answer 1】:This may be a bit late, but after messing around with it a little, I found a hacky solution:
import pickle

spark_trials = SparkTrials()

# Copy every attribute except the live Spark handles, which cannot be pickled.
pickling_trials = dict()
for k, v in spark_trials.__dict__.items():
    if k not in ['_spark_context', '_spark']:
        pickling_trials[k] = v

pickle.dump(pickling_trials, open('pickling_trials.hyperopt', 'wb'))
The _spark_context and _spark attributes of the SparkTrials instance are the culprits that make the object unserializable. It turns out you don't need them if you want to reuse the object, because a new Spark context is created anyway whenever you rerun the optimization, so you can reuse the trials like this:
new_sparktrials = SparkTrials()
for att, v in pickling_trials.items():
    setattr(new_sparktrials, att, v)

best = fmin(loss_func,
            space=search_space,
            algo=tpe.suggest,
            max_evals=1000,
            trials=new_sparktrials)
Voilà :)
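For completeness, a minimal sketch of reloading the pickled attribute dict from disk in a fresh session, assuming the same file name as above:

import pickle

from hyperopt import SparkTrials

# Load the attribute dict that was pickled earlier.
with open('pickling_trials.hyperopt', 'rb') as f:
    pickling_trials = pickle.load(f)

# Rebuild a SparkTrials object and restore the saved state onto it.
new_sparktrials = SparkTrials()
for att, v in pickling_trials.items():
    setattr(new_sparktrials, att, v)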