Holt-Winters time series forecasting with statsmodels
Posted: 2018-11-19 23:32:31

Question:
I am trying to forecast with a Holt-Winters model as shown below, but I keep getting forecasts that do not match what I expect. I have also included a plot of the result.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.holtwinters import Holt

Train = Airline[:130]
Test = Airline[129:]
y_hat_avg = Test.copy()
fit1 = Holt(np.asarray(Train['Passengers'])).fit()
y_hat_avg['Holt_Winter'] = fit1.predict(start=1,end=15)
plt.figure(figsize=(16,8))
plt.plot(Train.index, Train['Passengers'], label='Train')
plt.plot(Test.index,Test['Passengers'], label='Test')
plt.plot(y_hat_avg.index,y_hat_avg['Holt_Winter'], label='Holt_Winter')
plt.legend(loc='best')
plt.savefig('Holt_Winters.jpg')
I am not sure what I am missing here. The forecast seems to fit the early part of the training data.
Comments:
Can you post the time series data here?
The data can be found at datamarket.com/data/set/22u3/… (click Export). I did some preprocessing and converted the Month column into the index.
I suspect your indices start=1, end=15 are wrong. In the plot, the forecast appears to cover the first few observations. Try forecasting with start=129 or start=130.

Answer 1:
The main cause of the error is your start and end values: you are predicting the values of the first through the fifteenth observation. However, even if you correct that, Holt only models the trend component, so your forecast will not capture the seasonal effect. Use ExponentialSmoothing with its seasonal parameters instead.
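As the comments point out, the quickest fix to the original code is to predict over the test window rather than observations 1 through 15. A minimal sketch (assuming the 144-row airline series and the Train/Test split from the question, so the exact indices are an assumption):

# Train covers observations 0..129 and Test starts at row 129, so predicting
# indices 129..143 lines the 15 forecast values up with the test window.
fit1 = Holt(np.asarray(Train['Passengers'])).fit()
y_hat_avg['Holt_Winter'] = fit1.predict(start=129, end=143)

Even with the indices corrected, Holt is trend-only, which is why the answer below switches to ExponentialSmoothing.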
Here is a working example with your dataset:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.holtwinters import ExponentialSmoothing
df = pd.read_csv('/home/ayhan/international-airline-passengers.csv',
                 parse_dates=['Month'],
                 index_col='Month')
df.index.freq = 'MS'  # monthly data observed at the start of each month
train, test = df.iloc[:130, 0], df.iloc[130:, 0]
model = ExponentialSmoothing(train, seasonal='mul', seasonal_periods=12).fit()
pred = model.predict(start=test.index[0], end=test.index[-1])
plt.plot(train.index, train, label='Train')
plt.plot(test.index, test, label='Test')
plt.plot(pred.index, pred, label='Holt-Winters')
plt.legend(loc='best')
which produces the following plot:
Comments:
Hey, can you tell me what df.index.freq = 'MS' does in your code?
To build a smoothing model, statsmodels needs to know the frequency of your data (whether it is daily, monthly, and so on). 'MS' means month start, so we are saying it is monthly data observed at the beginning of each month.
Thanks for the reply. My data points are 5 minutes apart. What should my data frequency be? Any ideas?
I think it should be '5T'.
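(A small aside, not part of the original thread, illustrating these pandas frequency aliases:)

import pandas as pd

# 'MS' = month start: one observation at the beginning of each month
print(pd.date_range('1949-01-01', periods=3, freq='MS'))
# -> 1949-01-01, 1949-02-01, 1949-03-01

# '5T' (or '5min') = one observation every 5 minutes
print(pd.date_range('2018-01-01', periods=3, freq='5T'))
# -> 00:00, 00:05, 00:10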
@ayhan If I have one month of data at a frequency of '10T' and there is a daily pattern (i.e. a 24-hour cycle), what should my seasonal and seasonal_periods parameters be? Should I use seasonal='mul' and seasonal_periods=1 for the month?

Answer 2:
This is a variation on the answer above by https://***.com/users/2285236/ayhan
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.metrics import mean_squared_error
from math import sqrt
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 7
df = pd.read_csv('D:/WORK/international-airline-passengers.csv',
                 parse_dates=['Month'],
                 index_col='Month')
df.index.freq = 'MS'
train, test = df.iloc[:132, 0], df.iloc[132:, 0]
# model = ExponentialSmoothing(train, seasonal='mul', seasonal_periods=12).fit()
model = ExponentialSmoothing(train, trend='add', seasonal='add', seasonal_periods=12, damped=True)
hw_model = model.fit(optimized=True, use_boxcox=False, remove_bias=False)
pred = hw_model.predict(start=test.index[0], end=test.index[-1])
plt.plot(train.index, train, label='Train')
plt.plot(test.index, test, label='Test')
plt.plot(pred.index, pred, label='Holt-Winters')
plt.legend(loc='best');
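Before tuning, you can sanity-check this fit against the held-out year (a small sketch, not in the original answer, reusing the test and pred variables defined above):

from math import sqrt
from sklearn.metrics import mean_squared_error

print('Hold-out RMSE:', round(sqrt(mean_squared_error(test, pred)), 3))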
Here is how I obtained the best parameters:
def exp_smoothing_configs(seasonal=[None]):
    models = list()
    # define config lists
    t_params = ['add', 'mul', None]
    d_params = [True, False]
    s_params = ['add', 'mul', None]
    p_params = seasonal
    b_params = [True, False]
    r_params = [True, False]
    # create config instances
    for t in t_params:
        for d in d_params:
            for s in s_params:
                for p in p_params:
                    for b in b_params:
                        for r in r_params:
                            cfg = [t, d, s, p, b, r]
                            models.append(cfg)
    return models
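# Note (my addition, not in the original answer): with seasonal=[12] the grid
# above yields 3 * 2 * 3 * 1 * 2 * 2 = 72 candidate configurations.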
cfg_list = exp_smoothing_configs(seasonal=[12]) #[0,6,12]
edf = df['Passengers']
ts = edf[:'1959-12-01'].copy()
ts_v = edf['1960-01-01':].copy()
ind = edf.index[-12:] # this will select last 12 months' indexes
print("Holt's Winter Model")
best_RMSE = np.inf
best_config = []
t1 = d1 = s1 = p1 = b1 = r1 = ''
for j in range(len(cfg_list)):
    print(j)
    try:
        cg = cfg_list[j]
        print(cg)
        t, d, s, p, b, r = cg
        train = edf[:'1959'].copy()
        test = edf['1960-01-01':'1960-12-01'].copy()
        # define model
        if t is None:
            model = ExponentialSmoothing(ts, trend=t, seasonal=s, seasonal_periods=p)
        else:
            model = ExponentialSmoothing(ts, trend=t, damped=d, seasonal=s, seasonal_periods=p)
        # fit model
        model_fit = model.fit(optimized=True, use_boxcox=b, remove_bias=r)
        # forecast the 12 test months
        y_forecast = model_fit.forecast(12)
        rmse = np.sqrt(mean_squared_error(ts_v, y_forecast))
        print(rmse)
        if rmse < best_RMSE:
            best_RMSE = rmse
            best_config = cfg_list[j]
    except:
        continue
A function to evaluate the model:
def model_eval(y, predictions):
    # Import library for metrics
    from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
    # Mean absolute error (MAE)
    mae = mean_absolute_error(y, predictions)
    # Mean squared error (MSE)
    mse = mean_squared_error(y, predictions)
    # SMAPE is an alternative for MAPE when there are zeros in the testing data. It
    # scales the absolute percentage by the sum of forecast and observed values
    SMAPE = np.mean(np.abs((y - predictions) / ((y + predictions) / 2))) * 100
    # Calculate the Root Mean Squared Error
    rmse = np.sqrt(mean_squared_error(y, predictions))
    # Calculate the Mean Absolute Percentage Error
    # y, predictions = check_array(y, predictions)
    MAPE = np.mean(np.abs((y - predictions) / y)) * 100
    # mean_forecast_error
    mfe = np.mean(y - predictions)
    # NMSE normalizes the obtained MSE after dividing it by the test variance. It
    # is a balanced error measure and is very effective in judging forecast
    # accuracy of a model.
    # normalised_mean_squared_error
    NMSE = mse / (np.sum((y - np.mean(y)) ** 2) / (len(y) - 1))
    # theil_u_statistic
    # It is a normalized measure of total forecast error.
    # (Separate names are used here so the metrics above are not overwritten.)
    error = y - predictions
    rms_pred = np.sqrt(np.mean(predictions ** 2))
    rms_obs = np.sqrt(np.mean(y ** 2))
    rms_err = np.sqrt(np.mean(error ** 2))
    theil_u_statistic = rms_err / (rms_pred * rms_obs)
    # mean_absolute_scaled_error
    # This evaluation metric is used to overcome some of the problems of MAPE and
    # is used to measure if the forecasting model is better than the naive model or
    # not.
    # Print metrics
    print('Mean Absolute Error:', round(mae, 3))
    print('Mean Squared Error:', round(mse, 3))
    print('Root Mean Squared Error:', round(rmse, 3))
    print('Mean absolute percentage error:', round(MAPE, 3))
    print('Scaled Mean absolute percentage error:', round(SMAPE, 3))
    print('Mean forecast error:', round(mfe, 3))
    print('Normalised mean squared error:', round(NMSE, 3))
    print('Theil_u_statistic:', round(theil_u_statistic, 3))
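# The comments above mention MASE but the function never computes it. A rough
# sketch (my addition, not in the original answer): MASE scales the MAE by the
# MAE of a one-step naive forecast, ideally taken over the training series.
def mase(y_true, y_pred, y_train):
    naive_mae = np.mean(np.abs(np.diff(np.asarray(y_train))))
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / naive_mae

# Hypothetical usage: mase(ts_v, y_forecast, ts)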
print(best_RMSE, best_config)
t1,d1,s1,p1,b1,r1 = best_config
if t1 is None:
    hw_model1 = ExponentialSmoothing(ts, trend=t1, seasonal=s1, seasonal_periods=p1)
else:
    hw_model1 = ExponentialSmoothing(ts, trend=t1, seasonal=s1, seasonal_periods=p1, damped=d1)
fit2 = hw_model1.fit(optimized=True, use_boxcox=b1, remove_bias=r1)
pred_HW = fit2.predict(start=pd.to_datetime('1960-01-01'), end = pd.to_datetime('1960-12-01'))
# pred_HW = fit2.forecast(12)
pred_HW = pd.Series(data=pred_HW, index=ind)
df_pass_pred = pd.concat([df, pred_HW.rename('pred_HW')], axis=1)
print(model_eval(ts_v, pred_HW))
print('-*-'*20)
# 15.570830579664698 ['add', True, 'add', 12, False, False]
# Mean Absolute Error: 10.456
# Mean Squared Error: 481.948
# Root Mean Squared Error: 15.571
# Mean absolute percentage error: 2.317
# Scaled Mean absolute percentage error: 2.273
# Mean forecast error: 483.689
# Normalised mean squared error: 0.04
# Theil_u_statistic: 0.0
# None
# -*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*--*-
Summary:

New model results:
Mean Absolute Error: 10.456
Mean Squared Error: 481.948
Root Mean Squared Error: 15.571
Mean absolute percentage error: 2.317
Scaled Mean absolute percentage error: 2.273
Mean forecast error: 483.689
Normalised mean squared error: 0.04
Theil_u_statistic: 0.0
Old model results:
Mean Absolute Error: 20.682
Mean Squared Error: 481.948
Root Mean Squared Error: 23.719
Mean absolute percentage error: 4.468
Scaled Mean absolute percentage error: 4.56
Mean forecast error: 466.704
Normalised mean squared error: 0.093
Theil_u_statistic: 0.0
Bonus:
You also get this nice dataframe, where you can compare the original values with the predicted ones.
df_pass_pred['1960':]
Output:
Passengers pred_HW
Month
1960-01-01 417 417.826543
1960-02-01 391 400.452916
1960-03-01 419 461.804259
1960-04-01 461 450.787208
1960-05-01 472 472.695903
1960-06-01 535 528.560823
1960-07-01 622 601.265794
1960-08-01 606 608.370401
1960-09-01 508 508.869849
1960-10-01 461 452.958727
1960-11-01 390 407.634391
1960-12-01 432 437.385058
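If you want to quantify that comparison row by row, one option is to add error columns (a small sketch, not in the original answer, assuming the df_pass_pred frame above):

comparison = df_pass_pred['1960':].copy()
comparison['abs_error'] = (comparison['Passengers'] - comparison['pred_HW']).abs()
comparison['pct_error'] = 100 * comparison['abs_error'] / comparison['Passengers']
print(comparison.round(2))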