Recommendation Algorithms in Practice
Author: shikanon
This post is about the Alimama search advertising conversion prediction competition. Since good results can be submitted to IJCAI, I decided to download the data and play with it.
Background
The competition uses Alibaba e-commerce advertising as its subject and provides massive real transaction data from the Taobao platform. Contestants build models to estimate purchase intent: given the user (user), ad item (ad), query (query), context (context), and shop (shop) associated with an ad click, predict the probability that the click leads to a purchase (pCVR). Formally: pCVR = P(conversion=1 | query, user, ad, context, shop).
Field descriptions
| Field | Description |
|---|---|
| instance_id | Sample ID, Long |
| is_trade | Transaction flag, Int; 0 or 1, where 1 means this sample ultimately led to a transaction and 0 means it did not |
| item_id | Ad item ID, Long |
| item_category_list | Category list of the ad item, String; ordered from the root category (coarsest) down to the leaf category (finest), concatenated as "category_0;category_1;category_2", where category_1 is a subcategory of category_0 and category_2 is a subcategory of category_1 |
item_property_list | 广告商品的属性列表,String类型;数据拼接格式为 "property_0; |
| item_brand_id | Brand ID of the ad item, Long |
| item_city_id | City ID of the ad item, Long |
| item_price_level | Price level of the ad item, Int; starts at 0, larger means more expensive |
| item_sales_level | Sales level of the ad item, Int; starts at 0, larger means higher sales |
| item_collected_level | Favorited-count level of the ad item, Int; starts at 0, larger means favorited more often |
| item_pv_level | Impression-count level of the ad item, Int; starts at 0, larger means shown more often |
| user_id | User ID, Long |
| user_gender_id | Predicted gender ID of the user, Int; 0 = female, 1 = male, 2 = family account |
| user_age_level | Predicted age level of the user, Int; larger means older |
| user_occupation_id | Predicted occupation ID of the user, Int |
| user_star_level | Star level of the user, Int; larger means a higher star level |
| context_id | Context ID, Long |
| context_timestamp | Impression time of the ad item, Long; a Unix timestamp in seconds, shifted by a whole number of days |
| context_page_id | Page number where the ad item was shown, Int; starts at 1 and increases; the first screen of a search result is page 1, the second is page 2 |
| predict_category_property | Category/property list predicted from the query, String; concatenated as "category_A:property_A_1,property_A_2,property_A_3;category_B:-1;category_C:property_C_1,property_C_2", where category_A, category_B, category_C are the three predicted categories; a property value of -1 (as for category_B) means that category has no predicted properties |
| shop_id | Shop ID, Long |
| shop_review_num_level | Review-count level of the shop, Int; starts at 0, larger means more reviews |
| shop_review_positive_rate | Positive-review rate of the shop, Double; between 0 and 1, larger is better |
| shop_star_level | Star level of the shop, Int; starts at 0, larger means a higher star level |
| shop_score_service | Service-attitude score of the shop, Double; between 0 and 1, larger is better |
| shop_score_delivery | Delivery score of the shop, Double; between 0 and 1, larger is better |
| shop_score_description | Description-accuracy score of the shop, Double; between 0 and 1, larger is better |
2. Field Vectorization
Variable categories
```python
# single-value categorical variables
single_var = ['user_occupation_id', 'user_gender_id', 'item_city_id', 'item_brand_id']
# multi-value variables
multi_var = ['item_category_list', 'item_property_list']
# ordinal (monotonically increasing) variables
rank_var = ['shop_star_level', 'context_page_id', 'user_star_level', 'user_age_level',
            'item_pv_level', 'item_collected_level', 'item_sales_level',
            'item_price_level', 'shop_review_num_level']
# variables already on a standard 0-1 scale
standard_var = ['shop_score_description', 'shop_score_delivery', 'shop_score_service',
                'shop_review_positive_rate']
# time variable
datetime_var = ['context_timestamp']
# field with a special format, handled separately
unknown_var = ['predict_category_property']
# target variable
pred_var = ['is_trade']
```
2.1 Fields suited to one-hot encoding
Categorical variables can be handled with one-hot encoding. A LabelEncoder first maps each category value to an integer index:
```python
from sklearn.preprocessing import LabelEncoder

lbl = LabelEncoder()
lbl_data = lbl.fit_transform(train_data['user_occupation_id'].values)
```
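The integer codes are only an intermediate step; a minimal sketch of the one-hot step itself, as the pipeline in section 2.7 also does with OneHotEncoder:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# turn the integer codes into a sparse one-hot matrix;
# reshape(-1, 1) because OneHotEncoder expects a 2-D array
ohe = OneHotEncoder()
one_hot_data = ohe.fit_transform(np.asarray(lbl_data).reshape(-1, 1))
```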
2.2 Vectorizing multi-level categories
Multi-level categories can be encoded like a tree, level by level: level 1 has x parent categories, level 2 has y, and so on down to m categories at level k. Each level occupies its own one-hot segment:

```
[001000...0] [00...1000] .... [0001000]
|----x----|  |----y----| .... |----m----|
```
Each category is one-hot filled at its position within its level's segment. First split the category list into per-level columns:
```python
import numpy as np
import pandas as pd

# split the category list by level; pad shorter lists with NaN
max_category_num = train_data['item_category_list'].apply(lambda x: len(x.split(';'))).max()
features = list()
for n in range(max_category_num):
    features.append(train_data['item_category_list'].apply(
        lambda x: x.split(';')[n] if len(x.split(';')) > n else np.nan))
merge_features = pd.concat(features, axis=1)
```
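From here, one way to realize the per-level one-hot filling described above (a sketch; the column names are made up for illustration, and section 2.7's multilevel_encoder packages the same idea):

```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

merge_features.columns = ['level_%d' % i for i in range(max_category_num)]  # hypothetical names
merge_features = merge_features.fillna('')  # treat a missing level as an empty category

# integer-encode each level column, then one-hot the stacked codes
codes = np.column_stack([LabelEncoder().fit_transform(merge_features[c].values)
                         for c in merge_features.columns])
category_one_hot = OneHotEncoder().fit_transform(codes)
```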
2.3 Vectorizing multi-value properties
The properties have no hierarchy among them, so we can collect the full set of properties and vectorize each sample as a multi-label indicator. For example, with x properties in total, a sample carrying properties 1, 3, and 4 is represented as:

```
[1011....0]
|----x----|
```
The implementation:
```python
import itertools
from sklearn.preprocessing import MultiLabelBinarizer

multi_label_col = train_data['item_property_list'].apply(lambda x: x.split(';'))
multi_label_classes = set(itertools.chain.from_iterable(multi_label_col.tolist()))
# sparse_output=True is essential: a dense indicator matrix over all
# properties easily blows up memory, so keep the output in CSR format
mlb = MultiLabelBinarizer(classes=list(multi_label_classes), sparse_output=True)
mlb.fit(multi_label_col)
y_indicator = mlb.transform(multi_label_col)  # returns a CSR matrix
```
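A side note on using the CSR block downstream: it can be glued to dense feature columns with scipy's sparse hstack. A sketch, where dense_features is a stand-in name for any dense block:

```python
from scipy import sparse

# hypothetical dense block standing in for the normalized numeric features
dense_features = train_data[['shop_score_service', 'shop_score_delivery']].values

# hstack keeps everything sparse; convert to CSR for fast row slicing
combined = sparse.hstack([sparse.csr_matrix(dense_features), y_indicator]).tocsr()
```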
2.4 Vectorizing predict_category_property
A more complex case is the predict_category_property field, which mixes categories and properties with two different separators. A simple treatment is to split on both separators and keep the pieces as a flat set:
```python
import re

def split_method(string, symbol=';|:'):
    return set(re.split(symbol, string))
```
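Applied in the same way as the property list in section 2.3 (this mirrors what the special_encoder class in section 2.7 does):

```python
import itertools
from sklearn.preprocessing import MultiLabelBinarizer

pcp_col = train_data['predict_category_property'].apply(lambda s: split_method(s, ';|:'))
pcp_classes = set(itertools.chain.from_iterable(pcp_col.tolist()))
pcp_mlb = MultiLabelBinarizer(classes=list(pcp_classes), sparse_output=True)
pcp_indicator = pcp_mlb.fit_transform(pcp_col)  # CSR matrix
```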
2.5 Time-series processing
The time series needs care here: the raw data was shifted by whole days, so the original year/month/day information is scrambled, and only the hour and minute are usable. We can also bucket time into half-hour or 15-minute units to capture purchase habits.
```python
import numpy as np
import pandas as pd

# convert the integer timestamps to datetimes
train_data['context_timestamp'] = pd.to_datetime(train_data['context_timestamp'], unit='s')
context_hours = train_data['context_timestamp'].dt.hour
context_minutes = train_data['context_timestamp'].dt.minute
# minutes since midnight
context_minutes = context_minutes + context_hours * 60

# map the time of day onto a periodic scale
def circle(x):
    return np.sin(np.pi * x / (x.max() + 1))

train_data['minute'] = circle(context_minutes)
```
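Note that a single sine maps early-morning and late-evening minutes to similar values. A common alternative (not part of the original write-up) is a sine/cosine pair over a full 2π day cycle, which keeps every minute of the day distinguishable:

```python
minutes_in_day = 24 * 60
# two coordinates on the daily circle instead of one
train_data['minute_sin'] = np.sin(2 * np.pi * context_minutes / minutes_in_day)
train_data['minute_cos'] = np.cos(2 * np.pi * context_minutes / minutes_in_day)
```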
An important aspect of time-series processing is browsing order. First, turn the timestamps into an ordinal value:
```python
# minutes elapsed on an absolute scale: an ordinal measure of time
all_data['time_series'] = (all_data['context_timestamp'].dt.hour
                           + all_data['context_timestamp'].dt.day * 24) * 60 \
                          + all_data['context_timestamp'].dt.minute
```

`time_series` grows monotonically with time, so a larger value means a later view.
```python
# latest view time per user
new_time_col = all_data.groupby('user_id').max().reset_index()[['user_id', 'time_series']]
new_time_col.columns = ['user_id', 'time_series_first']
all_data = pd.merge(all_data, new_time_col, on='user_id', how='left')
# how far through the user's overall browsing window this view falls
all_data['custom_field3'] = all_data['time_series'] / all_data['time_series_first']

# relate the user's views to the second-level category
all_data['item_category_two_level'] = all_data['item_category_list'].apply(lambda x: x.split(';')[1])
new_time_col = all_data.groupby(['user_id', 'item_category_two_level']).max().reset_index()[
    ['user_id', 'item_category_two_level', 'time_series']]
new_time_col.columns = ['user_id', 'item_category_two_level', 'time_series_sec']
all_data = pd.merge(all_data, new_time_col, on=['user_id', 'item_category_two_level'], how='left')
# how far through the user's browsing window for this category this view falls
all_data['custom_field4'] = all_data['time_series'] / all_data['time_series_sec']
```
2.6 Browsing-frequency features
Build count features for browsing behavior, e.g. how often a user appears and how often an item appears.
```python
# per-user view counts
new_col = all_data.groupby('user_id').count().reset_index()[['user_id', 'instance_id']]
new_col.columns = ['user_id', 'custom_field1']
all_data = pd.merge(all_data, new_col, on='user_id', how='left')
# per-item view counts
new_col = all_data.groupby('item_id').count().reset_index()[['item_id', 'instance_id']]
new_col.columns = ['item_id', 'custom_field2']
all_data = pd.merge(all_data, new_col, on='item_id', how='left')
```
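The same counts can be written more compactly with groupby().transform, which avoids the intermediate frame and the merge:

```python
# equivalent one-liners for the two count features
all_data['custom_field1'] = all_data.groupby('user_id')['instance_id'].transform('count')
all_data['custom_field2'] = all_data.groupby('item_id')['instance_id'].transform('count')
```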
2.7 Feature processing with a pipeline
To standardize the processing steps, we can use scikit-learn's Pipeline.
First, wrap the different processing formats above as Transformer classes:
```python
import itertools
import numpy as np
import pandas as pd
import sklearn
from sklearn.preprocessing import MinMaxScaler, LabelEncoder, MultiLabelBinarizer


class select_vals(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin):
    '''Select the required columns.'''
    def __init__(self, cols):
        self.cols = cols

    def fit(self, x):
        return self

    def transform(self, x):
        return x[self.cols].values


class normalization(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin):
    '''Min-max normalization.'''
    def __init__(self, cols):
        self.cols = cols
        self.encoder = MinMaxScaler()

    def fit(self, x):
        return self

    def transform(self, x):
        return self.encoder.fit_transform(x[self.cols])


class label_encoder(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin):
    '''LabelEncoder applied to several columns.'''
    def __init__(self, cols):
        self.cols = cols
        self.encoder = LabelEncoder()

    def fit(self, x):
        return self

    def transform(self, x):
        x = np.array([self.encoder.fit_transform(x[col].values) for col in self.cols])
        return x.T


class multilevel_encoder(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin):
    '''Encoder for multi-level categories (section 2.2).'''
    def __init__(self, col):
        self.col = col

    def fit(self, x):
        return self

    def transform(self, x):
        max_category_num = x[self.col].apply(lambda x: len(x.split(';'))).max()
        features = list()
        for n in range(max_category_num):
            features.append(x[self.col].apply(
                lambda x: x.split(';')[n] if len(x.split(';')) > n else ''))
        merge_features = pd.concat(features, axis=1)
        merge_features.columns = ['multilevel_' + str(i) for i in range(len(features))]
        lblencoder = LabelEncoder()
        x = np.array([lblencoder.fit_transform(merge_features[col].values)
                      for col in merge_features.columns])
        return x.T


class multi_label_encoder(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin):
    '''Multi-label encoding (section 2.3); stored as CSR because the dimensionality is huge.'''
    def __init__(self, col):
        self.col = col

    def fit(self, x):
        return self

    def transform(self, x):
        multi_label_col = x[self.col].apply(lambda x: x.split(';'))
        multi_label_classes = set(itertools.chain.from_iterable(multi_label_col.tolist()))
        # sparse_output=True is required, otherwise the dense matrix blows up memory
        self.encoder = MultiLabelBinarizer(classes=list(multi_label_classes), sparse_output=True)
        return self.encoder.fit_transform(multi_label_col)  # CSR matrix


class special_encoder(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin):
    '''Encoder for the predict_category_property field (section 2.4).'''
    def __init__(self, col):
        self.col = col

    def fit(self, x):
        return self

    def transform(self, x):
        multi_label_col = x[self.col].apply(lambda s: split_method(s, ';|:'))
        multi_label_classes = set(itertools.chain.from_iterable(multi_label_col.tolist()))
        self.encoder = MultiLabelBinarizer(classes=list(multi_label_classes), sparse_output=True)
        return self.encoder.fit_transform(multi_label_col)  # CSR matrix


class timer_encoder(sklearn.base.BaseEstimator, sklearn.base.TransformerMixin):
    '''Periodic encoding of the time column (section 2.5).'''
    def __init__(self, col):
        self.col = col

    def fit(self, x):
        return self

    def transform(self, x):
        context_hours = x[self.col].dt.hour
        context_minutes = x[self.col].dt.minute
        # minutes since midnight
        context_minutes = context_minutes + context_hours * 60
        result = np.sin(np.pi * context_minutes / (context_minutes.max() + 1))
        return result.values.reshape(-1, 1)
```
Then assemble everything in a pipeline:
```python
from sklearn import pipeline
from sklearn.preprocessing import OneHotEncoder

# variable groups, as defined in section 2
single_var = ['user_occupation_id', 'user_gender_id', 'item_city_id', 'item_brand_id']
multi_var = ['item_category_list', 'item_property_list']
rank_var = ['shop_star_level', 'context_page_id', 'user_star_level', 'user_age_level',
            'item_pv_level', 'item_collected_level', 'item_sales_level', 'item_price_level']
standard_var = ['shop_score_description', 'shop_score_delivery', 'shop_score_service',
                'shop_review_positive_rate', 'shop_review_num_level']
# custom feature fields built in sections 2.5-2.6
custom_var = ['custom_field1', 'custom_field2']

# replace negative placeholder values with 0
for col in standard_var:
    all_data.loc[all_data[col] < 0, col] = 0

ppln = pipeline.Pipeline([
    ('union', pipeline.FeatureUnion(
        n_jobs=-1,
        transformer_list=[
            ('origin', select_vals(standard_var)),
            ('normalization', normalization(rank_var)),
            ('custom', normalization(custom_var)),
            ('time_dealing', timer_encoder('context_timestamp')),
            ('one_label_encoder', pipeline.Pipeline([
                ('encoder', label_encoder(single_var)),
                ('one_hot', OneHotEncoder())])),
            ('multilevel_encoder', pipeline.Pipeline([
                ('encoder', multilevel_encoder('item_category_list')),
                ('one_hot', OneHotEncoder())])),
            ('multilable_encoder', multi_label_encoder('item_property_list')),
            ('predict_category_property', special_encoder('predict_category_property')),
        ]))
])
ppln.fit_transform(train_data)
```
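One caveat: these transformers fit inside transform, so running the pipeline separately on train and test would produce encoders with misaligned columns. A sketch of one way to keep them aligned, assuming all_data (used in earlier sections) is the concatenation of train and test rows:

```python
# transform train and test together so all encoders share one vocabulary,
# then split the rows back apart; .tocsr() assumes the union output is sparse
# (the multi-label blocks make it so)
all_x = ppln.fit_transform(all_data).tocsr()
train_x = all_x[:len(train_data)]
test_x = all_x[len(train_data):]
```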
2.8 Saving the training features
```python
import numpy as np
from scipy.sparse import csr_matrix

# save a CSR matrix via numpy's npz container
def save_sparse_csr(filename, array):
    np.savez(filename, data=array.data, indices=array.indices,
             indptr=array.indptr, shape=array.shape)

# load it back
def load_sparse_csr(filename):
    loader = np.load(filename)
    return csr_matrix((loader['data'], loader['indices'], loader['indptr']),
                      shape=loader['shape'])

# for example
save_sparse_csr('train_features_savez.csr', train_x)
```
3. Models
3.1 Traditional classification methods
A conventional gradient boosting decision tree (GBDT):
```python
import lightgbm as lgb

# train a GBDT model
gbdt = lgb.LGBMClassifier(objective='binary',
                          num_leaves=64,
                          learning_rate=0.01,
                          n_estimators=2000,
                          colsample_bytree=0.65,
                          subsample=0.75)
gbdt.fit(train_x, train_y,
         eval_set=[(valid_x, valid_y)],
         eval_metric='logloss',
         early_stopping_rounds=200)
```
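To sanity-check against the competition metric, the validation log-loss can be computed from the predicted conversion probabilities (valid_x and valid_y are the held-out split assumed above):

```python
from sklearn.metrics import log_loss

# predicted probability of conversion on the validation set
valid_pred = gbdt.predict_proba(valid_x)[:, 1]
print('validation logloss: %.5f' % log_loss(valid_y, valid_pred))
```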
DART (Dropouts meet Multiple Additive Regression Trees):
```python
# train a DART model
dart = lgb.LGBMClassifier(boosting_type='dart',
                          objective='binary',
                          num_leaves=64,
                          learning_rate=0.02,
                          n_estimators=3000,
                          colsample_bytree=0.65,
                          subsample=0.75)
dart.fit(train_x, train_y,
         eval_set=[(valid_x, valid_y)],
         eval_metric='logloss',
         early_stopping_rounds=200)
```
3.2 Factorization Machine
An FM model can be seen as an LR-style linear part plus a nonlinear part built from pairwise feature crosses x_i x_j.
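Written out, the standard second-order FM prediction is:

$$\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j$$

where $w_0$ and $w_i$ form the linear (LR) part, and each pairwise interaction weight is factorized as the inner product of latent vectors $\mathbf{v}_i, \mathbf{v}_j \in \mathbb{R}^k$, which lets the model estimate crosses that never co-occur in the training data.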
3.3 Embedding Model
3.4 Ensemble Model
Stacking model implementation:
```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold


class StackingAveragedModels():
    def __init__(self, base_models, meta_model, n_folds=15):
        self.base_models = base_models
        self.meta_model = meta_model
        self.n_folds = n_folds

    # We again fit the data on clones of the original models
    def fit(self, X, y):
        self.base_models_ = [list() for x in self.base_models]
        self.meta_model_ = clone(self.meta_model)
        kfold = KFold(n_splits=self.n_folds, shuffle=True)

        # Train cloned base models, then create the out-of-fold predictions
        # that are needed to train the cloned meta-model
        out_of_fold_predictions = np.zeros((X.shape[0], len(self.base_models)))
        for i, model in enumerate(self.base_models):
            for train_index, holdout_index in kfold.split(X, y):
                instance = clone(model)
                self.base_models_[i].append(instance)
                try:
                    instance.fit(X[train_index], y[train_index], verbose=False)
                except TypeError:
                    # some estimators do not accept a verbose keyword
                    instance.fit(X[train_index], y[train_index])
                y_pred = instance.predict(X[holdout_index])
                out_of_fold_predictions[holdout_index, i] = y_pred

        # Now train the cloned meta-model using the out-of-fold predictions as new features
        self.meta_model_.fit(out_of_fold_predictions, y)
        return self

    # Run all base models on the test data and use their averaged predictions
    # as meta-features for the final prediction, which is done by the meta-model
    def predict(self, X):
        meta_features = np.column_stack([
            np.column_stack([model.predict(X) for model in base_models]).mean(axis=1)
            for base_models in self.base_models_])
        return self.meta_model_.predict(meta_features)
```
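A usage sketch, with the two LightGBM models from section 3.1 as base learners and a logistic regression as the meta-model; the model choice here is illustrative rather than the author's final configuration:

```python
from sklearn.linear_model import LogisticRegression

# stack the GBDT and DART models defined above
stacked = StackingAveragedModels(base_models=[gbdt, dart],
                                 meta_model=LogisticRegression(),
                                 n_folds=5)
stacked.fit(train_x, train_y)
stacked_pred = stacked.predict(valid_x)
```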