Parameters:
- params (dict) – Parameters for training.
- train_set (Dataset) – Data to be trained on.
- num_boost_round (int, optional (default=100)) – Number of boosting iterations.
- valid_sets (list of Datasets or None, optional (default=None)) – List of data to be evaluated during training.
- valid_names (list of strings or None, optional (default=None)) – Names of valid_sets.
- fobj (callable or None, optional (default=None)) – Customized objective function. Should accept two parameters, preds and train_data, and return (grad, hess): the first- and second-order derivatives of the loss for each row (see the objective sketch after this parameter list).
- feval (callable or None, optional (default=None)) – Customized evaluation function. Should accept two parameters, preds and train_data, and return (eval_name, eval_result, is_higher_better) or a list of such tuples. For a multi-class task, preds are grouped by class_id first, then by row_id: to get the prediction for the i-th row in the j-th class, access preds[j * num_data + i]. To ignore the default metric corresponding to the used objective, set the metric parameter to the string "None" in params (see the metric sketch after this list).
- init_model (string, Booster or None, optional (default=None)) – Filename of a LightGBM model or a Booster instance used to continue training (see the continued-training sketch after this list).
- feature_name (list of strings or 'auto', optional (default="auto")) – Feature names. If 'auto' and data is a pandas DataFrame, data column names are used.
- categorical_feature (list of strings or int, or 'auto', optional (default="auto")) – Categorical features. If list of int, interpreted as indices. If list of strings, interpreted as feature names (feature_name needs to be specified as well). If 'auto' and data is a pandas DataFrame, pandas categorical columns are used. All values in categorical features should be less than the int32 max value (2147483647). All negative values in categorical features will be treated as missing values.
- early_stopping_rounds (int or None, optional (default=None)) – Activates early stopping. The model will train until the validation score stops improving: the score needs to improve at least once every early_stopping_rounds round(s) for training to continue. Requires at least one validation set and one metric. If there is more than one metric, all of them will be checked; the training data is ignored in either case. If early stopping occurs, the model will have a best_iteration field (see the combined training-call sketch after this list).
- evals_result (dict or None, optional (default=None)) – Dictionary used to store all evaluation results of all the items in valid_sets.
Example
With valid_sets = [valid_set, train_set], valid_names = ['eval', 'train'] and params = {'metric': 'logloss'}, returns: {'train': {'logloss': ['0.48253', '0.35953', ...]}, 'eval': {'logloss': ['0.480385', '0.357756', ...]}}.
- verbose_eval (bool or int, optional (default=True)) –
Requires at least one validation set. If True, the eval metric on the valid set is printed at each boosting stage. If int, the eval metric on the valid set is printed every verbose_eval boosting stages. The last boosting stage, or the boosting stage found by using early_stopping_rounds, is also printed.
Example
With verbose_eval = 4 and at least one item in valid_sets, an evaluation metric is printed every 4 (instead of 1) boosting stages.
- learning_rates (list, callable or None, optional (default=None)) – List of learning rates for each boosting round, or a customized function that calculates learning_rate from the current round number (e.g. to apply learning-rate decay; see the learning-rate sketch after this list).
- keep_training_booster (bool, optional (default=False)) – Whether the returned Booster will be used to keep training. If False, the returned value will be converted into an _InnerPredictor before returning. You can still use an _InnerPredictor as init_model to continue training later (see the continued-training sketch after this list).
- callbacks (list of callables or None, optional (default=None)) – List of callback functions that are applied at each iteration. See Callbacks in Python API for more information (a callbacks sketch follows this list).
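A minimal sketch of a custom objective for fobj, assuming a binary task; logistic_obj is a hypothetical name, not part of the LightGBM API:

```python
import numpy as np

def logistic_obj(preds, train_data):
    # Hypothetical custom objective: binary log loss on raw scores.
    labels = train_data.get_label()
    probs = 1.0 / (1.0 + np.exp(-preds))  # sigmoid maps raw scores to probabilities
    grad = probs - labels                 # first derivative of the loss per row
    hess = probs * (1.0 - probs)          # second derivative per row
    return grad, hess

# booster = lgb.train(params, train_set, fobj=logistic_obj)
```

Note that when a custom objective is used, the preds passed to it (and to feval) are the raw scores, not transformed probabilities.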
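A sketch of a custom metric for feval that applies the multi-class indexing described above; multiclass_accuracy is an illustrative name:

```python
import numpy as np

def multiclass_accuracy(preds, train_data):
    # preds arrives as a flat array grouped by class_id first, then row_id:
    # preds[j * num_data + i] is row i's score for class j.
    labels = train_data.get_label()
    num_data = len(labels)
    num_class = len(preds) // num_data
    pred_class = preds.reshape(num_class, num_data).argmax(axis=0)
    # Must return (eval_name, eval_result, is_higher_better).
    return 'accuracy', float(np.mean(pred_class == labels)), True

# booster = lgb.train({'objective': 'multiclass', 'num_class': 3, 'metric': 'None'},
#                     train_set, feval=multiclass_accuracy)
```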
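A combined training-call sketch showing valid_sets, valid_names, early_stopping_rounds, evals_result and verbose_eval together; the synthetic data and all parameter values are illustrative:

```python
import numpy as np
import lightgbm as lgb

# Tiny synthetic binary-classification data, purely for illustration.
rng = np.random.RandomState(0)
X, y = rng.rand(1000, 5), rng.randint(0, 2, 1000)
train_set = lgb.Dataset(X[:800], label=y[:800])
valid_set = lgb.Dataset(X[800:], label=y[800:], reference=train_set)

params = {'objective': 'binary', 'metric': 'binary_logloss'}
evals_result = {}  # filled in place with per-iteration metric values
booster = lgb.train(
    params,
    train_set,
    num_boost_round=500,
    valid_sets=[train_set, valid_set],
    valid_names=['train', 'eval'],
    early_stopping_rounds=20,    # stop when 'eval' logloss stalls for 20 rounds
    evals_result=evals_result,
    verbose_eval=50,             # print metrics every 50 boosting stages
)

print(booster.best_iteration)                      # set when early stopping fires
print(evals_result['eval']['binary_logloss'][:5])  # history, keyed by valid_names
```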
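A learning-rate sketch using the learning_rates callable, reusing params and train_set from the previous sketch; the decay schedule itself is arbitrary:

```python
# The callable receives the current round number and returns that round's
# learning rate; here a starting rate of 0.1 decays by 1% per round.
booster = lgb.train(params, train_set, num_boost_round=100,
                    learning_rates=lambda round_num: 0.1 * (0.99 ** round_num))
```

A plain list with one rate per round (e.g. [0.1] * 50 + [0.05] * 50 for 100 rounds) works the same way.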
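A continued-training sketch combining init_model and keep_training_booster, again reusing params and train_set from the combined sketch:

```python
# First stage: keep the returned Booster trainable instead of converting it.
booster = lgb.train(params, train_set, num_boost_round=100,
                    keep_training_booster=True)

# Second stage: resume from the in-memory model; a saved model filename
# passed as init_model works the same way.
booster = lgb.train(params, train_set, num_boost_round=100,
                    init_model=booster)
```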
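A callbacks sketch using two built-in callbacks, lgb.record_evaluation and lgb.reset_parameter, with the other names reused from the combined sketch; the schedule values are illustrative:

```python
history = {}
booster = lgb.train(
    params, train_set, num_boost_round=100,
    valid_sets=[valid_set], valid_names=['eval'],
    callbacks=[
        lgb.record_evaluation(history),  # same bookkeeping as evals_result
        lgb.reset_parameter(learning_rate=lambda i: 0.1 * (0.99 ** i)),
    ],
)
```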