A Brief Look at GBDT Through the sklearn Source Code
GBDT, short for Gradient Boosting Decision Tree, is in my opinion a contender for the top spot among classical machine-learning algorithms, and its descendants XGBoost, LightGBM, and CatBoost are, I would argue, the strongest performers on tabular data. There are already many excellent explanations of GBDT's theory online, so I will not repeat them here; I recommend 刘建平's blog post at https://www.cnblogs.com/pinard/p/6140514.html. This article instead analyzes how GBDT builds its model, straight from the sklearn source code.
GBDT handles both regression and classification. Whether we use sklearn's GradientBoostingClassifier or GradientBoostingRegressor, both inherit from the same parent class, BaseGradientBoosting, which I analyze in detail below. (For compatibility, sklearn's code carries a lot of input-validation work; I have trimmed that part to drop the baggage and cut straight to the essentials.)
The parameters of BaseGradientBoosting's __init__ exist mainly for tuning; the defaults alone are usually enough to build a decent baseline (see the sketch after the code below). They include the number of boosting rounds, the learning rate, the loss, and parameters controlling how the trees grow and split.
def __init__(self, loss, learning_rate, n_estimators, criterion,
             min_samples_split, min_samples_leaf, min_weight_fraction_leaf,
             max_depth, min_impurity_decrease, min_impurity_split, init,
             subsample, max_features, ccp_alpha, random_state, alpha=0.9,
             verbose=0, max_leaf_nodes=None, warm_start=False,
             presort='deprecated', validation_fraction=0.1,
             n_iter_no_change=None, tol=1e-4):
    self.n_estimators = n_estimators
    self.learning_rate = learning_rate
    self.loss = loss
    self.criterion = criterion
    self.min_samples_split = min_samples_split
    self.min_samples_leaf = min_samples_leaf
    self.min_weight_fraction_leaf = min_weight_fraction_leaf
    self.subsample = subsample
    self.max_features = max_features
    self.max_depth = max_depth
    self.min_impurity_decrease = min_impurity_decrease
    self.min_impurity_split = min_impurity_split
    self.ccp_alpha = ccp_alpha
    self.init = init
    self.random_state = random_state
    self.alpha = alpha
    self.verbose = verbose
    self.max_leaf_nodes = max_leaf_nodes
    self.warm_start = warm_start
    self.presort = presort
    self.validation_fraction = validation_fraction
    self.n_iter_no_change = n_iter_no_change
    self.tol = tol
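To make the "defaults are a decent baseline" claim concrete, here is a minimal usage sketch against the public sklearn API; make_regression merely stands in for real data:

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Default parameters: 100 rounds, learning_rate=0.1, max_depth=3, etc.
gbr = GradientBoostingRegressor(random_state=0)
gbr.fit(X_train, y_train)
print(gbr.score(X_test, y_test))  # R^2 on the held-out split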
Now let's look directly at the fit method. First comes warm_start, which decides whether the previously trained estimators are cleared, i.e. whether we retrain from scratch. Next is sample_weight: if not supplied, every sample's weight is initialized to 1. Then the model is initialized: with init='zero' the initial raw predictions are all zeros, but we can also supply our own base estimator that predicts an initial y from X.
def fit(self, X, y, sample_weight=None, monitor=None):
    if not self.warm_start:
        self._clear_state()
    n_samples, self.n_features_ = X.shape
    sample_weight_is_none = sample_weight is None
    if sample_weight_is_none:
        sample_weight = np.ones(n_samples, dtype=np.float32)
    # hold out a validation split (in the full source this only happens
    # when n_iter_no_change is set, i.e. when early stopping is requested)
    X, X_val, y, y_val, sample_weight, sample_weight_val = train_test_split(
        X, y, sample_weight, random_state=self.random_state,
        test_size=self.validation_fraction)
    if self.init_ == 'zero':
        raw_predictions = np.zeros(shape=(X.shape[0], self.loss_.K),
                                   dtype=np.float64)
    else:
        if sample_weight_is_none:
            self.init_.fit(X, y)
        else:
            msg = ("The initial estimator {} does not support sample "
                   "weights.".format(self.init_.__class__.__name__))
            try:
                self.init_.fit(X, y, sample_weight=sample_weight)
            except TypeError:
                raise ValueError(msg)
            except ValueError as e:
                if "pass parameters to specific steps of " \
                        "your pipeline using the " \
                        "stepname__parameter" in str(e):
                    raise ValueError(msg) from e
                else:
                    raise
        raw_predictions = self.loss_.get_init_raw_predictions(X, self.init_)
    begin_at_stage = 0
    self._rng = self.random_state
    X_idx_sorted = None
    n_stages = self._fit_stages(
        X, y, raw_predictions, sample_weight, self._rng,
        X_val, y_val, sample_weight_val, begin_at_stage, monitor,
        X_idx_sorted)
    self.n_estimators_ = n_stages
    return self
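Both initialization branches are reachable from the public API. A sketch of the two options described above, on toy data:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

X = np.random.RandomState(0).rand(200, 5)
y = X[:, 0] * 3 + np.random.RandomState(1).randn(200)

# init='zero': boosting starts from raw_predictions of all zeros
gbr_zero = GradientBoostingRegressor(init='zero', random_state=0).fit(X, y)

# a custom base learner: its prediction becomes the initial raw_predictions
gbr_lin = GradientBoostingRegressor(init=LinearRegression(), random_state=0).fit(X, y)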
Next comes the core _fit_stages, which builds the entire GBDT model. _fit_stages simply iterates _fit_stage, so we can look directly at _fit_stage's source.
The parameter i is the current boosting round. Note the multi-class case: with K classes, each round's estimator consists of K regression trees (a quick check of this follows the code below). The heart of GBDT is the negative-gradient update, which corresponds to the loss part of the code.
def _fit_stage(self, i, X, y, raw_predictions, sample_weight, random_state,
               X_idx_sorted):
    loss = self.loss_
    original_y = y
    raw_predictions_copy = raw_predictions.copy()
    for k in range(loss.K):
        if loss.is_multi_class:
            # one-vs-rest binary target for class k
            y = np.array(original_y == k, dtype=np.float64)
        # the residual is the negative gradient of the loss
        residual = loss.negative_gradient(y, raw_predictions_copy, k=k,
                                          sample_weight=sample_weight)
        tree = DecisionTreeRegressor(
            criterion=self.criterion,
            splitter='best',
            max_depth=self.max_depth,
            min_samples_split=self.min_samples_split,
            min_samples_leaf=self.min_samples_leaf,
            min_weight_fraction_leaf=self.min_weight_fraction_leaf,
            min_impurity_decrease=self.min_impurity_decrease,
            min_impurity_split=self.min_impurity_split,
            max_features=self.max_features,
            max_leaf_nodes=self.max_leaf_nodes,
            random_state=random_state,
            ccp_alpha=self.ccp_alpha)
        # fit a regression tree to the residual, not to the original y
        tree.fit(X, residual, sample_weight=sample_weight,
                 check_input=False, X_idx_sorted=X_idx_sorted)
        # re-estimate leaf values and fold the new tree into raw_predictions
        loss.update_terminal_regions(
            tree.tree_, X, y, residual, raw_predictions, sample_weight,
            learning_rate=self.learning_rate, k=k)
        self.estimators_[i, k] = tree
    return raw_predictions
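The K-trees-per-round structure is directly visible on a fitted model. A quick check through the public API, with synthetic 3-class data standing in for a real dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
gbc = GradientBoostingClassifier(n_estimators=10, random_state=0).fit(X, y)
print(gbc.estimators_.shape)  # (10, 3): one regression tree per class per round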
Different losses compute the negative gradient differently (raw_predictions is the model's current prediction for X); a numerical sanity check of one of these formulas follows the list. For example:
· Regression: MSE (least squares)
residual = y - raw_predictions.ravel()
· Regression: MAE (least absolute deviation)
residual = 2 * (y - raw_predictions.ravel() > 0) - 1  # the sign of the error
· Regression: Huber
raw_predictions = raw_predictions.ravel()
diff = y - raw_predictions
gamma = np.percentile(np.abs(diff), self.alpha * 100)
gamma_mask = np.abs(diff) <= gamma
residual = np.zeros((y.shape[0],), dtype=np.float64)
residual[gamma_mask] = diff[gamma_mask]
residual[~gamma_mask] = gamma * np.sign(diff[~gamma_mask])
· Regression: Quantile
alpha = self.alpha  # the target quantile level
raw_predictions = raw_predictions.ravel()
mask = y > raw_predictions
residual = (alpha * mask) - ((1 - alpha) * ~mask)
· Binary classification: Logistic (sigmoid)
residual = y - scipy.special.expit(raw_predictions.ravel())
· Binary classification: Exponential loss (from AdaBoost)
y_ = -(2. * y - 1.)
residual = y_ * np.exp(y_ * raw_predictions.ravel())
· Multi-class: Softmax
residual = y - np.nan_to_num(np.exp(raw_predictions[:, k] - logsumexp(raw_predictions, axis=1)))
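As promised, a sanity check of one of these formulas. This sketch is my own, not sklearn code: the logistic residual y - sigmoid(F) should match the negative gradient of the per-sample log-loss L(F) = log(1 + exp(F)) - y*F, which we verify with a central finite difference:

import numpy as np
from scipy.special import expit

rng = np.random.RandomState(0)
y = rng.randint(0, 2, size=5).astype(float)
F = rng.randn(5)          # raw_predictions
eps = 1e-6

def logloss(F):
    return np.logaddexp(0.0, F) - y * F   # element-wise per-sample log-loss

num_grad = (logloss(F + eps) - logloss(F - eps)) / (2 * eps)
residual = y - expit(F)                    # the formula from the list above
assert np.allclose(residual, -num_grad, atol=1e-5)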
A small aside: Zhihu user 石塔西 once posed the following question. For an m*n dataset modeled with GBDT, how many dimensions does the gradient have? m? n? m*n? Is it related to the depth of the decision trees, or to the number of leaf nodes, or to both? Link: https://www.zhihu.com/question/62482926/answer/526988250. From the source code the answer falls out immediately: the gradient is m-dimensional, one entry per sample (for K classes, each of the K trees per round fits its own m-dimensional residual).
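The shape is easy to confirm from the MSE formula above; a three-line sketch with made-up sizes:

import numpy as np

m, n = 100, 20                           # m samples, n features
y = np.random.RandomState(0).rand(m)
raw_predictions = np.zeros((m, 1))       # K=1 for regression
residual = y - raw_predictions.ravel()   # the MSE negative gradient
print(residual.shape)                    # (100,): m-dimensional, independent of n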
After computing the residuals for the task at hand, we fit a CART regression tree whose input is still X but whose target label is now the residual computed at each iteration (per class). Finally, let's look at the code that cumulatively updates raw_predictions, taking binary classification with the logistic sigmoid loss as an example:
1. For each sample, find the leaf node it falls into.
2. Use the leaf indices to gather the residuals of the samples in each leaf.
3. Update each leaf's value with sum(y - prob) / sum(prob * (1 - prob)), where y - prob is exactly the residual.
4. Multiply by the learning_rate and add the result to the previous predictions to get the updated raw_predictions.
Regression tasks skip step 3 above (for least squares, the leaf means fitted by the tree are already the optimal leaf values).
def update_terminal_regions(self, tree, X, y, residual, raw_predictions,
                            sample_weight, learning_rate=0.1, k=0):
    # step 1: map every sample to the leaf it falls into
    terminal_regions = tree.apply(X)
    # steps 2-3: re-estimate each leaf's value with the Newton step
    for leaf in np.where(tree.children_left == TREE_LEAF)[0]:
        self._update_terminal_region(tree, terminal_regions, leaf, X, y,
                                     residual, raw_predictions[:, k],
                                     sample_weight)
    # step 4: scale by the learning rate and accumulate into raw_predictions
    raw_predictions[:, k] += \
        learning_rate * tree.value[:, 0, 0].take(terminal_regions, axis=0)

def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
                            residual, raw_predictions, sample_weight):
    # step 2: gather the residuals of the samples falling into this leaf
    terminal_region = np.where(terminal_regions == leaf)[0]
    residual = residual.take(terminal_region, axis=0)
    y = y.take(terminal_region, axis=0)
    sample_weight = sample_weight.take(terminal_region, axis=0)
    # step 3: sum(y - prob) / sum(prob * (1 - prob)), with residual = y - prob
    numerator = np.sum(sample_weight * residual)
    denominator = np.sum(sample_weight * (y - residual) * (1 - y + residual))
    if abs(denominator) < 1e-150:
        tree.value[leaf, 0, 0] = 0.0
    else:
        tree.value[leaf, 0, 0] = numerator / denominator
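A toy reproduction of the leaf-value formula (my own sketch, with made-up numbers): since residual = y - prob, we have y - residual = prob and 1 - y + residual = 1 - prob, so numerator / denominator is exactly the Newton step sum(y - prob) / sum(prob * (1 - prob)):

import numpy as np

y = np.array([1., 0., 1., 1.])           # labels of the samples in one leaf
prob = np.array([0.6, 0.3, 0.8, 0.5])    # current predicted probabilities
residual = y - prob
sample_weight = np.ones_like(y)

numerator = np.sum(sample_weight * residual)
denominator = np.sum(sample_weight * (y - residual) * (1 - y + residual))
print(numerator / denominator)  # the new leaf value, before learning_rate scaling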
_fit_stages also tracks the cumulative loss; by setting n_iter_no_change we can let the model decide whether to keep training (i.e. early stopping). The final result is the N trained estimators, and prediction likewise flows from raw_predictions.
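For completeness, here is how early stopping is switched on from the public API (a sketch with synthetic data): a validation_fraction of the training data is held out, and training stops once the validation loss fails to improve by tol for n_iter_no_change consecutive rounds.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, random_state=0)
gbc = GradientBoostingClassifier(n_estimators=1000, n_iter_no_change=10,
                                 validation_fraction=0.1, tol=1e-4,
                                 random_state=0).fit(X, y)
print(gbc.n_estimators_)  # the number of stages actually fitted, usually << 1000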
The feature-importance computation in GBDT is shown below; it simply averages the feature importances over the decision trees of all boosting rounds, then normalizes.
def feature_importances_(self):
    # keep only trees that actually made at least one split
    relevant_trees = [tree
                      for stage in self.estimators_
                      for tree in stage
                      if tree.tree_.node_count > 1]
    if not relevant_trees:
        return np.zeros(shape=self.n_features_, dtype=np.float64)
    relevant_feature_importances = [
        tree.tree_.compute_feature_importances(normalize=False)
        for tree in relevant_trees
    ]
    # average over all trees, then normalize so the importances sum to 1
    avg_feature_importances = np.mean(relevant_feature_importances,
                                      axis=0, dtype=np.float64)
    return avg_feature_importances / np.sum(avg_feature_importances)
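On a fitted model this is exposed as the feature_importances_ property. A quick usage sketch (synthetic data assumed):

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=6, n_informative=3, random_state=0)
gbr = GradientBoostingRegressor(random_state=0).fit(X, y)
print(gbr.feature_importances_)        # one score per feature, normalized
print(gbr.feature_importances_.sum())  # 1.0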
This article walked through GBDT's entire model-building process via the sklearn source code. Once you are familiar with the theory behind GBDT, reading the implementation deepens your understanding of the whole picture. If you spot any mistakes in this article, please do not hesitate to point them out, thank you.