Classification tree in sklearn giving inconsistent answers

忘了有多久 2021-02-09 12:44

I am using a classification tree from sklearn, and when I train the model twice on the same data and then predict with the same test data, I get different results each time. How can I make the results consistent?
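
A stripped-down sketch of what I am doing (the iris data here is just a placeholder, not my actual data):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder data; my real data set is different.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

    # Train twice on exactly the same data, predict on the same test set.
    pred_a = DecisionTreeClassifier().fit(X_train, y_train).predict(X_test)
    pred_b = DecisionTreeClassifier().fit(X_train, y_train).predict(X_test)

    print((pred_a == pred_b).all())  # not always True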

4 Answers
  • 2021-02-09 13:08

    I don't know anything about sklearn but...

    I guess DecisionTreeClassifier has some internal state, created by fit, which only gets updated/extended.

    You should create a new one?

  • 2021-02-09 13:17

    The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and max_features=n_features, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed.

    Source: http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier#Notes
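
    A minimal sketch of what that looks like in practice (iris is just a stand-in data set, not the asker's data):

        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)

        # With random_state fixed, fitting is deterministic: two fits on the
        # same data produce the same tree and the same predictions.
        clf_a = DecisionTreeClassifier(random_state=0).fit(X, y)
        clf_b = DecisionTreeClassifier(random_state=0).fit(X, y)

        print((clf_a.predict(X) == clf_b.predict(X)).all())  # True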

  • 2021-02-09 13:22

    The DecisionTreeClassifier works by repeatedly splitting the training data, based on the value of some feature. The Scikit-learn implementation lets you choose between a few splitting algorithms by providing a value to the splitter keyword argument.

    • "best" randomly chooses a feature and finds the 'best' possible split for it, according to some criterion (which you can also choose; see the methods signature and the criterion argument). It looks like the code does this N_feature times, so it's actually quite like a bootstrap.

    • "random" chooses the feature to consider at random, as above. However, it also then tests randomly-generated thresholds on that feature (random, subject to the constraint that it's between its minimum and maximum values). This may help avoid 'quantization' errors on the tree where the threshold is strongly influenced by the exact values in the training data.

    Both of these randomization methods can improve the trees' performance. There are some relevant experimental results in Liu, Ting, and Fan's (2005) KDD paper.

    If you absolutely must have an identical tree every time, then I'd re-use the same random_state. Otherwise, I'd expect the trees to end up more or less equivalent every time and, in the absence of a ton of held-out data, I'm not sure how you'd decide which random tree is best.

    See also: Source code for the splitter
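
    A short sketch of the splitter keyword described above (again, iris is only a stand-in data set):

        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)

        # splitter="best" searches for the best threshold on each candidate feature;
        # splitter="random" also draws the threshold at random for each feature.
        best_tree = DecisionTreeClassifier(splitter="best", random_state=42).fit(X, y)
        random_tree = DecisionTreeClassifier(splitter="random", random_state=42).fit(X, y)

        print(best_tree.get_depth(), random_tree.get_depth())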

  • 2021-02-09 13:22

    The answer provided by Matt Krause is not entirely correct.

    The reason for the observed behaviour in scikit-learn's DecisionTreeClassifier is explained in this issue on GitHub.

    When using the default settings, all features are considered at each split. This is governed by the max_features parameter, which specifies how many features should be considered at each split. At each node, the classifier randomly samples max_features without replacement (!).

    Thus, when using max_features=n_features, all features are considered at each split. However, the implementation will still sample them at random from the list of features (even though this means all features will be sampled, in this case). Thus, the order in which the features are considered is pseudo-random. If two possible splits are tied, the first one encountered will be used as the best split.

    This is exactly the reason why your decision tree is yielding different results each time you call it: the order of features considered is randomized at each node, and when two possible splits are then tied, the split to use will depend on which one was considered first.

    As has been said before, the seed used for the randomization can be specified using the random_state parameter.
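
    If you want to convince yourself that fixing the seed really does make the fitted trees identical, you can compare the learned split structure directly; here is a small sketch (iris is only a stand-in data set):

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True)

        # With the same random_state the features are sampled in the same
        # pseudo-random order at each node, so ties break identically and
        # the two trees match node for node.
        tree_a = DecisionTreeClassifier(random_state=0).fit(X, y).tree_
        tree_b = DecisionTreeClassifier(random_state=0).fit(X, y).tree_

        print(np.array_equal(tree_a.feature, tree_b.feature))      # True
        print(np.array_equal(tree_a.threshold, tree_b.threshold))  # True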
