Question
What's the best way to handle missing feature attribute values with Weka's C4.5 (J48) decision tree? The problem of missing values occurs during both training and classification.
If values are missing from training instances, am I correct in assuming that I place a '?' value for the feature?
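For reference, this is what a missing value looks like in an ARFF file: a bare `?` in the data row. A minimal sketch (the relation and attribute names are made up):

```
% Toy ARFF sketch: '?' marks a missing value
% (relation and attribute names are illustrative)
@relation weather

@attribute outlook {sunny, overcast, rainy}
@attribute temperature numeric
@attribute play {yes, no}

@data
sunny,85,no
?,72,yes
overcast,?,yes
```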
Suppose that I successfully build the decision tree and then re-implement it in my own C++ or Java code from Weka's tree structure. At classification time, if I am trying to classify a new instance, what value do I put for features that are missing? How would I descend the tree past a decision node whose attribute value is unknown?
Would using Naive Bayes be better for handling missing values? I would just assign a very small non-zero probability for them, right?
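For concreteness, the "small non-zero probability" idea amounts to count smoothing. A toy Laplace (add-one) smoothing sketch, with made-up counts (note that a common alternative, and I believe what Weka's NaiveBayes does, is simply to skip missing attributes in the likelihood product):

```java
// Toy sketch of Laplace (add-one) smoothing for a Naive Bayes conditional
// probability, so no attribute value ends up with exactly zero probability.
// The counts in main() are made up for illustration.
public final class SmoothedConditional {

    // P(value | class) with add-one smoothing:
    // (count(value, class) + 1) / (count(class) + numDistinctValues)
    static double probability(int countValueAndClass, int countClass,
                              int numDistinctValues) {
        return (countValueAndClass + 1.0) / (countClass + numDistinctValues);
    }

    public static void main(String[] args) {
        // A (value, class) pair never seen in training still gets a small
        // non-zero probability instead of zeroing out the whole product:
        System.out.println(probability(0, 40, 3)); // 1/43 ~= 0.023
    }
}
```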
Answer 1:
From Pedro Domingos' ML course at the University of Washington, here are the three approaches he suggests for a missing value of attribute `A`:

- Assign the most common value of `A` among the other examples sorted to node `n`.
- Assign the most common value of `A` among the other examples with the same target value.
- Assign probability `p_i` to each possible value `v_i` of `A`, and send fraction `p_i` of the example down each descendant branch of the tree (see the sketch after this list).
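A minimal sketch of the third (fractional) approach, which is in the spirit of what C4.5 itself does. The node layout and field names below are assumptions for illustration, not Weka's actual structures:

```java
import java.util.Map;

// Sketch of the fractional approach: when the split attribute is missing,
// send the instance down every branch weighted by the fraction p_i of
// training examples that followed that branch, then sum the resulting
// class distributions. Node layout and field names are hypothetical.
final class TreeNode {
    int splitAttribute = -1;              // -1 marks a leaf
    Map<String, TreeNode> children;       // branch value -> child node
    Map<String, Double> branchFraction;   // p_i per branch, recorded at training
    double[] leafDistribution;            // class distribution at a leaf

    double[] classify(String[] instance, int numClasses) {
        if (splitAttribute < 0) {
            return leafDistribution;
        }
        String value = instance[splitAttribute];
        if (value != null && !"?".equals(value) && children.containsKey(value)) {
            // Known value: descend the single matching branch.
            return children.get(value).classify(instance, numClasses);
        }
        // Missing (or unseen) value: descend all branches, weighted by p_i.
        double[] dist = new double[numClasses];
        for (Map.Entry<String, TreeNode> e : children.entrySet()) {
            double p = branchFraction.get(e.getKey());
            double[] childDist = e.getValue().classify(instance, numClasses);
            for (int c = 0; c < numClasses; c++) {
                dist[c] += p * childDist[c];
            }
        }
        return dist;
    }
}
```

The branch fractions `p_i` would be recorded during training as the share of (non-missing) training instances sent down each branch.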
The slides and video are now viewable here.
Answer 2:
An alternative approach is to leave the missing value as '?' and simply exclude it from the information gain calculation, so no split statistic ever depends on an unknown value. For classification, I believe you simply treat the value as unknown for that specific attribute rather than discarding the instance.
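To illustrate, a rough sketch of an information gain computation that skips '?' values. The `String[][]` data layout is an assumption, and note that real C4.5 additionally scales the gain by the fraction of instances whose value for the attribute is known:

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of an information gain computation that ignores '?' values.
// Each row of data is one instance; the last column is the class label.
final class GainWithMissing {

    static double entropy(Map<String, Integer> classCounts, int total) {
        double h = 0.0;
        for (int count : classCounts.values()) {
            double p = (double) count / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    static double infoGain(String[][] data, int attr) {
        int classCol = data[0].length - 1;
        Map<String, Integer> classCounts = new HashMap<>();
        Map<String, Map<String, Integer>> byValue = new HashMap<>();
        int known = 0;
        for (String[] row : data) {
            if ("?".equals(row[attr])) continue;   // skip missing values entirely
            known++;
            classCounts.merge(row[classCol], 1, Integer::sum);
            byValue.computeIfAbsent(row[attr], v -> new HashMap<>())
                   .merge(row[classCol], 1, Integer::sum);
        }
        // Gain = entropy over known instances minus weighted subset entropies.
        double gain = entropy(classCounts, known);
        for (Map<String, Integer> subset : byValue.values()) {
            int subsetTotal = 0;
            for (int c : subset.values()) subsetTotal += c;
            gain -= ((double) subsetTotal / known) * entropy(subset, subsetTotal);
        }
        return gain;
    }
}
```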
Source: https://stackoverflow.com/questions/13425722/how-to-deal-with-missing-attribute-values-in-c4-5-j48-decision-tree