I am doing multi-label classification where I am trying to predict correct tags to questions:
(X = questions, y = list of tags for each question from X).
I am wondering whether it matters which decision_function_shape ('ovr' or 'ovo') I set on the SVC when it is wrapped in OneVsRestClassifier, and which of the two I should use.
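To make the setup concrete, here is a minimal sketch of the kind of pipeline I mean (the question text, tags, and variable names below are made up purely for illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

questions = ["how do I merge two dicts in python", "how do I center a div"]
tags = [["python", "dictionary"], ["css", "html"]]

X = TfidfVectorizer().fit_transform(questions)   # text -> sparse tf-idf features
Y = MultiLabelBinarizer().fit_transform(tags)    # tag lists -> binary indicator matrix
clf = OneVsRestClassifier(SVC(kernel='linear'))  # one binary SVC per tag
clf.fit(X, Y)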
I think which one to use is largely situational, and it could easily be part of your GridSearch. But intuitively I would expect that, as far as this choice goes, you end up doing the same thing either way. Here is my reasoning:
OneVsRestClassifier is designed to model each class against all of the other classes independently, creating one classifier per class. The way I understand this process is that OneVsRestClassifier grabs a class and creates a binary label for whether a point is or isn't that class. Then that labelling gets fed into whatever base estimator you have chosen. I believe the confusion comes in because SVC also lets you make this same ovr/ovo choice via decision_function_shape, but with this setup the choice will not matter, because you will only ever be feeding two classes (the class and "everything else") into each SVC.
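As a rough sketch of what that looks like under the hood (a simplification for illustration, not the actual scikit-learn implementation, which binarizes the labels and can fit the estimators in parallel):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

def manual_one_vs_rest(X, y):
    estimators = {}
    for cls in np.unique(y):
        binary_y = (y == cls).astype(int)  # 1 if the sample belongs to this class, else 0
        est = SVC(kernel='linear')         # the base estimator only ever sees two labels
        est.fit(X, binary_y)
        estimators[cls] = est
    return estimators

data = load_iris()
ests = manual_one_vs_rest(data.data, data.target)  # three binary SVCs, one per iris class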
And here is an example:
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

data = load_iris()
X, y = data.data, data.target

# Same wrapper; only the inner SVC's decision_function_shape differs
estim1 = OneVsRestClassifier(SVC(kernel='linear', decision_function_shape='ovo'))
estim1.fit(X, y)

estim2 = OneVsRestClassifier(SVC(kernel='linear', decision_function_shape='ovr'))
estim2.fit(X, y)

# coef_ stacks the coefficients of the three per-class binary SVCs
print(estim1.coef_ == estim2.coef_)
[[ True  True  True  True]
 [ True  True  True  True]
 [ True  True  True  True]]
So you can see the coefficients of all three per-class estimators are identical between the two models. Granted, this dataset only has 150 samples and 3 classes, so these results could conceivably differ on a more complex dataset, but it's a simple proof of concept.
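If you want a further sanity check along the same lines (assuming the estim1, estim2 and X objects from the snippet above are still around), you can also confirm the two wrappers predict identically:

import numpy as np
print(np.array_equal(estim1.predict(X), estim2.predict(X)))  # True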