Custom kernels for SVM, when to apply them?

上瘾入骨i · 2021-02-06 11:08

I am new to the machine learning field and am currently trying to get a grasp of how the most common learning algorithms work and to understand when to apply each of them. At the mome…

1 Answer
  • 2021-02-06 11:59

    1) What are other possible kernels for SVMs?

    There are infinitely many of these; see, for example, the list of kernels implemented in pykernels (which is far from exhaustive). A short usage sketch with scikit-learn follows the list.

    https://github.com/gmum/pykernels

    • Linear
    • Polynomial
    • RBF
    • Cosine similarity
    • Exponential
    • Laplacian
    • Rational quadratic
    • Inverse multiquadratic
    • Cauchy
    • T-Student
    • ANOVA
    • Additive Chi^2
    • Chi^2
    • MinMax
    • Min/Histogram intersection
    • Generalized histogram intersection
    • Spline
    • Sorensen
    • Tanimoto
    • Wavelet
    • Fourier
    • Log (CPD)
    • Power (CPD)
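
    For concreteness, here is a minimal sketch (not part of the original answer) of how one of these kernels, a hand-rolled Laplacian kernel, can be passed to scikit-learn's SVC as a custom callable. The toy dataset and the gamma value are illustrative assumptions:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        def laplacian_kernel(X, Y, gamma=0.1):
            # Laplacian kernel K(x, y) = exp(-gamma * ||x - y||_1);
            # SVC expects the callable to return the (len(X), len(Y)) Gram matrix
            dists = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
            return np.exp(-gamma * dists)

        X, y = make_classification(n_samples=300, n_features=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = SVC(kernel=laplacian_kernel)  # any callable with this signature works
        clf.fit(X_tr, y_tr)
        print("Laplacian-kernel SVC accuracy:", clf.score(X_te, y_te))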

    2) In which situations would one apply custom kernels?

    Basically in two cases:

    • "simple" ones give very bad results
    • data is specific in some sense and so - in order to apply traditional kernels one has to degenerate it. For example if your data is in a graph format, you cannot apply RBF kernel, as graph is not a constant-size vector, thus you need a graph kernel to work with this object without some kind of information-loosing projection. also sometimes you have an insight into data, you know about some underlying structure, which might help classifier. One such example is a periodicity, you know that there is a kind of recuring effect in your data - then it might be worth looking for a specific kernel etc.
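
    To make the periodicity case concrete, here is a hedged sketch using an exp-sine-squared ("periodic") kernel as a custom callable for SVC. The period, length scale, and toy data are assumptions, chosen so the label pattern genuinely repeats with period 1:

        import numpy as np
        from sklearn.svm import SVC

        def periodic_kernel(X, Y, period=1.0, length_scale=0.5):
            # exp-sine-squared kernel: encodes the prior that the target repeats
            # every `period`; X and Y are (n_samples, 1) arrays
            d = np.abs(X[:, None, 0] - Y[None, :, 0])
            return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length_scale ** 2)

        rng = np.random.RandomState(0)
        X = rng.uniform(0, 5, size=(400, 1))
        y = (np.sin(2 * np.pi * X[:, 0]) > 0).astype(int)  # label repeats every 1.0

        clf = SVC(kernel=periodic_kernel).fit(X[:300], y[:300])
        print("Periodic-kernel SVC accuracy:", clf.score(X[300:], y[300:]))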

    3) Can custom kernel substantially improve prediction quality of SVM?

    Yes, in particular there always exists a (hypothetical) Bayes-optimal kernel, defined as:

    K(x, y) = 1  iff  argmax_l P(l|x) == argmax_l P(l|y),  and 0 otherwise
    

    in other words, if one had the true probability P(l|x) of label l being assigned to a point x, then one could create a kernel which essentially maps data points onto one-hot encodings of their most probable labels, leading to Bayes-optimal classification (as it attains the Bayes risk).
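
    As a toy illustration only (possible here because the data is synthetic and P(l|x) is known by construction: two equal-prior, unit-variance Gaussians), the Bayes-optimal kernel can be written down explicitly and plugged into SVC; the helper names below are hypothetical:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.RandomState(0)
        X = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])[:, None]
        y = np.array([0] * 200 + [1] * 200)

        def bayes_label(A):
            # argmax_l P(l|x) for two equal-prior unit-variance Gaussians at -2 and +2
            return (A[:, 0] > 0).astype(int)

        def bayes_optimal_kernel(A, B):
            # K(x, y) = 1 iff the most probable labels of x and y agree, else 0
            return (bayes_label(A)[:, None] == bayes_label(B)[None, :]).astype(float)

        clf = SVC(kernel=bayes_optimal_kernel).fit(X, y)
        # accuracy matches the Bayes rate, not 100%, because some points truly fall
        # on the "wrong" side of the optimal decision boundary at 0
        print("Training accuracy with the Bayes-optimal kernel:", clf.score(X, y))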

    In practice it is of course impossible to obtain such a kernel, since having it would mean you had already solved your problem. However, it shows that there is a notion of an "optimal kernel", and obviously none of the classical ones is of this type (unless your data comes from very simple distributions). Furthermore, each kernel is a kind of prior over decision functions: the closer your induced family of functions gets to the actual one, the more likely you are to obtain a reasonable classifier with the SVM.
