Combining Weak Learners into a Strong Classifier

暗喜 2021-02-09 13:37

How do I combine a few weak learners into a strong classifier? I know the formula, but the problem is that in every paper about AdaBoost that I've read there are only formulas wi

2 Answers
  •  甜味超标
    2021-02-09 14:10

    In vanilla AdaBoost, the training loop doesn't ask you to weight the learners by hand. Instead, you increase the weight of the misclassified data points (the final vote does then weight each learner by a coefficient derived from its error). Imagine that you have an array such as [1..1000], and you want to use neural networks to estimate which numbers are primes. (A silly example, but it suffices for demonstration.)

    Imagine that you have a class NeuralNet. You instantiate the first one, n1 = NeuralNet.new. Then you have the training set: the numbers 1 to 1000, each labeled prime or non-prime. (You need to make up some feature set for a number, such as its digits.) Then you train n1 to recognize primes on your training set.

    Let's imagine that n1 is weak, so after the training period ends it won't be able to correctly classify every number in 1..1000 as prime or non-prime. Say n1 incorrectly claims that 27 is prime and that 113 is non-prime, and makes some other mistakes. What do you do? You instantiate another NeuralNet, n2, and increase the weight of 27, 113 and the other mistaken numbers, say from 1 to 1.5, while decreasing the weight of the correctly classified numbers from 1 to 0.667. Then you train n2.

    After training, you'll find that n2 has corrected most of n1's mistakes, including 27, but 113 is still misclassified. So you instantiate n3, increase the weight of 113 to 2, decrease the weight of 27 and the other newly corrected numbers to 1, and decrease the weight of the numbers that were already classified correctly to 0.5. And so on...
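    The loop described above can be sketched in Python. This is a toy, not the answer's exact setup: I stand in decision stumps (a threshold on the raw number) for the weak neural nets, shrink the range to 1..30, and pick 10 boosting rounds; the exponential weight update and the α-weighted final vote are the standard vanilla-AdaBoost formulas:

    ```python
    import math

    # Toy data: label numbers 1..30 as prime (+1) or non-prime (-1).
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    X = list(range(1, 31))
    y = [1 if is_prime(x) else -1 for x in X]

    def train_stump(X, y, w):
        # A deliberately weak learner: pick the threshold/polarity pair
        # with the lowest *weighted* error on the current weights.
        best = None
        for thr in X:
            for pol in (1, -1):
                preds = [pol if x < thr else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        return best

    T = 10                               # assumed number of boosting rounds
    w = [1.0 / len(X)] * len(X)          # start with uniform example weights
    learners = []
    for _ in range(T):
        err, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)            # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        learners.append((alpha, thr, pol))
        # The reweighting step from the answer: misclassified points get
        # heavier, correctly classified points get lighter, then normalize.
        w = [wi * math.exp(-alpha * yi * (pol if x < thr else -pol))
             for wi, x, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]

    def strong_classify(x):
        # Final strong classifier: sign of the alpha-weighted vote.
        score = sum(a * (pol if x < thr else -pol) for a, thr, pol in learners)
        return 1 if score >= 0 else -1
    ```

    Note that the α in the final vote is where learner weights do appear in vanilla AdaBoost: a round whose stump had low weighted error gets a larger say, even though during training you only ever reweight the examples.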

    Am I being concrete enough?
