I implemented AdaBoost for a project, but I'm not sure if I've understood AdaBoost correctly. Here's what I implemented; please let me know if it is a correct interpretation.
Boosting summary:
1- Train your first weak classifier on the training data, with all samples weighted equally.
2- The 1st trained classifier makes mistakes on some samples and correctly classifies the others. Increase the weights of the wrongly classified samples and decrease the weights of the correctly classified ones. Retrain your classifier with these weights to get your 2nd classifier.
In your case, you first have to resample with replacement from your data according to these updated weights, create a new training set, and then train your classifier on this new data.
3- Repeat the 2nd step T times. At the end of each round, calculate the weight alpha for that round's classifier, $\alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}$, where $\epsilon_t$ is its weighted error (see the sketch after this list).
4- The final classifier is the alpha-weighted sum of the decisions of the T classifiers.
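To make the loop concrete, here is a minimal sketch of the resampling variant described above. It assumes NumPy arrays, binary labels in {-1, +1}, and scikit-learn decision stumps as stand-in weak learners; you would plug your NN training routine in at the same place.

```python
# Minimal sketch of AdaBoost with resampling (assumptions: X, y are NumPy
# arrays, y has labels in {-1, +1}, weak learner = decision stump).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, T=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # step 1: uniform sample weights
    learners, alphas = [], []
    for _ in range(T):
        # step 2: resample the training set with replacement according to w,
        # then train the next weak classifier on the resampled data
        idx = rng.choice(n, size=n, replace=True, p=w)
        h = DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx])

        pred = h.predict(X)
        eps = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)  # weighted error
        alpha = 0.5 * np.log((1 - eps) / eps)    # step 3: classifier weight

        # up-weight the misclassified samples, down-weight the rest, renormalise
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()

        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    # step 4: sign of the alpha-weighted sum of the T classifiers' decisions
    score = sum(a * h.predict(X) for a, h in zip(alphas, learners))
    return np.sign(score)
```

The part relevant to your implementation is the resampling line: each new classifier is fit on X[idx], y[idx] drawn according to the current weights, not on the original X, y.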
It is hopefully clear from this explanation that you have done it a bit wrongly. Instead of retraining your networks on the resampled (reweighted) data sets, you trained them all on the original dataset. In effect you are building a random-forest-style ensemble (except that you are using NNs instead of decision trees), not a boosted one.
PS: There is no guarantee that boosting increases accuracy. In fact, all the boosting methods that I'm aware of have so far been unsuccessful at improving accuracy with NNs as weak learners (the reason lies in how boosting works and needs a lengthier discussion).