multi-adaboost
discrete_boost/learning_rate
When the learning rate is set to a value greater than 1, the AdaBoost accuracy of the 'SAMME' algorithm drops compared to the original 'sklearn.ensemble.AdaBoostClassifier'. I found the cause in the 'discrete_boost' function; the equation should be revised as follows:

estimator_weight = self.learning_rate_ * (np.log((1. - estimator_error) / estimator_error) + np.log(self.n_classes_ - 1.))
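For reference, a minimal standalone sketch of the revised weight computation (the helper name `samme_estimator_weight` is hypothetical; `estimator_error`, `n_classes`, and `learning_rate` mirror the attributes used in 'discrete_boost'):

```python
import numpy as np

def samme_estimator_weight(estimator_error, n_classes, learning_rate):
    # SAMME estimator weight, with the learning rate multiplying the
    # whole expression, including the multi-class log(K - 1) term.
    return learning_rate * (
        np.log((1. - estimator_error) / estimator_error)
        + np.log(n_classes - 1.)
    )

# Example: weighted error 0.3, 3 classes, learning rate 1.5
print(samme_estimator_weight(0.3, 3, 1.5))
```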
Do you mean that 1 and 1. produce different results when the learning rate is greater than 1? In my test there is no difference, so could you please provide more details? By the way, do you use Python 2 or Python 3?
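For anyone checking this, a quick sketch comparing the two spellings directly (the sample values are arbitrary; as long as `estimator_error` is a float, the integer literals are promoted and both forms should agree on Python 2 and Python 3):

```python
import numpy as np

estimator_error = 0.3
n_classes = 3
learning_rate = 1.5

# Spelling with integer literals...
w_int = learning_rate * (np.log((1 - estimator_error) / estimator_error)
                         + np.log(n_classes - 1))
# ...and with float literals.
w_float = learning_rate * (np.log((1. - estimator_error) / estimator_error)
                           + np.log(n_classes - 1.))

print(np.isclose(w_int, w_float))  # True
```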