shamebear (shamebear) wrote in ai_research,

Ensemble methods

Ensemble methods, especially bagging and boosting, are well established (for an introduction see here). But the papers on them give the impression that at least bagging (running several predictors or classifiers in parallel and, e.g., taking their average) is not completely understood.
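
To fix ideas, here is a minimal sketch of bagging for regression, assuming numpy; the base learner (a degree-5 polynomial fit) and the toy data are arbitrary choices for illustration, not taken from any paper:

    # A minimal sketch of bagging for regression, assuming numpy.
    # The base learner (a degree-5 polynomial fit) and the toy data
    # are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def bagged_predict(x_train, y_train, x_test, n_models=50):
        """Average the predictions of base learners fit on bootstrap resamples."""
        preds = []
        n = len(x_train)
        for _ in range(n_models):
            idx = rng.integers(0, n, size=n)        # bootstrap resample
            coefs = np.polyfit(x_train[idx], y_train[idx], deg=5)
            preds.append(np.polyval(coefs, x_test))
        return np.mean(preds, axis=0)               # the bagged (averaged) prediction

    # toy data: a noisy sine curve
    x = rng.uniform(-3, 3, size=100)
    y = np.sin(x) + rng.normal(scale=0.3, size=100)
    print(bagged_predict(x, y, np.array([0.0, 1.0])))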

Most papers agree that variance is reduced provided the classifiers are sufficiently "diverse", but how this ties in with the bias-variance decomposition, or even with mean squared error, is unclear. The paper "The Effect of Bagging on Variance, Bias, and Mean Squared Error" by Andreas Buja and Werner Stuetzle offers some leads, but I have found no definitive account of these issues.
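
For what it's worth, the pointwise decomposition MSE = bias^2 + variance + noise can at least be checked empirically. Below is a rough sketch (again assuming numpy, with an illustrative learner and data model, nothing from the paper) that estimates bias^2 and variance at a single test point for a lone learner versus its bagged version, over repeated draws of the training set:

    # Empirical estimate of bias^2 and variance at one test point, comparing
    # a single base learner with its bagged version. All settings (true
    # function, noise level, learner) are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    x0, f0 = np.array([1.0]), np.sin(1.0)   # test point and true value there
    n_trials, n_train, sigma = 500, 100, 0.3

    def single(x, y, x_test):
        return np.polyval(np.polyfit(x, y, deg=5), x_test)

    def bagged(x, y, x_test, n_models=50):
        n = len(x)
        resamples = (rng.integers(0, n, size=n) for _ in range(n_models))
        return np.mean([single(x[i], y[i], x_test) for i in resamples], axis=0)

    for name, predictor in [("single", single), ("bagged", bagged)]:
        preds = []
        for _ in range(n_trials):                   # fresh training set each trial
            x = rng.uniform(-3, 3, size=n_train)
            y = np.sin(x) + rng.normal(scale=sigma, size=n_train)
            preds.append(predictor(x, y, x0)[0])
        preds = np.array(preds)
        bias2, var = (preds.mean() - f0) ** 2, preds.var()
        print(f"{name}: bias^2={bias2:.4f}  variance={var:.4f}  "
              f"bias^2+var={bias2 + var:.4f}")

If the base learner is unstable enough, the bagged row typically shows lower variance with roughly unchanged bias, which is the qualitative claim at issue, though a toy run like this obviously proves nothing in general.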

Do rigorous results on how bagging affects bias, variance, and MSE exist, or is the evidence mostly empirical?