Dhaka University Journal of Applied Science & Engineering

Issue: Vol. 7, No. 1, January 2022
Title: Evolution of Random Forest from Decision Tree and Bagging: A Bias-Variance Perspective
Authors:
  • Muhammad Ibrahim
    Department of Computer Science and Engineering, University of Dhaka, Dhaka-1000, Bangladesh
DOI:
Keywords: Supervised machine learning, Ensemble learning algorithms, Decision tree, Bagging, Random forests, Bias-variance tradeoff, Correlation.
Abstract:

Ensemble methods are among the most heavily used techniques in machine learning, and the random forest arguably spearheads this army of learners. Having sprung from the decision tree in the late 1990s, the random forest offers benefits that have rightfully attracted practitioners, who apply this powerful yet simple-to-understand technique widely and successfully in numerous applications. In this study we explain the evolution of the random forest from the decision tree in the context of the bias and variance of learning theory. While doing so, we focus on the interplay between the correlation among the individual trees and the generalization error of the random forest. This analysis is expected to enrich the literature on random forests by providing further insight into their working mechanism. These insights should help practitioners of the random forest apply the algorithm more wisely and in an informed way.
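
To make the bias-variance and correlation discussion above concrete, here is a minimal illustrative sketch (not taken from the paper) that contrasts a single decision tree, a bagged ensemble of trees, and a random forest on a synthetic regression task using scikit-learn; the dataset, ensemble size, and other hyperparameters are arbitrary assumptions chosen only to make the variance-reduction effect visible.

```python
# Illustrative only: compare a single decision tree, bagging, and a random
# forest on a synthetic regression task. Bagging reduces variance by averaging
# trees grown on bootstrap samples; the random forest additionally subsamples
# features at each split, which lowers the correlation among the trees.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=2000, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "single decision tree": DecisionTreeRegressor(random_state=0),
    "bagged trees": BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=100, max_features="sqrt", random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    mse = np.mean((model.predict(X_test) - y_test) ** 2)  # test-set mean squared error
    print(f"{name}: test MSE = {mse:.3f}")
```

On typical runs the two ensembles show a markedly lower test error than the single tree, and the random forest usually edges out plain bagging, consistent with the idea that reducing the correlation among trees reduces the generalization error of the ensemble.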

References:
  1. S. Bernard, L. Heutte, and S. Adam. Towards a better understanding of random forests through the study of strength and correlation. In Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence, pages 536–545. Springer, 2009.
  2. L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
  3. L. Breiman. Stacked regressions. Machine Learning, 24(1):49–64, 1996.
  4. L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
  5. L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and regression trees. CRC Press, 1984.
  6. C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
  7. A. Criminisi. Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Foundations and Trends® in Computer Graphics and Vision, 7(2-3):81–227, 2011.
  8. P. Domingos. A unified bias-variance decomposition. In Proceedings of the 17th International Conference on Machine Learning (ICML), pages 231–238. Morgan Kaufmann, 2000.
  9. B. Efron and R. J. Tibshirani. An introduction to the bootstrap. CRC Press, 1994.
  10. K. Fawagreh, M. M. Gaber, and E. Elyan. Random forests: from early developments to recent advancements. Systems Science & Control Engineering: An Open Access Journal, 2(1):602–609, 2014.
  11. Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23–37. Springer, 1995.
  12. J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189–1232, 2001.
  13. J. H. Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378, 2002.
  14. J. H. Friedman and P. Hall. On bagging and nonlinear estimation. Journal of Statistical Planning and Inference, 137(3):669–683, 2007.
  15. S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992.
  16. P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Machine Learning, 63(1):3–42, 2006.
  17. T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning, 2009.
  18. T. K. Ho. The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):832–844, 1998.
  19. M. Ibrahim. Reducing correlation of random forest-based learning-to-rank algorithms using subsample size. Computational Intelligence, 35(4):774–798, 2019.
  20. M. Ibrahim. An empirical comparison of random forest-based and other learning-to-rank algorithms. Pattern Analysis and Applications, 23(3):1133–1155, 2020.
  21. M. Ibrahim. Understanding bias and variance of learning-to-rank algorithms: An empirical framework. Applied Artificial Intelligence, pages 1–34, 2021.
  22. R. Kohavi, D. H. Wolpert, et al. Bias plus variance decomposition for zero-one loss functions. In International Conference on Machine Learning (ICML), pages 275–283, 1996.
  23. E. B. Kong and T. G. Dietterich. Error-correcting output coding corrects bias and variance. In International Conference on Machine Learning (ICML), pages 313–321, 1995.
  24. Y. Lin and Y. Jeon. Random forests and adaptive nearest neighbors. Journal of the American Statistical Association, 101(474):578–590, 2006.
  25. D. Opitz and R. Maclin. Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 11:169–198, 1999.
  26. E. Scornet. On the asymptotics of random forests. arXiv preprint arXiv:1409.2090, 2014.
  27. E. Scornet, G. Biau, and J. Vert. Consistency of random forests. arXiv preprint arXiv:1405.2881, 2014.
  28. M. R. Segal. Machine learning benchmarks and random forest regression. 2004.
  29. J. Shao et al. Impact of the bootstrap on sample surveys. Statistical Science, 18(2):191–198, 2003.
  30. V. Vapnik. The nature of statistical learning theory. Springer, 1999.
  31. S. Wager. Asymptotic theory for random forests. arXiv preprint arXiv:1405.0352, 2014.
  32. D. H. Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992.