References: |
-
S. Bernard, L. Heutte, and S. Adam. Towards a better understanding of random forests through the study of strength and correlation. In Emerging Intelligent Computing Technology and Applications. With Aspects of Artificial Intelligence, pages 536–545. Springer, 2009.
-
L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
-
L. Breiman. Stacked regressions. Machine Learning, 24(1):49–64, 1996.
-
L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
-
L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and regression trees. CRC Press, 1984.
-
C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
-
A. Criminisi. Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Foundations and Trends® in Computer Graphics and Vision, 7(2–3):81–227, 2011.
-
P. Domingos. A unified bias-variance decomposition. In Proceedings of the 17th International Conference on Machine Learning, pages 231–238. Morgan Kaufmann, Stanford, CA, 2000.
-
B. Efron and R. J. Tibshirani. An introduction to the bootstrap. CRC Press, 1994.
-
K. Fawagreh, M. M. Gaber, and E. Elyan. Random forests: from early developments to recent advancements. Systems Science & Control Engineering: An Open Access Journal, 2(1):602–609, 2014.
-
Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory, pages 23–37. Springer, 1995.
-
J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189–1232, 2001.
-
J. H. Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4):367–378, 2002.
-
J. H. Friedman and P. Hall. On bagging and nonlinear estimation. Journal of Statistical Planning and Inference, 137(3):669–683, 2007.
-
S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992.
-
P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Machine Learning, 63(1):3–42, 2006.
-
T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning. Springer, 2009.
-
T. K. Ho. The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(8):832–844, 1998.
-
M. Ibrahim. Reducing correlation of random forest–based learning-to-rank algorithms using subsample size. Computational Intelligence, 35(4):774–798, 2019.
-
M. Ibrahim. An empirical comparison of random forest-based and other learning-to-rank algorithms. Pattern Analysis and Applications, 23(3):1133–1155, 2020.
-
M. Ibrahim. Understanding bias and variance of learning-to-rank algorithms: An empirical framework. Applied Artificial Intelligence, pages 1–34, 2021.
-
R. Kohavi and D. H. Wolpert. Bias plus variance decomposition for zero-one loss functions. In International Conference on Machine Learning (ICML), pages 275–283, 1996.
-
E. B. Kong and T. G. Dietterich. Error-correcting output coding corrects bias and variance. In International Conference on Machine Learning (ICML), pages 313–321, 1995.
-
Y. Lin and Y. Jeon. Random forests and adaptive nearest neighbors. Journal of the American Statistical Association, 101(474):578–590, 2006.
-
D. Opitz and R. Maclin. Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 11:169–198, 1999.
-
E. Scornet. On the asymptotics of random forests. arXiv preprint arXiv:1409.2090, 2014.
-
E. Scornet, G. Biau, and J. Vert. Consistency of random forests. arXiv preprint arXiv:1405.2881, 2014.
-
M. R. Segal. Machine learning benchmarks and random forest regression. Technical report, Center for Bioinformatics and Molecular Biostatistics, University of California, San Francisco, 2004.
-
J. Shao. Impact of the bootstrap on sample surveys. Statistical Science, 18(2):191–198, 2003.
-
V. Vapnik. The nature of statistical learning theory. Springer, 1999.
-
S. Wager. Asymptotic theory for random forests. arXiv preprint arXiv:1405.0352, 2014.
-
D. H. Wolpert. Stacked generalization. Neural Networks, 5(2):241–259, 1992.
|