📄 Abstract
Model selection in non-linear models often prioritizes performance metrics
over statistical tests, limiting the ability to account for sampling
variability. We propose the use of a statistical test to assess the equality of
variances in forecasting errors. The test builds upon the classic Morgan-Pitman
approach, incorporating enhancements that ensure robustness to heavy-tailed
distributions and high-variance outliers, together with a strategy to
make residuals from machine learning models statistically independent. Through
a series of simulations and real-world data applications, we demonstrate the
test's effectiveness and practical utility, offering a reliable tool for model
evaluation and selection in diverse contexts.
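The classic Morgan-Pitman test mentioned above rests on a simple identity for paired samples: two error series have equal variances exactly when their sum and difference are uncorrelated. Below is a minimal sketch of that classic (non-robust) version using a standard Pearson correlation test; the robustness enhancements and the residual-independence strategy proposed in the paper are not shown, and the function name `morgan_pitman` is illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def morgan_pitman(e1, e2):
    """Classic Morgan-Pitman test of Var(e1) == Var(e2) for paired samples.

    Uses the identity Cov(e1 + e2, e1 - e2) = Var(e1) - Var(e2):
    the variances are equal iff the sum and difference are uncorrelated,
    so we test that correlation with an ordinary Pearson test.
    """
    s = e1 + e2  # paired sums
    d = e1 - e2  # paired differences
    r, p = pearsonr(s, d)
    return r, p

# Illustrative use: residuals of two forecasting models, where model B
# has clearly larger error variance than model A.
rng = np.random.default_rng(0)
n = 500
e_a = rng.normal(0.0, 1.0, n)  # model A residuals
e_b = rng.normal(0.0, 2.0, n)  # model B residuals (larger variance)
r, p = morgan_pitman(e_a, e_b)
```

With a genuine variance gap this large, the correlation between sum and difference is strongly negative and the test rejects equality; the paper's contribution is to keep this logic reliable when the errors are heavy-tailed, contaminated by outliers, or serially dependent.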