Abstract: High-dimensional measurements are often correlated, which motivates their
approximation by factor models. This also holds true when features are
engineered via low-dimensional interactions or kernel tricks, a practice that
often results in overparametrization and calls for fast dimensionality reduction.
We propose a simple technique to enhance the performance of supervised learning
algorithms by augmenting features with factors extracted from design matrices
and their transformations. Concretely, we use the factors together with the
idiosyncratic residuals, which significantly weakens the correlations between
input variables and hence improves both the interpretability and the numerical
stability of learning algorithms. Extensive experiments are carried out on
various algorithms and real-world data from diverse fields, with special
emphasis on the stock return prediction problem with Chinese financial news
data, given the growing interest in NLP problems in financial studies. We
verify that the proposed feature augmentation boosts the prediction performance
of the same underlying algorithm. The approach bridges a gap overlooked in
previous studies, which focus either on collecting additional data or on
constructing more powerful algorithms; our method lies between these two
directions, using a simple PCA augmentation.
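
The abstract describes decomposing a design matrix into PCA factors and idiosyncratic residuals and feeding both to a learner. Below is a minimal sketch of that idea, assuming scikit-learn's PCA; the function name, the choice of `n_factors`, and the hypothetical `model` are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_augment(X, n_factors=5):
    """Map correlated inputs to PCA factors plus the idiosyncratic
    residuals left after removing those factors.

    X: (n_samples, n_features) design matrix.
    Returns an (n_samples, n_factors + n_features) array whose two
    blocks (factors, residuals) are far less correlated than X itself.
    """
    pca = PCA(n_components=n_factors)
    factors = pca.fit_transform(X)            # common (low-rank) components
    common = pca.inverse_transform(factors)   # low-rank reconstruction of X
    residuals = X - common                    # idiosyncratic part
    return np.hstack([factors, residuals])

# Usage sketch: augment the features, then train any supervised learner.
# X_aug = pca_augment(X_train, n_factors=10)
# model.fit(X_aug, y_train)
```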