📄 Abstract
The accurate and efficient recognition of emotional states in oneself and
others is critical, as impairments in this ability can lead to significant
psychosocial difficulties. While electroencephalography (EEG) offers a powerful
tool for emotion detection, current EEG-based emotion recognition (EER) methods
face key limitations: insufficient model stability, limited accuracy in
processing high-dimensional nonlinear EEG signals, and poor robustness against
intra-subject variability and signal noise. To address these challenges, we
introduce LEL (Lipschitz continuity-constrained Ensemble Learning), a novel
framework that enhances EEG-based emotion recognition. By integrating Lipschitz
continuity constraints, LEL ensures greater model stability and improves
generalization, thereby reducing sensitivity to signal variability and noise
while significantly boosting the model's accuracy and robustness. Its
ensemble learning strategy fuses decisions from multiple classifiers,
reducing single-model bias and variance.
Experimental results on three public benchmark datasets (EAV, FACED, and SEED)
demonstrate LEL's state-of-the-art performance, with average recognition
accuracies of 76.43%, 83.00%, and 87.22%, respectively. The official
implementation code is released at https://github.com/NZWANG/LEL.
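To make the two core ideas concrete, here is a minimal, hypothetical sketch (not the authors' actual LEL implementation) of how a Lipschitz continuity constraint and soft-voting ensemble fusion can be combined for a linear classifier: each weight matrix is rescaled so its spectral norm is at most L, bounding how much output probabilities can change under input perturbations (noise robustness), and the ensemble averages class probabilities across classifiers to reduce single-model bias and variance. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def lipschitz_constrain(W, L=1.0):
    """Rescale W so its spectral norm (largest singular value) is at most L.
    The linear map x -> W @ x is then L-Lipschitz in the Euclidean norm."""
    s = np.linalg.norm(W, 2)  # spectral norm of W
    return W if s <= L else W * (L / s)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(weight_list, x, L=1.0):
    """Soft voting: average class probabilities from several
    Lipschitz-constrained linear classifiers, then take the argmax."""
    probs = np.mean(
        [softmax(lipschitz_constrain(W, L) @ x) for W in weight_list],
        axis=0,
    )
    return int(np.argmax(probs))

# Toy usage: 5 classifiers over 8 EEG-derived features, 3 emotion classes.
rng = np.random.default_rng(0)
weight_list = [rng.standard_normal((3, 8)) for _ in range(5)]
x = rng.standard_normal(8)
label = ensemble_predict(weight_list, x)
```

Because every constrained classifier is 1-Lipschitz, a bounded perturbation of the input (e.g., EEG signal noise) produces an equally bounded change in the pre-softmax scores, which is one standard way such constraints improve stability.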