
Higher-Order Regularization Learning on Hypergraphs

Abstract

Higher-Order Hypergraph Learning (HOHL) was recently introduced as a principled alternative to classical hypergraph regularization, enforcing higher-order smoothness via powers of multiscale Laplacians induced by the hypergraph structure. Prior work established the well- and ill-posedness of HOHL through an asymptotic consistency analysis in geometric settings. We extend this theoretical foundation by proving the consistency of a truncated version of HOHL and deriving explicit convergence rates when HOHL is used as a regularizer in fully supervised learning. We further demonstrate its strong empirical performance in active learning and in datasets lacking an underlying geometric structure, highlighting HOHL's versatility and robustness across diverse learning settings.
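To make the abstract's core idea concrete, the sketch below illustrates a regularizer of the form R(u) = u^T L^s u, where L is a Laplacian induced by the hypergraph and s > 1 enforces higher-order smoothness. This is a minimal illustration, not the authors' implementation: the clique-expansion construction, the hyperedge weighting, and the choice s = 2 are assumptions made here for demonstration only.

```python
import numpy as np

def clique_expansion_laplacian(n_nodes, hyperedges):
    """Build a graph Laplacian by expanding each hyperedge into a weighted clique.
    (Illustrative construction; the paper's multiscale Laplacians may differ.)"""
    W = np.zeros((n_nodes, n_nodes))
    for e in hyperedges:
        for i in e:
            for j in e:
                if i != j:
                    W[i, j] += 1.0 / (len(e) - 1)  # simple per-hyperedge weighting
    D = np.diag(W.sum(axis=1))
    return D - W

def higher_order_regularizer(u, L, s=2):
    """R(u) = <u, L^s u>; larger s penalizes rougher label vectors more strongly."""
    return float(u @ np.linalg.matrix_power(L, s) @ u)

# Toy usage: 5 nodes, two hyperedges, one candidate labeling u.
L = clique_expansion_laplacian(5, hyperedges=[(0, 1, 2), (2, 3, 4)])
u = np.array([1.0, 1.0, 0.5, 0.0, 0.0])
print(higher_order_regularizer(u, L, s=2))
```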
Authors: Adrien Weihs, Andrea Bertozzi, Matthew Thorpe
Submitted: October 30, 2025
arXiv Category: cs.LG

Key Contributions

Extends the theoretical foundation of Higher-Order Hypergraph Learning (HOHL) by proving the consistency of a truncated version and deriving explicit convergence rates for its use as a regularizer in fully supervised learning. Demonstrates strong empirical performance in active learning and on datasets without an underlying geometric structure, highlighting HOHL's versatility.
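As a rough sketch of what "used as a regularizer in fully supervised learning" can look like, the snippet below solves a penalized least-squares fit with a higher-order Laplacian term. The objective, the hyperparameters lam and s, and the closed-form linear solve are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fit_with_higher_order_penalty(y, L, lam=0.1, s=2):
    """Minimize ||u - y||^2 + lam * u^T L^s u.
    The minimizer solves the linear system (I + lam * L^s) u = y."""
    n = len(y)
    Ls = np.linalg.matrix_power(L, s)
    return np.linalg.solve(np.eye(n) + lam * Ls, y)
```

Here L would be a hypergraph-induced Laplacian such as the clique-expansion example shown earlier, and y a vector of observed labels.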

Business Value

Improves the robustness and applicability of hypergraph-based learning methods, potentially leading to more accurate predictions and better data utilization in complex relational data scenarios.