
A Framework for Bounding Deterministic Risk with PAC-Bayes: Applications to Majority Votes

Abstract

PAC-Bayes is a popular and efficient framework for obtaining generalization guarantees in situations involving uncountable hypothesis spaces. Unfortunately, in its classical formulation, it only provides guarantees on the expected risk of a randomly sampled hypothesis. This requires stochastic predictions at test time, making PAC-Bayes unusable in many practical situations where a single deterministic hypothesis must be deployed. We propose a unified framework to extract guarantees holding for a single hypothesis from stochastic PAC-Bayesian guarantees. We present a general oracle bound and derive from it a numerical bound and a specialization to majority vote. We empirically show that our approach consistently outperforms popular baselines (by up to a factor of 2) when it comes to generalization bounds on deterministic classifiers.
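To make the starting point concrete, here is a minimal sketch of a classical stochastic PAC-Bayes guarantee of the kind the paper builds on: the PAC-Bayes-kl bound (due to Seeger and Maurer), which upper-bounds the true Gibbs risk by inverting a binary KL divergence. This is standard background, not the paper's new bound; the function names and the choice of bisection for the inversion are illustrative.

```python
import math

def binary_kl(q, p):
    # kl(q || p) between Bernoulli(q) and Bernoulli(p),
    # with arguments clamped away from {0, 1} for numerical safety.
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse_upper(emp_risk, budget, tol=1e-9):
    # Largest p >= emp_risk such that kl(emp_risk || p) <= budget,
    # found by bisection (kl is increasing in p on [emp_risk, 1)).
    lo, hi = emp_risk, 1.0 - 1e-12
    if binary_kl(emp_risk, hi) <= budget:
        return hi
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if binary_kl(emp_risk, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return hi

def pac_bayes_kl_bound(emp_gibbs_risk, kl_qp, n, delta=0.05):
    # Classical PAC-Bayes-kl bound: with probability >= 1 - delta
    # over an i.i.d. sample of size n,
    #   kl(emp_risk || true_risk) <= (KL(Q || P) + ln(2*sqrt(n)/delta)) / n,
    # which we invert to get an upper bound on the true Gibbs risk.
    rhs = (kl_qp + math.log(2.0 * math.sqrt(n) / delta)) / n
    return kl_inverse_upper(emp_gibbs_risk, rhs)
```

Note that the quantity bounded here is the *expected* risk of a hypothesis drawn from the posterior Q; turning such a guarantee into one for a single deterministic predictor is exactly the gap this paper addresses.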
Authors (2)
Benjamin Leblanc
Pascal Germain
Submitted
October 29, 2025
arXiv Category
cs.LG
arXiv PDF

Key Contributions

This paper proposes a unified framework for deriving generalization guarantees on a single, deterministic hypothesis from stochastic PAC-Bayesian bounds. It presents a general oracle bound, derives a numerical bound from it, and specializes the result to majority-vote classifiers; the resulting bounds empirically outperform popular baselines for deterministic classifiers by up to a factor of 2.

Business Value

Provides stronger theoretical assurances for the generalization performance of deployed models, increasing confidence in their reliability and robustness in real-world applications.