Abstract
Uncertainty quantification (UQ) in scientific machine learning is
increasingly critical as neural networks are widely adopted to tackle complex
problems across diverse scientific disciplines. For physics-informed neural
networks (PINNs), a prominent model in scientific machine learning, uncertainty
is typically quantified using Bayesian or dropout methods. However, both
approaches suffer from a fundamental limitation: the prior distribution or
dropout rate required to construct honest confidence sets cannot be determined
without additional information. In this paper, we propose a novel method within
the framework of extended fiducial inference (EFI) to provide rigorous
uncertainty quantification for PINNs. The proposed method leverages a
narrow-neck hyper-network to learn the parameters of the PINN and quantify
their uncertainty based on imputed random errors in the observations. This
approach overcomes the limitations of Bayesian and dropout methods, enabling
the construction of honest confidence sets based solely on observed data. This
advancement represents a significant breakthrough for PINNs, greatly enhancing
their reliability, interpretability, and applicability to real-world scientific
and engineering challenges. Moreover, it establishes a new theoretical
framework for EFI, extending its application to large-scale models, eliminating
the need for sparse hyper-networks, and significantly improving the
automaticity and robustness of statistical inference.
Authors (3)
Frank Shih
Zhenghao Jiang
Faming Liang
Key Contributions
This paper proposes a novel method for uncertainty quantification (UQ) in Physics-Informed Neural Networks (PINNs) using Extended Fiducial Inference (EFI). It overcomes limitations of Bayesian and dropout methods by using a hyper-network to learn PINN parameters and quantify uncertainty based on imputed random errors, enabling the construction of rigorous confidence sets.
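As a rough illustration of the idea summarized above, the following PyTorch sketch wires a narrow-neck hyper-network to a small PINN: the hyper-network maps a vector of imputed observation errors z to the PINN's weight vector, so that sampling z induces an ensemble of PINN solutions from which confidence bands can be read off. The PDE (a heat equation), all network sizes, the loss weighting, and the simple resampling of z are illustrative assumptions; the paper's EFI procedure imputes the errors consistently with the observed data rather than merely redrawing them, and this sketch is not the authors' implementation.

```python
import torch
import torch.nn as nn


class HyperPINN(nn.Module):
    """Narrow-neck hyper-network mapping imputed errors z to PINN weights (illustrative)."""

    def __init__(self, n_obs, hidden=64, bottleneck=8, pinn_hidden=16):
        super().__init__()
        # shapes of a small PINN: input (x, t) -> tanh hidden layer -> scalar u
        self.shapes = [(pinn_hidden, 2), (pinn_hidden,), (1, pinn_hidden), (1,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        # the "narrow neck": imputed errors -> low-dimensional bottleneck -> weights
        self.hyper = nn.Sequential(
            nn.Linear(n_obs, hidden), nn.Tanh(),
            nn.Linear(hidden, bottleneck), nn.Tanh(),
            nn.Linear(bottleneck, n_params),
        )

    def pinn(self, theta, xt):
        # evaluate the small PINN using the externally generated flat weight vector
        params, idx = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(theta[idx:idx + n].reshape(s))
            idx += n
        w1, b1, w2, b2 = params
        h = torch.tanh(xt @ w1.T + b1)
        return h @ w2.T + b2

    def forward(self, z, xt):
        theta = self.hyper(z)  # PINN weights induced by the imputed errors z
        return self.pinn(theta, xt)


def pde_residual(model, z, xt):
    # residual of the illustrative heat equation u_t - u_xx = 0 via autograd
    xt = xt.clone().requires_grad_(True)
    u = model(z, xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx


# Toy data: n_obs noisy observations at xt_obs, plus collocation points xt_col.
n_obs, sigma = 20, 0.05
xt_obs = torch.rand(n_obs, 2)
y = torch.sin(torch.pi * xt_obs[:, 0:1]) + sigma * torch.randn(n_obs, 1)
xt_col = torch.rand(200, 2)

model = HyperPINN(n_obs)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    z = sigma * torch.randn(n_obs)          # imputed random errors for this step
    loss = ((model(z, xt_obs) - y) ** 2).mean() \
        + pde_residual(model, z, xt_col).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling z repeatedly yields an ensemble of PINN predictions, whose pointwise
# quantiles give approximate confidence bands for the learned solution.
with torch.no_grad():
    preds = torch.stack([model(sigma * torch.randn(n_obs), xt_col)
                         for _ in range(100)])
    lower = preds.quantile(0.025, dim=0)
    upper = preds.quantile(0.975, dim=0)
```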
Business Value
Enhances the trustworthiness and reliability of AI models used in scientific and engineering applications, leading to safer and more robust decision-making in critical systems.