Develops a reinforcement learning approach, specifically an actor-critic sampling scheme, that learns efficient sampling strategies for goodness-of-fit testing in discrete exponential families. The method addresses the computational difficulty of sampling lattice points from high-dimensional polytopes (the fibers underlying the exact conditional test) and outperforms traditional MCMC samplers in challenging cases.
Enables more reliable and efficient statistical inference for complex discrete models, supporting better model validation and decision-making in fields that rely on exact goodness-of-fit testing.
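
To make the general idea concrete, here is a minimal, hypothetical actor-critic sketch in Python for sampling lattice points of a fiber {u >= 0 integer : Au = Au_obs}. It is not the paper's implementation: the 3x3 independence model, the 2x2 Markov-basis swap moves, the linear softmax actor, the linear TD(0) critic, and the exploration-bonus reward are all assumptions made for illustration; the paper's state, action, and reward design may differ.

```python
"""Illustrative sketch only: an actor-critic agent that learns to propose
Markov-basis moves for exploring a fiber {u >= 0 integer : Au = Au_obs},
the sample space of the exact conditional goodness-of-fit test."""
import itertools
import numpy as np

rng = np.random.default_rng(0)

# --- Fiber setup (assumed example): 3x3 tables with fixed row/column sums ---
R, C = 3, 3
u0 = np.array([[4, 2, 1],
               [1, 3, 2],
               [2, 1, 4]])            # observed table; fixes the fiber

# Markov basis for the independence model: signed 2x2 basic swap moves
moves = []
for (r1, r2), (c1, c2) in itertools.product(
        itertools.combinations(range(R), 2),
        itertools.combinations(range(C), 2)):
    m = np.zeros((R, C), dtype=int)
    m[r1, c1] = m[r2, c2] = 1
    m[r1, c2] = m[r2, c1] = -1
    moves.append(m)
    moves.append(-m)
n_actions = len(moves)

def features(u):
    """Simple state features: normalized cell counts plus a bias term."""
    return np.append(u.ravel() / u.sum(), 1.0)

n_feat = R * C + 1
theta = np.zeros((n_actions, n_feat))   # actor: softmax policy weights
w = np.zeros(n_feat)                    # critic: linear value-function weights
alpha_actor, alpha_critic, gamma = 0.05, 0.1, 0.95

def policy(x):
    logits = theta @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()

# --- Actor-critic training loop ---------------------------------------------
seen = set()
u = u0.copy()
for step in range(5000):
    x = features(u)
    probs = policy(x)
    a = rng.choice(n_actions, p=probs)
    candidate = u + moves[a]

    if (candidate >= 0).all():          # move stays inside the fiber
        u_next = candidate
        key = u_next.tobytes()
        reward = 1.0 if key not in seen else 0.0   # exploration bonus (assumed)
        seen.add(key)
    else:
        u_next = u                      # infeasible move: stay, small penalty
        reward = -0.1

    # One-step TD error drives both the critic and the actor updates.
    x_next = features(u_next)
    td_error = reward + gamma * (w @ x_next) - (w @ x)
    w += alpha_critic * td_error * x

    grad_log = -probs[:, None] * x[None, :]   # d log pi(a|x) / d theta
    grad_log[a] += x
    theta += alpha_actor * td_error * grad_log

    u = u_next

print(f"distinct fiber points visited: {len(seen)}")
```

In realistic settings the fiber is high-dimensional and good moves can be hard to find or slow to mix, which is where a learned proposal policy is meant to improve on plain MCMC fiber sampling.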