
A Kullback-Leibler divergence method for input-system-state identification

Abstract

The capability of a novel Kullback-Leibler divergence method is examined within the Kalman filter framework to select the input-parameter-state estimation run with the most plausible results. Such identification suffers from uncertainty because different initial parameter guesses lead to different results, and the examined approach addresses this by using the information gained from the data in moving from the prior to the posterior distribution. First, the Kalman filter is run for a number of different initial parameter sets, each providing an input-parameter-state estimate. Second, the resulting posterior distributions are compared against the corresponding initial prior distributions using the Kullback-Leibler divergence. Finally, the run with the smallest Kullback-Leibler divergence is selected as the one with the most plausible results. The method is shown to select the better-performing identification in linear, nonlinear, and limited-information applications, providing a powerful tool for system monitoring.
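The selection step described above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes each Kalman-filter run is summarized by a Gaussian posterior (mean and covariance), uses the closed-form KL divergence between multivariate Gaussians, and picks the run whose posterior has the smallest divergence from the prior. The names `gaussian_kl` and `select_run` are illustrative, not from the paper.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL( N(mu0, cov0) || N(mu1, cov1) ) for multivariate Gaussians."""
    k = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    # Use slogdet for numerical stability of the log-determinant ratio.
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (
        np.trace(cov1_inv @ cov0)      # trace term
        + diff @ cov1_inv @ diff       # Mahalanobis term
        - k                            # dimension offset
        + logdet1 - logdet0            # log-determinant ratio
    )

def select_run(runs, prior_mean, prior_cov):
    """Given posterior (mean, cov) pairs from runs with different initial
    parameter sets, return the index of the run whose posterior is closest
    (in KL divergence) to the prior, along with all divergences."""
    divs = [gaussian_kl(m, P, prior_mean, prior_cov) for m, P in runs]
    return int(np.argmin(divs)), divs
```

Under these assumptions, selection reduces to evaluating one scalar divergence per run and taking the minimum, which is why the comparison across all initial parameter sets can be done simultaneously.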

Key Contributions

This paper proposes a novel method using Kullback-Leibler divergence within the Kalman filter framework to select the most plausible input-parameter-state estimation. It addresses the uncertainty arising from different initial parameter guesses by comparing prior and posterior distributions.

Business Value

Enhances the reliability and accuracy of state estimation in dynamic systems, which is critical for applications like autonomous navigation, industrial process control, and financial modeling.