Robust Self-Tuning Control Design under Probabilistic Uncertainty using Polynomial Chaos Expansion-based Markov Models
A robust adaptive controller is developed for a chemical process using a generalized Polynomial Chaos (gPC) expansion-based Markov decision model, which can account for time-invariant probabilistic uncertainty and overcome the computational challenge of building Markov models. To calculate the transition probabilities, a gPC model is used to iteratively predict the probability density functions (PDFs) of the system's states, including the controlled and manipulated variables. For controller tuning, these PDFs and the controller parameters are discretized into a finite number of states to build a Markov model. The key idea is to predict the transition probabilities of the controlled and manipulated variables over a finite future control horizon, which can then be used to calculate an optimal sequence of control actions. This approach can be used to optimally tune a controller for set-point tracking within a finite future control horizon. The proposed method is illustrated on a continuous stirred tank reactor (CSTR) system with stochastic perturbations in the inlet concentration. The efficiency of the proposed algorithm is quantified in terms of control performance and transient decay.
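To make the workflow concrete, the following is a minimal sketch (not the authors' implementation) of the Markov-model-based tuning idea: a discretized controlled variable, an estimated one-step transition matrix under time-invariant parametric uncertainty, and a ranking of candidate controller gains by the probability of reaching the set-point bin at the end of a short horizon. The gPC propagation of PDFs is replaced here by plain Monte Carlo samples of an uncertain gain, and the plant, bin edges, and candidate gains are illustrative assumptions.

```python
import numpy as np

# Sketch: Markov-model-based controller tuning under probabilistic uncertainty.
# An uncertain plant gain stands in for the gPC-propagated uncertainty; the
# controlled variable is discretized into bins, a one-step transition matrix
# is estimated per candidate controller gain, and gains are ranked by the
# probability of reaching the set-point bin after a finite horizon.

rng = np.random.default_rng(0)

n_samples = 2000                                    # samples standing in for gPC-predicted PDFs
uncertain_gain = rng.normal(1.0, 0.2, n_samples)    # time-invariant uncertain parameter
bins = np.linspace(-1.0, 3.0, 21)                   # discretization of the controlled variable
setpoint = 1.0
horizon = 5

def transition_matrix(Kc):
    """Estimate P[i, j] = Pr(x_{k+1} in bin j | x_k in bin i) for a
    proportional controller with gain Kc on a toy first-order plant."""
    n_bins = len(bins) - 1
    centers = 0.5 * (bins[:-1] + bins[1:])
    P = np.zeros((n_bins, n_bins))
    for i, x0 in enumerate(centers):
        u = Kc * (setpoint - x0)                    # proportional control action
        x1 = 0.8 * x0 + uncertain_gain * u          # uncertain one-step response
        counts, _ = np.histogram(np.clip(x1, bins[0], bins[-1]), bins=bins)
        P[i] = counts / counts.sum()
    return P

# Rank candidate gains by the probability of landing in the set-point bin
# after `horizon` steps, starting from the bin containing x = 0.
x0_bin = np.digitize(0.0, bins) - 1
sp_bin = np.digitize(setpoint, bins) - 1
for Kc in (0.2, 0.5, 1.0):
    dist = np.linalg.matrix_power(transition_matrix(Kc), horizon)[x0_bin]
    print(f"Kc={Kc:.1f}: Pr(at set point after {horizon} steps) = {dist[sp_bin]:.3f}")
```

In the paper's setting, the per-bin PDFs would come from the gPC expansion of the CSTR states rather than Monte Carlo sampling, and the manipulated variable would be discretized alongside the controlled variable; the ranking step above corresponds to selecting the controller parameters over the finite future control horizon.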