Model-Free Reinforcement Learning Auto-Tuned PID Controller for a Nonlinear System
Paper ID: 1075-ICEEM2025
Authors
Abdelaziz Khater *
Department of Industrial Electronics and Control Engineering, Faculty of Electronic Engineering, Menofia University, Menof, 32852, Egypt
Abstract
This paper presents a reinforcement learning (RL)-based approach that dynamically adjusts the gains of a PID controller using an actor-critic framework. To enhance computational efficiency, a single recurrent modified Elman neural network (MENN) implements both the actor and the critic, streamlining the architecture while maintaining robustness. In addition, a novel reward function is introduced to accelerate learning; it is combined with the temporal-difference method to update the network weights. Stability is guaranteed through Lyapunov stability theory, which guides the selection of a learning rate that preserves system integrity during adaptation. Comparative evaluations against existing controllers underscore the superior performance of the proposed method, which achieves minimal performance-index values while eliminating the oscillations and steady-state errors observed in the benchmark approaches. The results show a significant improvement in system response and stability, demonstrating the effectiveness of the proposed RL-based PID controller across a range of dynamic conditions. This research contributes to the advancement of adaptive control strategies for complex systems.
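As a rough illustration of the idea, and not the paper's actual design, the following Python sketch tunes the gains of a discrete PID controller with a shared-feature actor-critic updated by the temporal-difference error. The toy plant, quadratic reward, Gaussian exploration, fixed recurrent-layer weights, and all network sizes and learning parameters are assumptions made for this example; the paper's MENN structure, reward function, and Lyapunov-based learning-rate bound are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared recurrent feature layer (Elman-style context units) feeding an
# actor head (PID gains) and a critic head (state value). All sizes and
# constants below are illustrative assumptions, not the paper's values.
n_in, n_hid = 3, 8                              # inputs: error, its integral, its difference
W_in  = rng.normal(0.0, 0.1, (n_hid, n_in))
W_ctx = rng.normal(0.0, 0.1, (n_hid, n_hid))    # context (recurrent) weights, kept fixed here
W_act = rng.normal(0.0, 0.1, (3, n_hid))        # actor head -> log of (Kp, Ki, Kd)
W_cri = rng.normal(0.0, 0.1, (1, n_hid))        # critic head -> state value V(s)

alpha, gamma, dt = 1e-3, 0.95, 0.01             # learning rate, discount, sample time (assumed)

def features(x, ctx):
    """Hidden activations of the Elman-style recurrent layer."""
    return np.tanh(W_in @ x + W_ctx @ ctx)

def plant(y, u):
    """Toy first-order nonlinear plant standing in for the real system."""
    return y + dt * (-y**3 + u)

y, setpoint = 0.0, 1.0
e_prev, e_int, ctx = 0.0, 0.0, np.zeros(n_hid)
h_prev, noise_prev, r_prev = np.zeros(n_hid), np.zeros(3), 0.0

for step in range(2000):
    e = setpoint - y
    e_int += e * dt
    e_dif = e - e_prev                          # backward difference stands in for the derivative
    x = np.array([e, e_int, e_dif])
    h = features(x, ctx)

    # Semi-gradient TD(0) update of the critic head, and a score-function-style
    # update of the actor head driven by the exploration noise and the TD error.
    if step > 0:
        value_prev = float(W_cri @ h_prev)
        value = float(W_cri @ h)
        td_error = r_prev + gamma * value - value_prev
        W_cri += alpha * td_error * h_prev
        W_act += alpha * td_error * np.outer(noise_prev, h_prev)

    noise = 0.05 * rng.standard_normal(3)       # Gaussian exploration in log-gain space
    kp, ki, kd = np.exp(W_act @ h + noise)      # exponential keeps the PID gains positive
    u = kp * e + ki * e_int + kd * e_dif        # PID law with the adapted gains

    y = plant(y, u)
    r_prev = -(setpoint - y) ** 2               # simple quadratic reward (assumed)
    e_prev, ctx, h_prev, noise_prev = e, h, h, noise

print(f"final output = {y:.3f} (setpoint = {setpoint})")
```

In this sketch only the two output heads are adapted online; updating the recurrent layer itself, shaping the reward, and bounding the learning rate via a Lyapunov argument are the elements the paper develops.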
Keywords
Adaptive PID Controller, Reinforcement Learning, Reward Signal, Lyapunov Stability, Modified Elman Neural Network, Nonlinear System.
Status: Accepted