Article Type: Research Article
Authors
1 Professor, Faculty of Electrical Engineering, Semnan University, Iran
2 PhD Student, Faculty of Electrical Engineering, Semnan University, Semnan, Iran
3 Faculty Member, Amirkabir University of Technology
Abstract
This paper introduces an event-triggered inverse reinforcement learning (IRL) approach for multi-agent discrete-time graphical games with unknown dynamics. In the IRL problem for these games, both the expert and the learner are leader-follower multi-agent systems. The objective of the expert system is the optimal synchronization of the follower agents with the leader. The learner agents aim to imitate the control inputs and states of the expert agents, while the expert's value function is unknown to them. For the learner system, an IRL algorithm based on value-iteration adaptive dynamic programming is presented to reconstruct the unknown value function of the expert and to solve the event-triggered coupled Hamilton-Jacobi-Bellman equations without requiring the expert or learner system dynamics. The algorithm is implemented with an actor-critic-state-penalty structure, and the unknown dynamics of the expert and learner multi-agent systems are approximated by neural-network identifiers. Unlike traditional adaptive dynamic programming, in which the control policies are updated periodically, the presented method updates the control policies and neural-network weights only when an event is triggered, which reduces the computational burden. Finally, simulation results demonstrate the efficiency of the proposed technique.
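The abstract combines two ingredients: value-iteration adaptive dynamic programming and event-triggered policy updates. The sketch below is not the paper's multi-agent IRL algorithm; it is a minimal single-agent illustration, assuming a known discrete-time linear system, a quadratic value function, and an arbitrarily chosen trigger threshold, intended only to show how value iteration yields a policy whose output is recomputed solely at triggering instants.

```python
import numpy as np

# Illustrative sketch (not the paper's method): value-iteration ADP for a
# single discrete-time linear system x_{k+1} = A x_k + B u_k with quadratic
# cost, followed by a rollout with a simple event-triggered control update.
# All matrices and the trigger threshold are assumed values for illustration.

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting in the stage cost
R = np.array([[1.0]])  # control weighting in the stage cost

# Value iteration on V(x) = x^T P x, starting from P = 0:
# P_{j+1} = Q + A^T P_j A - A^T P_j B (R + B^T P_j B)^{-1} B^T P_j A
P = np.zeros((2, 2))
for _ in range(200):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy policy gain
    P = Q + A.T @ P @ (A - B @ K)

# Event-triggered rollout: the control input is held constant and only
# recomputed when the gap between the current state and the last sampled
# state exceeds the threshold, so updates occur only at triggering instants.
x = np.array([[1.0], [-1.0]])
x_sampled = x.copy()
u = -K @ x_sampled
threshold = 0.05
events = 0
for k in range(100):
    if np.linalg.norm(x - x_sampled) > threshold:  # triggering condition
        x_sampled = x.copy()
        u = -K @ x_sampled                         # event-based policy update
        events += 1
    x = A @ x + B @ u                              # plant step, zero-order hold

print(f"control updated at {events} of 100 steps; "
      f"final state norm {np.linalg.norm(x):.4f}")
```

Because the control is recomputed only at events, far fewer policy evaluations are performed than with a periodically updated controller, which is the computational saving the abstract refers to.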