Document Type
Article
Abstract
In this article, we address two key challenges in the deep reinforcement learning (DRL) setting, namely sample inefficiency and slow learning, with a dual-neural network (NN)-driven learning approach. In the proposed approach, we use two deep NNs with independent initialization to robustly approximate the action-value function in the presence of image inputs. In particular, we develop a temporal difference (TD) error-driven learning (EDL) approach, where we introduce a set of linear transformations of the TD error to directly update the parameters of each layer in the deep NN. We demonstrate theoretically that the cost minimized by the EDL regime is an approximation of the empirical cost, and that the approximation error reduces as learning progresses, irrespective of the size of the network. Using simulation analysis, we show that the proposed methods enable faster learning and convergence and require a reduced buffer size (thereby increasing sample efficiency).
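The sketch below illustrates the flavor of the update rule described in the abstract: two independently initialized Q-networks, a TD error formed from one network against the other, and a direct, per-layer parameter update driven by a linear transformation of that TD error rather than by backpropagated gradients. The network sizes, the pairing used for the target, and the transformation matrices `B` are placeholders assumed for illustration; the paper's actual constructions are not reproduced here.

```python
# Minimal sketch of a TD-error-driven (EDL-style) update, assuming:
#  - two independently initialized Q-networks (dual-NN setup),
#  - hypothetical per-layer transforms B that map the TD error to a
#    parameter update; the paper's exact transforms are not shown.
import torch
import torch.nn as nn

def make_q_net(obs_dim: int, n_actions: int) -> nn.Module:
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

obs_dim, n_actions, gamma, lr = 8, 4, 0.99, 1e-3
q_a = make_q_net(obs_dim, n_actions)   # independent initializations
q_b = make_q_net(obs_dim, n_actions)

def edl_update(q_net, other_q, s, a, r, s_next, done):
    """One TD-error-driven update of q_net, using other_q for the target.

    s: (batch, obs_dim) float tensor, a: (batch,) long tensor,
    r, done: (batch,) float tensors, s_next: (batch, obs_dim) float tensor.
    """
    with torch.no_grad():
        # TD target from the companion network (one possible cooperative pairing).
        target = r + gamma * (1.0 - done) * other_q(s_next).max(dim=1).values
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        td_error = target - q_sa                       # shape: (batch,)

        # Direct per-layer update: a linear transformation of the TD error
        # is applied to each layer's parameters (random fixed-scale matrices
        # stand in for the paper's transforms).
        delta = td_error.mean()
        for p in q_net.parameters():
            B = torch.randn_like(p) * 0.01             # hypothetical transform
            p.add_(lr * delta * B)                     # TD-error-driven step
    return td_error
```

In an actual training loop, `edl_update` would be called on minibatches drawn from a (smaller) replay buffer, alternating or combining updates across the two networks; those details follow the paper and are omitted from this sketch.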
Digital Object Identifier (DOI)
https://doi.org/10.1109/TNNLS.2022.3232069
Publication Info
Published in IEEE Transactions on Neural Networks and Learning Systems, 2023.
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
APA Citation
Raghavan, K., Narayanan, V., & Jagannathan, S. (2023). Cooperative Deep Q-Learning Framework for Environments Providing Image Feedback. IEEE Transactions on Neural Networks and Learning Systems, 1–10. https://doi.org/10.1109/TNNLS.2022.3232069
Included in
Artificial Intelligence and Robotics Commons, Controls and Control Theory Commons, Theory and Algorithms Commons