Document Type
Article
Subject Area(s)
Artificial neural networks, Interpretable models, AI, Learning theory
Abstract
Reservoir computing networks (RCNs) have been successfully employed as a tool in learning and complex decision-making tasks. Despite their efficiency and low training cost, practical applications of RCNs rely heavily on empirical design. In this article, we develop an algorithm to design RCNs using the realization theory of linear dynamical systems. In particular, we introduce the notion of α-stable realization and provide an efficient approach to prune the size of a linear RCN without degrading the training accuracy. Furthermore, we derive a necessary and sufficient condition for the number of hidden nodes in a linear RCN to be irreducible, based on the concepts of controllability and observability from systems theory. Leveraging the linear RCN design, we provide a tractable procedure to realize RCNs with nonlinear activation functions. We present numerical experiments on forecasting time-delay systems and chaotic systems to validate the proposed RCN design methods and demonstrate their efficacy.
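The irreducibility condition referenced in the abstract follows classical realization theory: a linear state-space realization is minimal (its state dimension cannot be reduced) if and only if it is both controllable and observable. The following Python sketch illustrates that rank test on a hypothetical system; it is an illustrative example of the underlying systems-theory concept, not the pruning algorithm proposed in the article, and the matrices A, B, C are made up for demonstration.

import numpy as np

def ctrb(A, B):
    # Controllability matrix [B, AB, ..., A^(n-1) B]
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A, C):
    # Observability matrix [C; CA; ...; C A^(n-1)]
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_minimal(A, B, C, tol=1e-9):
    # A realization (A, B, C) is minimal iff it is controllable and observable,
    # i.e., both matrices above have full rank n.
    n = A.shape[0]
    rank_c = np.linalg.matrix_rank(ctrb(A, B), tol=tol)
    rank_o = np.linalg.matrix_rank(obsv(A, C), tol=tol)
    return rank_c == n and rank_o == n

# Hypothetical 3-state realization in which the third mode is never excited
# by the input, so the same input-output map admits a 2-state realization.
A = np.diag([0.5, 0.3, 0.9])
B = np.array([[1.0], [1.0], [0.0]])
C = np.array([[1.0, 1.0, 1.0]])
print(is_minimal(A, B, C))  # False -> the hidden-state count can be pruned

In the linear RCN setting described in the abstract, the reservoir state dimension plays the role of n, so a failed rank test of this kind signals that the number of hidden nodes can be reduced without changing the input-output behavior.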
Digital Object Identifier (DOI)
10.1109/TNNLS.2021.3136495
Publication Info
Preprint version. IEEE Transactions on Neural Networks and Learning Systems, 2022.
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
APA Citation
Miao, W., Narayanan, V., & Li, J. S. (2022). Interpretable Design of Reservoir Computing Networks Using Realization Theory. IEEE Transactions on Neural Networks and Learning Systems. https://doi.org/10.1109/TNNLS.2021.3136495
Included in
Applied Mathematics Commons, Artificial Intelligence and Robotics Commons, Theory and Algorithms Commons