Interpretable Design of Reservoir Computing Networks Using Realization Theory

Document Type

Article

Subject Area(s)

Artificial neural networks, Interpretable models, AI, Learning theory

Abstract

Reservoir computing networks (RCNs) have been successfully employed in learning and complex decision-making tasks. Despite their efficiency and low training cost, practical applications of RCNs rely heavily on empirical design. In this article, we develop an algorithm to design RCNs using the realization theory of linear dynamical systems. In particular, we introduce the notion of α-stable realization and provide an efficient approach to prune the size of a linear RCN without degrading the training accuracy. Furthermore, we derive a necessary and sufficient condition, based on the concepts of controllability and observability from systems theory, under which the number of hidden nodes in a linear RCN cannot be reduced. Leveraging the linear RCN design, we provide a tractable procedure to realize RCNs with nonlinear activation functions. We present numerical experiments on forecasting time-delay systems and chaotic systems to validate the proposed RCN design methods and demonstrate their efficacy.
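
The minimality condition mentioned in the abstract comes from classical linear systems theory: a linear state-space realization cannot be reduced exactly when it is both controllable and observable. The sketch below is a generic NumPy illustration of that rank test, not the authors' algorithm; the reservoir matrix A, input matrix B, and trained readout C are hypothetical placeholders.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^{n-1}B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def observability_matrix(A, C):
    """Stack [C; CA; ...; CA^{n-1}] row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_minimal(A, B, C, tol=1e-9):
    """A linear realization (A, B, C) is minimal -- i.e., no hidden node
    can be removed without changing the input-output map -- if and only if
    it is both controllable and observable (both matrices have rank n)."""
    n = A.shape[0]
    ctrb_rank = np.linalg.matrix_rank(controllability_matrix(A, B), tol=tol)
    obsv_rank = np.linalg.matrix_rank(observability_matrix(A, C), tol=tol)
    return ctrb_rank == n and obsv_rank == n

# Hypothetical 3-node linear "reservoir" with one redundant node.
A = np.diag([0.9, 0.5, 0.3])          # reservoir (state) matrix
B = np.array([[1.0], [1.0], [0.0]])   # third node never receives input
C = np.array([[1.0, 1.0, 1.0]])       # linear readout
print(is_minimal(A, B, C))            # False -> the reservoir can be pruned
```

In this toy example the third node is uncontrollable, so the rank test reports that the three-node reservoir is reducible; a pruning procedure such as the one proposed in the article would remove it without affecting the learned input-output behavior.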

Digital Object Identifier (DOI)

https://doi.org/10.1109/TNNLS.2021.3136495

APA Citation

Miao, W., Narayanan, V., & Li, J. S. (2022). Interpretable Design of Reservoir Computing Networks Using Realization Theory. IEEE Transactions on Neural Networks and Learning Systems. https://doi.org/10.1109/TNNLS.2021.3136495
