Date of Award
2018
Document Type
Open Access Dissertation
Department
Computer Science and Engineering
First Advisor
Jason D. Bakos
Abstract
Artificial neural networks are an effective machine learning technique for a variety of data sets and domains, but exploiting the inherent parallelism in neural networks requires specialized hardware. Typically, computing the output of each neuron requires many multiplications, evaluation of a transcendental activation function, and transfer of its output to a large number of other neurons. These operations become more expensive as internal values are represented with higher data precision. A spiking neural network avoids these costs by reducing neuron outputs and synapse weights to one-bit values, which eliminates hardware multipliers and simplifies the activation function. However, a spiking neural network requires more neurons than a comparable rate-based network. To determine whether the benefits of spiking neural networks outweigh the costs, we designed the smallest spiking neural network and the smallest rate-based artificial neural network that achieve 90% or comparable testing accuracy on the MNIST data set. After estimating the FPGA storage requirements for the synapse values of each network, we concluded that rate-based neural networks need significantly fewer bits than spiking neural networks.
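To make the hardware trade-off concrete, the following Python sketch (not from the dissertation; the function names and all network sizes are hypothetical) contrasts the per-neuron arithmetic of a rate-based neuron with that of a spiking neuron using one-bit spikes and one-bit weights, and works through the synapse-storage arithmetic that the comparison rests on.

```python
import math

def rate_neuron(inputs, weights, bias=0.0):
    """Rate-based neuron: one multiplication per synapse plus a
    transcendental activation, so hardware needs multipliers and
    multi-bit weight storage."""
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w                       # multiply-accumulate
    return 1.0 / (1.0 + math.exp(-total))    # sigmoid (transcendental)

def spiking_neuron(spikes, weight_signs, threshold):
    """Spiking neuron with one-bit spikes and one-bit weights:
    the dot product reduces to conditional add/subtract, and the
    activation is a simple threshold comparison."""
    potential = 0
    for s, w in zip(spikes, weight_signs):
        if s:                                # input spike (1 bit)
            potential += 1 if w else -1      # weight is a single bit
    return 1 if potential >= threshold else 0

# Synapse-storage estimate: total bits = synapses x bits per weight.
# All counts below are hypothetical. A network with 16-bit weights
# needs fewer total bits whenever the spiking network must grow its
# synapse count by more than a factor of 16.
rate_synapses, rate_precision = 784 * 100, 16     # hypothetical
spike_synapses, spike_precision = 784 * 2000, 1   # hypothetical
print(rate_synapses * rate_precision)    # 1,254,400 bits
print(spike_synapses * spike_precision)  # 1,568,000 bits
```

The storage comparison hinges on that ratio: one-bit weights save a factor equal to the rate-based precision, but only if the spiking network's synapse count grows by less than that factor; the dissertation's finding that rate-based networks need significantly fewer bits implies the spiking network's growth exceeded it.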
Rights
© 2018, Lacie Renee Stiffler
Recommended Citation
Stiffler, L. (2018). Implementation Costs of Spiking versus Rate-Based ANNs. (Doctoral dissertation). Retrieved from https://scholarcommons.sc.edu/etd/5028