Deep Learning Binary Neural Network on an FPGA
In recent years, deep neural networks have attracted a great deal of attention in the fields of computer vision and artificial intelligence. A convolutional neural network exploits spatial correlations in an input image by performing convolution operations in local receptive fields. Compared with fully connected neural networks, convolutional neural networks have fewer weights and are faster to train. Much research has been conducted to further reduce the computational complexity and memory requirements of convolutional neural networks so that they can be applied to low-power embedded applications. This thesis focuses on a special class of convolutional neural network with only binary weights and activations, referred to as binary neural networks. Weights and activations for the convolutional and fully connected layers are binarized to take only two values, +1 and -1, which significantly reduces computation and memory requirements. The proposed binary neural network architecture has been implemented on an FPGA as a real-time, high-speed, low-power computer vision platform. Only on-chip memories are used in the FPGA design. The FPGA implementation is evaluated on the CIFAR-10 benchmark, achieving a processing speed of 332,164 images per second with a classification accuracy of about 86.06%.
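The following is a minimal sketch, not taken from the thesis, of the core idea the abstract describes: once weights and activations are restricted to +1 and -1, a dot product (the inner loop of a convolutional or fully connected layer) can be computed with a bitwise XNOR and a population count instead of multiply-accumulate units. The function names and vector sizes are illustrative assumptions.

```python
import numpy as np

def binarize(x):
    """Map real values to +1/-1 with the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a, b):
    """Dot product of two +1/-1 vectors via XNOR + popcount.

    Encode +1 as bit 1 and -1 as bit 0. XNOR counts matching positions,
    so the dot product is (#matches) - (#mismatches) = 2*popcount - n.
    """
    n = a.size
    a_bits = (a > 0)
    b_bits = (b > 0)
    matches = np.count_nonzero(a_bits == b_bits)  # popcount of XNOR(a, b)
    return 2 * matches - n

# Sanity check: the XNOR/popcount result equals the ordinary dot product.
rng = np.random.default_rng(0)
w = binarize(rng.standard_normal(64))   # hypothetical binary weight vector
x = binarize(rng.standard_normal(64))   # hypothetical binary activation vector
assert binary_dot(w, x) == int(np.dot(w, x))
```

On an FPGA, the same arithmetic maps to wide XNOR gates and popcount trees, which is what allows the weights and activations to stay in on-chip memory and the design to reach the reported throughput.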
- Creator
- Contributors
- Degree
- Unit
- Publisher
- Language: English
- Identifier: etd-042717-145953
- Keyword
- Advisor
- Committee
- Defense date
- Year: 2017
- Date created: 2017-04-27
- Resource type
- Rights statement
Relations
- In Collection:
Items
Permanent link to this page: https://digital.wpi.edu/show/ks65hc36q