In recent years, researchers have proposed a wide variety of hardware implementations for feed-forward artificial neural networks. These implementations typically comprise three key components: a dot-product engine that computes convolution and fully-connected layer operations, memory elements that store intermediate inter- and intra-layer results, and components that compute non-linear activation functions.
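To make the division of labor concrete, here is a minimal software sketch of the three components working together on one fully-connected layer. It does not model any particular hardware design; all names (`dot_product_engine`, `activation_unit`, `LayerBuffer`) are hypothetical and chosen only for illustration.

```python
import numpy as np

def dot_product_engine(inputs, weights):
    """Dot-product stage: the core of convolution and fully-connected layers."""
    return inputs @ weights

def activation_unit(x):
    """Non-linear activation stage (ReLU shown as one example)."""
    return np.maximum(x, 0.0)

class LayerBuffer:
    """Memory element holding an intermediate result between layers."""
    def __init__(self):
        self.value = None
    def write(self, x):
        self.value = x
    def read(self):
        return self.value

# Wire the three components into a single layer pass.
buffer = LayerBuffer()
x = np.random.rand(1, 4)   # incoming activations
W = np.random.rand(4, 3)   # layer weights
buffer.write(activation_unit(dot_product_engine(x, W)))
print(buffer.read().shape)  # (1, 3): intermediate result consumed by the next layer
```

In an actual accelerator, each of these software functions would correspond to a dedicated block (e.g., a crossbar or MAC array for the dot products, SRAM or register files for the buffer, and lookup tables or simple logic for the activation), but the data flow follows the same three-stage pattern.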