A team of researchers at Queen's University in Canada has recently proposed a new method for downsizing random recurrent neural networks (rRNNs), a class of artificial neural networks often used to make predictions from data. Their approach, presented in a paper pre-published on arXiv, lets developers minimize the number of neurons in an rRNN's hidden layer while enhancing its prediction performance.
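To make the setting concrete, here is a minimal sketch of the kind of network involved: a reservoir-style random RNN whose recurrent and input weights are fixed at random and only a linear readout is trained, applied to toy one-step-ahead prediction. The hidden-layer size `n_hidden` is the quantity a downsizing method like the one described would shrink. The variable names, the sine-wave task, and the hyperparameter values are illustrative assumptions; the paper's actual downsizing procedure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: hidden-layer size is the knob a downsizing
# method would minimize; the other values are common heuristics.
n_hidden = 50          # number of hidden neurons
spectral_radius = 0.9  # stability heuristic for random reservoirs
ridge = 1e-6           # regularization for the linear readout

# Random, fixed recurrent and input weights (the "random" in rRNN).
W = rng.standard_normal((n_hidden, n_hidden))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal(n_hidden)

# Toy data: a sine wave; the task is one-step-ahead prediction.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)

# Drive the network and collect hidden states.
states = np.zeros((len(u), n_hidden))
x = np.zeros(n_hidden)
for k in range(1, len(u)):
    x = np.tanh(W @ x + W_in * u[k - 1])
    states[k] = x

# Train only the linear readout via ridge regression, as is
# standard for random RNNs / reservoir computing.
X, y = states[1:-1], u[2:]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_hidden), X.T @ y)

pred = X @ W_out
print("one-step prediction MSE:", np.mean((pred - y) ** 2))
```

In this framing, a smaller `n_hidden` means fewer states to compute and a smaller readout to fit, which is why reducing hidden-layer size without degrading (or while improving) prediction quality is the goal the researchers' method targets.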