A team of researchers at Queen's University, in Canada, has recently proposed a new method to downsize random recurrent neural networks (rRNNs), a class of artificial neural networks often used to make predictions from data. Their approach, presented in a paper pre-published on arXiv, allows developers to minimize the number of neurons in an rRNN's hidden layer, thereby improving its prediction performance.
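For context, the sketch below shows a generic random recurrent network of the kind described: the recurrent and input weights are random and fixed, only a linear readout is trained, and the hidden-layer size `n_hidden` is the quantity that downsizing methods try to minimize. This is an illustrative toy in NumPy, not the Queen's University method; all function names and parameters here are assumptions for demonstration.

```python
import numpy as np

def make_rrnn(n_hidden, n_in=1, seed=0, spectral_radius=0.9):
    # Build random, fixed weights for a generic random RNN (reservoir-style).
    # n_hidden is the hidden-layer size that downsizing approaches aim to reduce.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_hidden, n_hidden))
    # Rescale so the recurrent dynamics stay stable (spectral radius < 1).
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.standard_normal((n_hidden, n_in))
    return W, W_in

def run(W, W_in, inputs):
    # Drive the network with an input sequence and collect hidden states.
    # The recurrent weights are never trained; only the readout is.
    h = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        h = np.tanh(W @ h + W_in @ np.atleast_1d(u))
        states.append(h.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20, 500)
series = np.sin(t)
W, W_in = make_rrnn(n_hidden=50)
X = run(W, W_in, series[:-1])   # hidden states for each input step
y = series[1:]                  # next value to predict
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # train linear readout
pred = X @ w_out
print("MSE:", np.mean((pred - y) ** 2))
```

Rerunning this with smaller values of `n_hidden` shows the trade-off such downsizing methods address: fewer hidden neurons mean a cheaper model, but naively shrinking the layer can degrade prediction accuracy.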