Subjects

Cybernetics / Neuroscience / Backpropagation through time / Recurrent neural network / Speech recognition / N-gram / Backpropagation / Artificial neuron / Connectionism / Neural networks / Computational neuroscience / Science


Feed Forward Pre-training for Recurrent Neural Network Language Models
Siva Reddy Gangireddy, Fergus McInnes and Steve Renals
Centre for Speech Technology Research, University of Edinburgh, UK
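The title names the paper's core idea: pre-train a feed-forward language model, then use its weights to initialise a recurrent language model before fine-tuning. Below is a minimal sketch of that idea; the toy corpus, layer sizes, learning rate, and the one-step-truncated backpropagation are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

np.random.seed(0)

V, H = 5, 8   # toy vocabulary and hidden-layer sizes (illustrative)
lr = 0.1

# Toy corpus: a repeating token sequence standing in for real text.
corpus = [0, 1, 2, 3, 4] * 40

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stage 1: train a feed-forward (bigram-style) LM.
#   h = tanh(W_in[x]);  p(next) = softmax(W_out @ h)
W_in = np.random.randn(V, H) * 0.1
W_out = np.random.randn(V, H) * 0.1

for epoch in range(5):
    for x, y in zip(corpus[:-1], corpus[1:]):
        h = np.tanh(W_in[x])
        p = softmax(W_out @ h)
        d_logits = p.copy()
        d_logits[y] -= 1.0                      # dL/dlogits for cross-entropy
        dh = W_out.T @ d_logits                 # backprop into hidden layer
        W_out -= lr * np.outer(d_logits, h)
        W_in[x] -= lr * dh * (1.0 - h * h)      # tanh' = 1 - h^2

# Stage 2: reuse the trained W_in and W_out to initialise an RNN LM,
# adding only the recurrent weights, then fine-tune.
#   h_t = tanh(W_in[x_t] + W_rec @ h_{t-1});  p = softmax(W_out @ h_t)
W_rec = np.random.randn(H, H) * 0.01            # recurrent weights start near zero

def rnn_loss():
    """Mean cross-entropy of the RNN LM over the toy corpus."""
    h, loss = np.zeros(H), 0.0
    for x, y in zip(corpus[:-1], corpus[1:]):
        h = np.tanh(W_in[x] + W_rec @ h)
        loss -= np.log(softmax(W_out @ h)[y])
    return loss / (len(corpus) - 1)

loss_before = rnn_loss()
for epoch in range(5):
    h = np.zeros(H)
    for x, y in zip(corpus[:-1], corpus[1:]):
        h_prev = h
        h = np.tanh(W_in[x] + W_rec @ h_prev)
        p = softmax(W_out @ h)
        d_logits = p.copy()
        d_logits[y] -= 1.0
        dh = (W_out.T @ d_logits) * (1.0 - h * h)   # truncated BPTT: one step only
        W_out -= lr * np.outer(d_logits, h)
        W_rec -= lr * np.outer(dh, h_prev)
        W_in[x] -= lr * dh
loss_after = rnn_loss()
print(f"RNN loss after pre-trained init + fine-tuning: {loss_before:.3f} -> {loss_after:.3f}")
```

The design point the sketch illustrates: the feed-forward and recurrent models share the same input and output weight matrices, so the only genuinely new parameters at stage 2 are the recurrent weights, which start near zero so the RNN initially behaves like the already-trained feed-forward model.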

Document Date: 2014-10-07 17:17:06



File Size: 307.72 KB


Company

PPL / Wall Street Journal / Neural Networks / Penn Treebank / NVIDIA / GPU

Country

United States

Facility

University of Edinburgh

IndustryTerm

adaptive networks / speech recognition algorithms / real-time recurrent learning / recurrent networks / n-unit recurrent network / feed-forward network / learning algorithm

Organization

MIT / University of Edinburgh / Japan Science and Technology Agency / European Union / National Academy of Sciences / Institut für Informatik / Centre for Speech Technology Research

Person

Fergus McInnes

Position

fixed speaker / feed-forward / speaker / trained feed-forward

Product

GeForce GTX 690 GPUs

ProgrammingLanguage

Python

PublishedMedium

Wall Street Journal / Journal of Machine Learning Research

Technology

learning algorithm / speech recognition algorithms / speech recognition / Neural Network / machine translation / Xeon E5645 processors / Machine Learning / speech recognition system

URL

www.idiap.ch/dataset/ami
