Stanford University / Nanyang Technological University / Harvard University / Prentice Hall
IndustryTerm
autoassociative network / large-scale social network services / multi-layer perceptron networks / approximate minimum mean square error solution / layer network / threshold networks / simpler training algorithm / signal processing / complicated training algorithm / real-time dynamic security assessment / trigonometric networks / parallel distributed processing / image processing / approximation using incremental networks / real-time learning capability / control systems / humanlike cognitive memory systems / network computing / power systems / trained autoassociative network / two-layer network / learning algorithm
Organization
Harvard University / MIT / School of Electrical and Electronic Engineering / Nanyang Technological University / Singapore / Stanford University / Department of Defense / Department of Electrical Engineering
Person
Simon Haykin / Meng-Hiot Lim / Aaron Greenblatt / Bernard Widrow
Position
layer feed-forward / general network concept / probabilistic model for information storage / Corresponding author
ProgrammingLanguage
MATLAB
ProvinceOrState
New Jersey / South Dakota / California / Massachusetts
PublishedMedium
Psychological Review
Technology
neuroscience / No-Prop algorithm / Back-Propagation algorithm / Back-Prop training algorithm / LMS algorithm / training algorithms / neural network / image processing / artificial intelligence / simulation