Scaling Up Natural Gradient by Sparsely Factorizing the Inverse Fisher Matrix
Roger B. Grosse, Ruslan Salakhutdinov
Department of Computer Science, University of Toronto

Document Date: 2015-06-03 15:15:50



File Size: 520.76 KB


City

Courville / Toronto / Lille

Company

Oxford University Press / Neural Information Processing Systems / Neural Networks / MIT Press / GPU / Samsung / Curran Associates Inc.

Country

Tonga / France / Jordan

Facility

University of Toronto

IndustryTerm

online fashion / deep belief networks / optimization algorithms / optimization algorithm / recurrent networks / online learning / implicit matrix-vector products / sparse matrix-vector product / matrix-vector products / computing / natural gradient algorithm / linear systems / curvature matrix-vector products / representation learning algorithms / dynamical systems / large linear systems / approximately solving large linear systems

NaturalFeature

Lake et al.

Organization

University of Toronto / MIT / Oxford University / Department of Computer Science, University of Toronto

Person

James Martens / Y. Singer / Michael I. Jordan / Geoffrey Hinton / Yuri Burda / Ruslan Salakhutdinov / Murray

Position

author / Singer / Inverse Fisher / training feed-forward / Fisher / unstructured Fisher

ProgrammingLanguage

Python

PublishedMedium

Mathematics of Computation / Machine Learning / Journal of Computational Physics / Journal of Machine Learning Research

Technology

representation learning algorithms / second-order optimization algorithm / Machine Learning / online natural gradient algorithm / second-order optimization algorithms / DBMs