
A Fast Learning Algorithm for Deep Belief Nets

Geoffrey E. Hinton; Simon Osindero; Yee-Whye Teh


Abstract:

We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
Published in: Neural Computation (Volume: 18, Issue: 7, July 2006)
Page(s): 1527–1554
Date of Publication: July 2006
Print ISSN: 0899-7667
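
The abstract outlines the central technical idea: a deep belief net is pretrained greedily, one layer at a time, with each restricted Boltzmann machine learning to model the hidden activities of the layer below. The following Python sketch illustrates that layer-wise procedure using 1-step contrastive divergence. It is an illustrative reconstruction, not the authors' code; the function names, layer sizes, learning rate, and other hyperparameters are assumptions chosen only for the example, and the contrastive wake-sleep fine-tuning stage that follows pretraining is not shown.

# Minimal sketch of greedy layer-wise DBN pretraining with CD-1 RBMs.
# All names and hyperparameters are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    # Sample binary units from their Bernoulli probabilities.
    return (rng.random(p.shape) < p).astype(p.dtype)

def train_rbm(data, n_hidden, epochs=10, lr=0.05, batch=100):
    """Train one RBM with 1-step contrastive divergence; return (W, b_vis, b_hid)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        for i in range(0, len(data), batch):
            v0 = data[i:i + batch]
            # Positive phase: hidden probabilities given the data.
            h0 = sigmoid(v0 @ W + b_hid)
            # Negative phase: one step of alternating Gibbs sampling.
            v1 = sigmoid(sample(h0) @ W.T + b_vis)
            h1 = sigmoid(v1 @ W + b_hid)
            # CD-1 update: positive minus negative statistics,
            # an approximation to the log-likelihood gradient.
            W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
            b_vis += lr * (v0 - v1).mean(axis=0)
            b_hid += lr * (h0 - h1).mean(axis=0)
    return W, b_vis, b_hid

def pretrain_dbn(data, layer_sizes):
    """Greedy layer-wise pretraining: each RBM models the hidden
    activities of the layer below, then is frozen."""
    stack, x = [], data
    for n_hidden in layer_sizes:
        W, b_vis, b_hid = train_rbm(x, n_hidden)
        stack.append((W, b_vis, b_hid))
        x = sigmoid(x @ W + b_hid)  # propagate data up to train the next layer
    return stack

# Toy usage: random binary "images" stand in for the handwritten digits.
toy_data = (rng.random((1000, 784)) < 0.1).astype(float)
dbn = pretrain_dbn(toy_data, layer_sizes=[500, 500, 2000])

In this sketch the stack of trained RBMs provides the initial weights for a deep, directed belief net; fine-tuning with a contrastive version of the wake-sleep algorithm, as described in the abstract, would follow this pretraining step.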
