
Neural networks

Autoencoder - undercomplete vs. overcomplete hidden layer


Hugo Larochelle
Département d'informatique
Université de Sherbrooke
hugo.larochelle@usherbrooke.ca
October 17, 2012

AUTOENCODER
Topics: autoencoder, encoder, decoder, tied weights

• Feed-forward neural network trained to reproduce its input at the output layer

• Encoder:  h(x) = g(a(x)) = sigm(b + Wx)
• Decoder:  x̂ = o(â(x)) = sigm(c + W* h(x))
  ‣ tied weights: W* = W^T

• Loss for binary inputs (cross-entropy):
  l(f(x)) = -Σ_k ( x_k log(x̂_k) + (1 - x_k) log(1 - x̂_k) )
• Loss for real-valued inputs (squared error):
  l(f(x)) = ½ Σ_k (x̂_k - x_k)²
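The encoder, tied-weight decoder, and both losses can be sketched in NumPy (a minimal sketch; the layer sizes, random weights, and names `d`, `dh` are illustrative choices, not from the slides):

```python
import numpy as np

def sigm(z):
    # logistic sigmoid, the activation used on the slides
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d, dh = 6, 3                                # input size, hidden size (arbitrary)
W = rng.normal(scale=0.1, size=(dh, d))     # encoder weights
b = np.zeros(dh)                            # encoder bias
c = np.zeros(d)                             # decoder bias

x = rng.integers(0, 2, size=d).astype(float)  # a binary input vector

# encoder: h(x) = g(a(x)) = sigm(b + Wx)
h = sigm(b + W @ x)
# decoder with tied weights W* = W^T: x_hat = sigm(c + W^T h(x))
x_hat = sigm(c + W.T @ h)

# cross-entropy loss for binary inputs
ce = -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))
# squared-error loss for real-valued inputs
se = 0.5 * np.sum((x_hat - x) ** 2)
```

With tied weights the decoder reuses the encoder matrix transposed, so only one weight matrix (plus the two biases) needs to be learned.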
UNDERCOMPLETE HIDDEN LAYER
Topics: undercomplete representation

• Hidden layer is undercomplete if smaller than the input layer
  ‣ hidden layer ‘‘compresses’’ the input
  ‣ will compress well only for the training distribution
• Hidden units will be
  ‣ good features for the training distribution
  ‣ but bad for other types of input
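One way to make the "compresses well only for the training distribution" point concrete is a hypothetical linear, single-hidden-unit autoencoder for 2-D data concentrated along the diagonal (a toy construction with hand-picked weights, not from the slides): projection onto the diagonal reconstructs on-distribution points exactly but loses everything orthogonal to it.

```python
import numpy as np

# the 1-D code direction an undercomplete linear autoencoder would settle on
# for data lying along the diagonal (hypothetical training distribution)
u = np.array([1.0, 1.0]) / np.sqrt(2.0)

def reconstruct(x):
    # encode to a single number (undercomplete: 1 hidden unit for 2 inputs),
    # then decode back through the tied (transposed) weights
    code = u @ x
    return u * code

x_train_like = np.array([2.0, 2.0])   # on the training manifold
x_off = np.array([2.0, -2.0])         # off-distribution input

err_train = np.sum((reconstruct(x_train_like) - x_train_like) ** 2)
err_off = np.sum((reconstruct(x_off) - x_off) ** 2)
```

`err_train` is zero while `err_off` is large: the same code is a good feature for the training distribution but a bad one for other types of input.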
OVERCOMPLETE HIDDEN LAYER
Topics: overcomplete representation

• Hidden layer is overcomplete if greater than the input layer
  ‣ no compression in hidden layer
  ‣ each hidden unit could copy a different input component
• No guarantee that the hidden units will extract meaningful structure
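The "each hidden unit could copy a different input component" failure mode can be demonstrated with a hypothetical linear, tied-weight autoencoder whose weights are hard-wired by hand rather than learned (a sketch, not from the slides):

```python
import numpy as np

d, dh = 3, 5          # overcomplete: more hidden units than inputs
# the first d hidden units each copy one input component verbatim;
# the remaining units are unused (zero weights); linear activations assumed
W = np.zeros((dh, d))
W[:d, :] = np.eye(d)

x = np.array([0.3, -1.2, 0.7])
h = W @ x             # hidden layer: a copy of x, padded with zeros
x_hat = W.T @ h       # tied-weight decoder recovers x exactly

err = np.sum((x_hat - x) ** 2)
```

Reconstruction error is exactly zero, yet the hidden units have extracted no structure at all, which is why overcomplete autoencoders need extra constraints (e.g. regularization or noise) to learn something meaningful.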
