Autoencoders are neural networks whose target output is their own input. They are a type of latent-factor model.
Architecture (see the sketch after this list):
- Includes a bottleneck layer with dimension smaller than the input.
  - The first layers “encode” the input into the bottleneck.
  - The last layers “decode” the bottleneck into a (hopefully valid) reconstruction of the input.
- Can be used as a generative model!
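A minimal sketch of this encode/bottleneck/decode structure in PyTorch; the layer sizes, 784-dimensional inputs, and ReLU activations are illustrative assumptions, not part of the notes:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # Encoder: maps the input down to the low-dimensional bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, bottleneck_dim),
        )
        # Decoder: maps the bottleneck back up to a reconstruction of the input.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 256),
            nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent code (bottleneck)
        return self.decoder(z)   # reconstruction of the input

model = Autoencoder()
loss_fn = nn.MSELoss()           # the target is the input itself
x = torch.randn(16, 784)         # dummy batch standing in for real data
loss = loss_fn(model(x), x)
```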
 
 
Applications:
- Superresolution
- Noise removal (see the denoising sketch after this list)
- Compression
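For noise removal, one common recipe (sketched here with an assumed noise level, model, and optimizer, none of which come from the notes) is to feed a corrupted input and train the network to reconstruct the clean version:

```python
import torch
import torch.nn as nn

# Small illustrative model; the Autoencoder class from the sketch above works too.
model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 784))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.randn(16, 784)                   # stand-in for real (clean) data
noisy = clean + 0.3 * torch.randn_like(clean)  # corrupted copy fed to the network

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)        # reconstruct the clean input
    loss.backward()
    optimizer.step()
```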
 
Relationship to principal component analysis (PCA):
- With squared error loss and a linear network (no non-linear activations), the autoencoder is equivalent to PCA (see the comparison sketch below).
  - The size of the bottleneck layer gives the number of latent factors in PCA.
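A rough numerical check of this equivalence, assuming scikit-learn and PyTorch are available; the synthetic data and hyperparameters are illustrative. After enough training, the linear autoencoder's squared reconstruction error should approach the PCA error for the same number of components:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)).astype(np.float32)
X -= X.mean(axis=0)                 # center the data, as PCA does
k = 5                               # bottleneck size = number of latent factors

# PCA reconstruction error with k components.
pca = PCA(n_components=k).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))
print("PCA reconstruction MSE:", np.mean((X - X_pca) ** 2))

# Linear autoencoder: no non-linearities, squared-error loss.
model = nn.Sequential(nn.Linear(20, k, bias=False),
                      nn.Linear(k, 20, bias=False))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
Xt = torch.from_numpy(X)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(Xt), Xt)
    loss.backward()
    opt.step()
# After training, this loss should be close to the PCA error above.
print("Linear AE reconstruction MSE:", loss.item())
```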