Reconstruction of MNIST digits during VAE training using a convolution/deconvolution neural network, 50 latent dimensions, 2 epochs shown
A library that uses Julia's Flux library to implement Variational Autoencoders (VAEs)
- main.jl - runs the model on the MNIST dataset; this will be dropped later
- Model.jl - the model; for now it is just a basic VAE (a minimal sketch is included below)
- Dataset.jl - the dataset interface
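The sketch below illustrates what a basic dense VAE in Flux might look like: an encoder producing the mean and log standard deviation of q(z|x), the reparameterisation trick, and a negative-ELBO loss. The layer sizes and the names `encoder`, `μ_layer`, `logσ_layer`, `decoder`, and `vae_loss` are illustrative assumptions, not the actual contents of Model.jl.

```julia
using Flux
using Flux.Losses: logitbinarycrossentropy

# Illustrative sizes (hypothetical; Model.jl may differ).
input_dim  = 28 * 28   # flattened MNIST digit
hidden_dim = 400
latent_dim = 50

# Encoder maps an image to the parameters of q(z|x) = N(μ, σ²).
encoder    = Chain(Flux.flatten, Dense(input_dim, hidden_dim, relu))
μ_layer    = Dense(hidden_dim, latent_dim)
logσ_layer = Dense(hidden_dim, latent_dim)

# Decoder maps a latent sample back to pixel logits.
decoder = Chain(Dense(latent_dim, hidden_dim, relu), Dense(hidden_dim, input_dim))

# Reparameterisation trick: z = μ + σ ⊙ ε with ε ~ N(0, I), keeping sampling differentiable.
reparameterize(μ, logσ) = μ .+ exp.(logσ) .* randn(Float32, size(logσ))

# Negative ELBO: reconstruction error plus KL(q(z|x) ‖ N(0, I)), averaged over the batch.
function vae_loss(x)
    h = encoder(x)
    μ, logσ = μ_layer(h), logσ_layer(h)
    x̂ = decoder(reparameterize(μ, logσ))
    rec = logitbinarycrossentropy(x̂, Flux.flatten(x); agg = sum)
    kl  = -0.5f0 * sum(1f0 .+ 2f0 .* logσ .- μ .^ 2 .- exp.(2f0 .* logσ))
    (rec + kl) / size(x)[end]
end
```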
- Can the KL divergence and reconstruction error be better balanced? (See the sketch after this list.)
- Can a VAE be used as a pure clustering method? How else could the latent-space representation be useful?
- Is it possible (in Julia) to reconstruct the reverse transformation (the decoder) for a given encoder?
- Can a VAE be used for columnar data with missing inputs?
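On the balancing question, one common approach (not currently implemented here) is a β-weighted ELBO in the spirit of β-VAE, optionally with KL warm-up, where the weight on the KL term is annealed from 0 to 1 early in training. The sketch below reuses the hypothetical `encoder`, `decoder`, and `reparameterize` definitions from above; `kl_weight` and `warmup_steps` are illustrative assumptions.

```julia
# β / KL warm-up weight: ramps linearly from 0 to 1 over the first `warmup_steps` updates.
# β < 1 favours reconstruction quality; β > 1 favours a smoother, more regular latent space.
kl_weight(step; warmup_steps = 5_000) = min(1f0, Float32(step) / warmup_steps)

function weighted_vae_loss(x, step)
    h = encoder(x)
    μ, logσ = μ_layer(h), logσ_layer(h)
    x̂ = decoder(reparameterize(μ, logσ))
    rec = logitbinarycrossentropy(x̂, Flux.flatten(x); agg = sum)
    kl  = -0.5f0 * sum(1f0 .+ 2f0 .* logσ .- μ .^ 2 .- exp.(2f0 .* logσ))
    (rec + kl_weight(step) * kl) / size(x)[end]
end
```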
- Tutorial on VAE
- Tensorflow VAE
- Flux.jl
- Flux VAE
- Auto-Encoding Variational Bayes
- Stochastic Backpropagation and Approximate Inference in Deep Generative Models
Conv/deconv VAE during 4 epochs of MNIST training, 10 latent dimensions. MNIST digits chosen at random from the test set.