benchmark_VAE
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Hi Clément: Great work on introducing the VAE-oriented library! You have made it modular, with predefined models, pipelines, and so forth. Can you share brief details on how the...
### Discussed in https://github.com/clementchadebec/benchmark_VAE/discussions/39

Originally posted by **osbm**, July 31, 2022: Let's say I want to encode SMILES strings into the latent space. Is it possible to use a sequential algorithm...
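A minimal sketch of one way this could look, assuming pythae's documented custom-architecture interface (`BaseEncoder` returning a `ModelOutput` with `embedding` and `log_covariance`); the tokenization, vocabulary size, and dimensions below are placeholders, not part of the library:

```python
# Hypothetical sketch: a GRU encoder over tokenized SMILES, plugged into pythae's
# custom-encoder interface. Vocab size, embedding and hidden dims are placeholders.
import torch
import torch.nn as nn
from pythae.models.nn import BaseEncoder
from pythae.models.base.base_utils import ModelOutput

class SmilesGRUEncoder(BaseEncoder):
    def __init__(self, model_config, vocab_size=64, emb_dim=128, hidden_dim=256):
        BaseEncoder.__init__(self)
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.fc_mu = nn.Linear(hidden_dim, model_config.latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, model_config.latent_dim)

    def forward(self, x):
        # x: (batch, seq_len) integer token ids for the SMILES strings
        emb = self.embedding(x.long())
        _, h = self.gru(emb)                  # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        return ModelOutput(
            embedding=self.fc_mu(h),          # mean of q(z|x)
            log_covariance=self.fc_logvar(h), # log-variance of q(z|x)
        )
```

A matching decoder would subclass `BaseDecoder` and return `ModelOutput(reconstruction=...)`; both can then be passed to the model constructor, e.g. `VAE(model_config, encoder=..., decoder=...)`.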
It would be great to see the Poincaré VAE (or a similar hyperbolic-geometry VAE) implemented in pythae! Paper: https://arxiv.org/abs/1901.06033 Code: https://github.com/emilemathieu/pvae
Some models apply scheduling or annealing internally (e.g., KL warm-up or temperature annealing) based on the current step index. What is the correct way to...
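For illustration, a self-contained sketch of a linear KL warm-up in plain PyTorch (not a pythae API): the KL term is scaled by a beta that ramps from 0 to 1 over a chosen number of steps, with the step counter maintained by the caller.

```python
# Illustrative sketch only: a linear KL warm-up schedule for a Gaussian VAE loss.
import torch

def kl_beta(step: int, warmup_steps: int = 10_000) -> float:
    """Beta ramps linearly from 0 to 1, then stays at 1."""
    return min(1.0, step / warmup_steps)

def warmup_vae_loss(recon_loss, mu, log_var, step):
    # KL(q(z|x) || N(0, I)) summed over latent dimensions
    kld = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)
    beta = kl_beta(step)
    return (recon_loss + beta * kld).mean(), beta
```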
Hello, thank you for your excellent work! As the title says, I would like to know how to use this library with a custom dataset. I am new to machine learning and wanted...
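A short sketch following the pattern in the repository README, where the `TrainingPipeline` accepts numpy arrays or torch tensors directly; the file names, input shape, and hyperparameters below are placeholders for your own data.

```python
# Sketch: train a VAE on custom data by passing arrays to the TrainingPipeline.
import numpy as np
from pythae.models import VAE, VAEConfig
from pythae.trainers import BaseTrainerConfig
from pythae.pipelines import TrainingPipeline

train_data = np.load("my_train_images.npy")  # e.g. shape (N, 1, 28, 28), values in [0, 1]
eval_data = np.load("my_eval_images.npy")

model_config = VAEConfig(input_dim=(1, 28, 28), latent_dim=16)
model = VAE(model_config=model_config)

training_config = BaseTrainerConfig(
    output_dir="my_model",
    num_epochs=50,
    learning_rate=1e-3,
)

pipeline = TrainingPipeline(training_config=training_config, model=model)
pipeline(train_data=train_data, eval_data=eval_data)
```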
**Feature request** It would be nice to see this variant (https://arxiv.org/abs/2205.07547) of the SQ-VAE implemented in the library.
**Is your feature request related to a problem? Please describe.** As I train models, I would like to be able to easily share them with other people and document them...
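The README describes a Hugging Face Hub integration for exactly this kind of sharing; a hedged sketch assuming the `push_to_hf_hub` / `AutoModel.load_from_hf_hub` calls shown there, with a placeholder repo name.

```python
# Sketch of the Hub workflow from the README; "my-username/my-vae" is a placeholder,
# and `huggingface_hub` must be installed (with a valid login) for the push to work.
from pythae.models import AutoModel

# trained_model: the pythae model you just trained with the TrainingPipeline
trained_model.push_to_hf_hub(hf_hub_path="my-username/my-vae")

# Anyone can then reload the same model (weights + config) with one call:
reloaded_model = AutoModel.load_from_hf_hub(hf_hub_path="my-username/my-vae")
```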
Hi, thanks for the great work. I would like to know how to evaluate the generation performance of models. Specifically, I am interested in how to calculate FID and other...
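One possible recipe, sketched under assumptions: generate samples with a pythae sampler (assuming `sample(..., return_gen=True)` returns the generated tensors) and score them against real images with `torchmetrics`' FID implementation; `trained_model` and `real_images` stand for your own model and data.

```python
# Hedged sketch: FID between real images and samples drawn from a trained pythae model.
# Requires `torchmetrics`; FID expects 3-channel uint8 images, hence the conversion.
import torch
from pythae.samplers import NormalSampler
from torchmetrics.image.fid import FrechetInceptionDistance

sampler = NormalSampler(model=trained_model)            # trained_model: a pythae VAE
gen = sampler.sample(num_samples=1000, return_gen=True)

def to_uint8_rgb(x):
    x = (x.clamp(0, 1) * 255).to(torch.uint8)
    return x.repeat(1, 3, 1, 1) if x.shape[1] == 1 else x

fid = FrechetInceptionDistance(feature=2048)
fid.update(to_uint8_rgb(real_images), real=True)        # real_images: (N, C, H, W) tensor
fid.update(to_uint8_rgb(gen.cpu()), real=False)
print("FID:", fid.compute().item())
```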
I was wondering whether it would be possible to add posterior sampling to the PVAE model. The problem with prior sampling is that the embedding space can be sparse and...
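As context, a manual workaround sketch for a standard Gaussian-posterior VAE (not a library feature, and not directly valid for PVAE, whose hyperbolic posterior would need its wrapped-normal sampling instead): encode real data points, draw z from q(z|x), and decode.

```python
# Workaround sketch for a Euclidean Gaussian VAE: sample from the posterior q(z|x)
# of real inputs instead of the prior, then decode. Not correct as-is for PVAE,
# whose posterior lives on the Poincaré ball.
import torch

@torch.no_grad()
def posterior_samples(model, x):
    """Encode a batch x, draw z ~ q(z|x), and decode it back."""
    enc = model.encoder(x)
    mu, log_var = enc.embedding, enc.log_covariance
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn_like(std)      # reparameterised posterior sample
    return model.decoder(z).reconstruction
```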
Hi: I want to get the training log, train-loss plot, validation-loss plot, and learning-rate plot using the WandB callback, but it seems that I can only get PART of...
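For reference, a sketch of the callback wiring as shown in the README; "my_project" and "my_entity" are placeholders, `wandb` must be installed and logged in, and which curves actually appear in the dashboard depends on what the trainer logs each epoch.

```python
# Sketch following the README's callback pattern: set up the W&B callback and
# pass it to the TrainingPipeline. Project/entity names below are placeholders.
from pythae.trainers.training_callbacks import WandbCallback

wandb_cb = WandbCallback()
wandb_cb.setup(
    training_config=training_config,  # the BaseTrainerConfig used for training
    model_config=model_config,        # the model's config (e.g. VAEConfig)
    project_name="my_project",
    entity_name="my_entity",
)

pipeline(
    train_data=train_data,
    eval_data=eval_data,
    callbacks=[wandb_cb],
)
```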