
Gaussian mixture params/training sample size

[Open] kevinykuo opened this issue 5 years ago • 2 comments

Any tips on optimizing performance/training time for the Bayesian Gaussian mixture training phase? Could we consider exposing its parameters, and perhaps adding an option to subsample the training set for that step? This piece doesn't seem to scale well to larger datasets.
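For illustration, a minimal sketch of what exposing those parameters could look like; the helper and its arguments (`max_components`, `max_iter`, `tol`) are hypothetical and not part of the current CTGAN API:

```python
from sklearn.mixture import BayesianGaussianMixture

def fit_column_gm(column_values, max_components=10, max_iter=100, tol=1e-3):
    """Fit a Bayesian GM for one continuous column (hypothetical helper)."""
    gm = BayesianGaussianMixture(
        n_components=max_components,  # upper bound on the number of modes
        weight_concentration_prior_type='dirichlet_process',
        max_iter=max_iter,            # fewer EM iterations -> faster fit
        tol=tol,                      # looser tolerance -> earlier stop
    )
    gm.fit(column_values.reshape(-1, 1))  # sklearn expects a 2D array
    return gm
```

Lowering `max_iter` or loosening `tol` trades fit quality for speed, and since this fit is repeated per continuous column, it tends to dominate preprocessing time on big tables.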

kevinykuo avatar Nov 14 '19 05:11 kevinykuo

@kevinykuo I totally agree on exposing as many hyperparameters as possible, but I have some doubts about the subsampling, since that is something the user could easily do outside of CTGAN.

Would you mind editing the issue title and description to make this one focus only on the GM Hyperparams, so we can start working on it right away, and opening another one to discuss the subsampling separately? I can also do the edits myself, if you prefer. Let me know!

csala avatar Dec 19 '19 19:12 csala

Currently the user has no control over subsampling for the VGM (variational Gaussian mixture) training, since it's hardcoded in https://github.com/DAI-Lab/CTGAN/blob/fd507166f132381bc62b60f6457028f5bcaa904c/ctgan/synthesizer.py#L114-L115

To clarify: the non-scalable piece is fitting the BayesianGaussianMixture, so I'm proposing to subsample the data only when fitting the training-data transformation; when we train the actual GAN, we would still use the full training data. A rough sketch of the idea is below.
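A minimal sketch of that idea, assuming a numeric NumPy column; the function name and the `subsample_size` default are made up for illustration:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def fit_gm_on_subsample(column_values, subsample_size=50_000, seed=0):
    """Fit the Bayesian GM on a random subsample to bound fit time.

    The fitted model is then used to transform the FULL column, so the
    GAN itself still trains on all of the data.
    """
    rng = np.random.default_rng(seed)
    if len(column_values) > subsample_size:
        idx = rng.choice(len(column_values), size=subsample_size, replace=False)
        fit_values = column_values[idx]
    else:
        fit_values = column_values
    gm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type='dirichlet_process',
    )
    gm.fit(fit_values.reshape(-1, 1))
    # gm.predict_proba / gm.means_ etc. are then applied to the full column.
    return gm
```

Since the EM fit iterates over every row on each iteration, capping the fit set bounds the transformer's cost, while transforming the full column afterwards is a comparatively cheap single pass.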

These hyperparameters seem related enough to be tracked together, but feel free to organize as you see fit!

kevinykuo avatar Dec 19 '19 21:12 kevinykuo