persephone
Experiment with different batch sizes on CPU machines to arrive at a default for the docker image
I'm not entirely sure what this issue is referring to. What are the requirements here?
The example code should use hyperparameters that let the models be trained on people's personal computers. Increasing the batch size may make training run faster, but it also uses more memory; it would be good to find the right tradeoff.
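One way to pick a default would be a small benchmark that times one training pass and records peak memory for several candidate batch sizes. The sketch below is illustrative only: it uses a toy gradient-descent step as a stand-in for the real persephone training loop (none of these names come from the persephone codebase), and measures Python-level peak allocation with `tracemalloc`.

```python
# Hypothetical batch-size benchmark: the training step here is a toy
# stand-in, not persephone's actual model.
import random
import time
import tracemalloc


def toy_training_epoch(data, batch_size):
    """One pass over the data in mini-batches (toy gradient-descent step)."""
    w = 0.0
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        # Toy "gradient": mean residual of the batch.
        grad = sum(x - w for x, _ in batch) / len(batch)
        w += 0.01 * grad
    return w


def benchmark(batch_sizes, n_examples=10_000):
    """Return wall-clock time and peak allocated bytes per batch size."""
    random.seed(0)
    data = [(random.random(), random.random()) for _ in range(n_examples)]
    results = {}
    for bs in batch_sizes:
        tracemalloc.start()
        t0 = time.perf_counter()
        toy_training_epoch(data, bs)
        elapsed = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        results[bs] = {"seconds": elapsed, "peak_bytes": peak}
    return results


if __name__ == "__main__":
    for bs, stats in benchmark([16, 32, 64, 128]).items():
        print(f"batch={bs:4d}  time={stats['seconds']:.4f}s  "
              f"peak={stats['peak_bytes']}B")
```

Running this with the real training step substituted in would show how time and memory move in opposite directions as the batch size grows, which is the tradeoff the default for the Docker image needs to balance.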