Support multiple ML models
Seems like we'll have at least two more models added, and one of them supports several different model types (rnn, gpt, vae, etc.).
We need to think about how to schedule training for those. I don't want to introduce a new cron for every single model type, but at the same time we can't afford to train all of them at the same time -- or maybe we can? If we stagger the runs, each model will be updated less often, but that's fine. And for the initial deployment we may temporarily create more GPU bots.
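Rough idea for the training side: a single cron handler that rotates through the registered model types and only kicks off the stalest ones each run, so we never saturate the GPU bots. All the names here (MODEL_TYPES, schedule_training_task, MAX_CONCURRENT_TRAININGS) are placeholders for illustration, not existing code:

```python
# Sketch only: one cron entry point instead of one cron per model type.
import datetime

MODEL_TYPES = ['rnn', 'gpt', 'vae']  # Extend as new model types land.
MAX_CONCURRENT_TRAININGS = 1  # Cap GPU bot load per cron run.

# Placeholder persistence; in practice this would live in the datastore.
_last_trained = {model_type: datetime.datetime.min for model_type in MODEL_TYPES}


def schedule_training_task(model_type):
  """Placeholder for pushing a training task onto the GPU bot queue."""
  print('Scheduling training for %s.' % model_type)
  _last_trained[model_type] = datetime.datetime.utcnow()


def training_cron():
  """Single cron handler: train the least recently trained models first,
  capped so one pass never trains everything at once."""
  stale_first = sorted(MODEL_TYPES, key=lambda t: _last_trained[t])
  for model_type in stale_first[:MAX_CONCURRENT_TRAININGS]:
    schedule_training_task(model_type)


if __name__ == '__main__':
  training_cron()
```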
Similar problem on the generation side, though this should be easier: we can have multiple strategies instead of the single ML_RNN that we have right now.
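For the generation side, something like a small registry of strategies could replace the hard-coded ML_RNN path. The registry/decorator and the generate_inputs signature below are just illustrative, not the current strategy API:

```python
# Sketch only: register one generation function per strategy name.
GENERATION_STRATEGIES = {}


def register_strategy(name):
  """Decorator that registers a generation function under a strategy name."""
  def wrapper(func):
    GENERATION_STRATEGIES[name] = func
    return func
  return wrapper


@register_strategy('ml_rnn')
def generate_with_rnn(corpus_dir, output_dir, count):
  """The existing RNN-based generation would plug in here."""
  return []


@register_strategy('ml_gpt')
def generate_with_gpt(corpus_dir, output_dir, count):
  """A future GPT-based generator would plug in here."""
  return []


def generate_inputs(strategy_name, corpus_dir, output_dir, count):
  """Dispatch to whichever strategy the fuzz task selected."""
  strategy = GENERATION_STRATEGIES.get(strategy_name)
  if not strategy:
    raise ValueError('Unknown generation strategy: %s' % strategy_name)
  return strategy(corpus_dir, output_dir, count)


if __name__ == '__main__':
  # Example: the fuzz task would pick a strategy name based on its weights.
  generate_inputs('ml_rnn', '/tmp/corpus', '/tmp/output', count=10)
```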
@mbarbella-chromium Marty, do you want to own this? Once GradientFuzz starts working, it might make sense to figure out a good solution for adding any new models in the future.
Yeah, that makes sense to me.