
On-demand caching/reusing of the training set for algorithm/hyperparameter exploration

Open johann-petrak opened this issue 6 years ago • 0 comments

It would be good to have some way to run the training PR (Processing Resource) on a cached training set, changing only the training algorithm or its hyperparameters. This should work even for Mallet-based corpus representations.

This could be done by defining a standard way to serialize the in-memory corpus, then adding a PR which is not a language analyser and runs the training directly from the serialized data. That PR would essentially only need the data directory, the algorithm, and the algorithm parameters as inputs.
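A minimal sketch of the idea, with invented names (neither `TrainingSet`, `save`, `load`, nor `train` are part of the actual LearningFramework API): the expensive feature extraction is done once and serialized to disk, after which training can be re-run repeatedly with different algorithms and hyperparameters from the cached data alone.

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch only: cache a feature-extracted training set once,
// then re-run training with different algorithms/hyperparameters without
// re-processing the corpus. All names are illustrative, not the real API.
public class CachedTrainingDemo {

    // Stand-in for the in-memory corpus representation:
    // feature vectors plus their class labels.
    static class TrainingSet implements Serializable {
        private static final long serialVersionUID = 1L;
        final List<double[]> features = new ArrayList<>();
        final List<String> labels = new ArrayList<>();
    }

    // Serialize the training set so it can be reused across runs.
    static void save(TrainingSet ts, File f) throws IOException {
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(ts.features);
            out.writeObject(ts.labels);
        }
    }

    // Reload a previously cached training set.
    @SuppressWarnings("unchecked")
    static TrainingSet load(File f) throws IOException, ClassNotFoundException {
        TrainingSet ts = new TrainingSet();
        try (ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(f))) {
            ts.features.addAll((List<double[]>) in.readObject());
            ts.labels.addAll((List<String>) in.readObject());
        }
        return ts;
    }

    // Stand-in for the proposed non-language-analyser training PR:
    // it only needs the cached data, an algorithm name, and parameters.
    static String train(TrainingSet ts, String algorithm,
                        Map<String, String> params) {
        return algorithm + " trained on " + ts.features.size()
                + " instances with " + params;
    }

    public static void main(String[] args) throws Exception {
        TrainingSet ts = new TrainingSet();
        ts.features.add(new double[]{1.0, 0.0});
        ts.labels.add("yes");

        File cache = File.createTempFile("trainingset", ".ser");
        cache.deleteOnExit();
        save(ts, cache);                    // expensive step, done once

        TrainingSet reloaded = load(cache); // cheap reuse for exploration
        System.out.println(train(reloaded, "SVM", Map.of("C", "1.0")));
        System.out.println(train(reloaded, "RandomForest", Map.of("trees", "100")));
    }
}
```

The point of the sketch is the separation of concerns: everything that depends on the documents (annotation, feature extraction) happens before `save`, so the training step never needs the corpus at all.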

johann-petrak · Jul 05 '18 09:07