How to reduce model training time
```python
# ...
model = KRG()
model.set_training_values(inputsincolumns, outputsincolumns)
model.train()
# ...
```

The data: `inputsincolumns` has 6 rows and 3000 columns, and `outputsincolumns` has 7 rows and 3000 columns. Training takes almost 5 minutes. Is there some way to increase training speed?
I suppose you meant 3000 rows with 6 or 7 columns, since the number of rows (samples) should be the same for inputs and outputs. You can try removing some training samples and using them to validate the surrogate, to check whether it is still accurate enough.
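A minimal sketch of that idea, assuming dummy stand-in arrays for the real data (the names `xt`, `yt` and the split sizes below are illustrative, not taken from the question):

```python
import numpy as np
from smt.surrogate_models import KRG

# Dummy stand-ins for the real data: 3000 samples, 6 inputs, 1 output column
rng = np.random.default_rng(0)
xt = rng.random((3000, 6))
yt = np.sum(xt, axis=1, keepdims=True)

# Train on a subset, keep the rest for validation
idx = rng.permutation(xt.shape[0])
train_idx, valid_idx = idx[:1000], idx[1000:]

model = KRG(print_global=False)
model.set_training_values(xt[train_idx], yt[train_idx])
model.train()

# Check the held-out points before deciding the reduced training set is enough
y_pred = model.predict_values(xt[valid_idx])
rmse = float(np.sqrt(np.mean((y_pred - yt[valid_idx]) ** 2)))
print("validation RMSE:", rmse)
```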
KRG can predict only 1 output, so if you have multiple outputs you have to train a surrogate for each output.
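A hedged sketch of that per-output loop (array names and dummy shapes are assumed from the question, not from actual code):

```python
import numpy as np
from smt.surrogate_models import KRG

# Dummy data with the shapes from the question: 3000 samples, 6 inputs, 7 outputs
xt = np.random.rand(3000, 6)
yt = np.random.rand(3000, 7)

# One Kriging surrogate per output column
models = []
for j in range(yt.shape[1]):
    m = KRG(print_global=False)
    m.set_training_values(xt, yt[:, [j]])  # keep the 2-D (n, 1) shape
    m.train()
    models.append(m)

# Predicting all 7 outputs at a new point
x_new = np.random.rand(1, 6)
y_new = np.hstack([m.predict_values(x_new) for m in models])
```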
With 3000 points and 7 six-dimensional models, training can indeed take a while. You could reduce the Kriging option "n_start", but at the cost of decreased performance. https://smt.readthedocs.io/en/latest/_src_docs/surrogate_models/krg.html
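For example (a minimal sketch; `n_start` controls the number of multistart runs of the internal hyperparameter optimization, and the value 1 here is just illustrative):

```python
from smt.surrogate_models import KRG

# Fewer restarts of the hyperparameter optimization => faster training,
# at the risk of a worse likelihood optimum and a less accurate model
model = KRG(n_start=1)
```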
Another tip (not recommended): modify the Cobyla iteration limit ("niter") in krg_based.py (l. 1070), changing

    limit, _rhobeg = 15 * len(self.options["theta0"]), 0.5

to

    limit, _rhobeg = 5 * len(self.options["theta0"]), 0.5
Ideally, it would be great to fix TNC, which is a gradient-based optimizer, to train continuous models much faster (#294).