Add ADAM optimization in STM for Gaussian Process training
Reason
Currently, Gaussian Process (GP) training mostly uses conjugate gradient (CG) optimization through TAO. An ADAM optimization option is being added because it is popular for GP training and permits a stochastic selection of the training data at each iteration, reducing the training cost. GP training cost scales as O(N^3), where N is the size of the training data set.
Design
An ADAM optimization option will be added to the GaussianProcessHandler class in STM.
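To illustrate the intended approach, the sketch below shows a bare ADAM update combined with stochastic minibatch selection of the training data, which is what makes the per-iteration cost independent of N. This is a minimal standalone example with a toy loss standing in for the GP marginal-likelihood gradient; the function and variable names are illustrative and do not reflect the actual GaussianProcessHandler API.

```python
import numpy as np

# Hypothetical helper sketching the ADAM update rule; not the STM implementation.
def adam_step(params, grads, m, v, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """Perform one ADAM update and return the new parameters and moment state."""
    m = b1 * m + (1.0 - b1) * grads          # first-moment (mean) estimate
    v = b2 * v + (1.0 - b2) * grads ** 2     # second-moment (uncentered variance) estimate
    m_hat = m / (1.0 - b1 ** t)              # bias corrections for the zero-initialized moments
    v_hat = v / (1.0 - b2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v

# Stochastic selection: each iteration draws a minibatch of the N training
# points, so the per-step gradient cost scales with the batch size, not N.
rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, size=200)   # stand-in training data (N = 200)
theta = np.array([5.0])                      # hyperparameter being tuned
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 501):
    batch = rng.choice(x_train, size=32, replace=False)
    # Toy loss gradient: pulls theta toward the batch mean, standing in for
    # the gradient of the negative log marginal likelihood on a minibatch.
    grad = 2.0 * (theta - batch.mean())
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
```

In the actual handler, `grad` would be the gradient of the GP objective with respect to the covariance hyperparameters, evaluated on the sampled subset of training points.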
Impact
No anticipated impacts to existing objects.