Deep-Kernel-GP
Optimisation does not converge
Could you comment on why the optimisation in example.py (1d) does not converge? The final GP is nevertheless a fairly good approximation.
Thank you!
```
Epoch 9988: 0 %. Loss: -20.101368667360404
Epoch 9989: 0 %. Loss: -38.94774396791739
Epoch 9990: 0 %. Loss: -39.56470825273037
Epoch 9991: 0 %. Loss: -12.318895364429565
Epoch 9992: 0 %. Loss: -19.47904221404525
Epoch 9993: 0 %. Loss: -36.77490064730489
Epoch 9994: 0 %. Loss: 43.20385940066373
Epoch 9995: 0 %. Loss: 20.232737011431396
Epoch 9996: 0 %. Loss: -34.98050499495804
Epoch 9997: 0 %. Loss: -32.932921142632466
Epoch 9998: 0 %. Loss: 739.1653327084057
Epoch 9999: 0 %. Loss: -39.727655479961854
Epoch 10000: 0 %. Loss: -35.62156909134845
```
Because it's using the Adam optimizer: with a large step size its updates keep overshooting, so the loss oscillates around a good solution instead of settling. Using a lower learning rate or a different optimizer should do the trick (but might give an overfit).
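For concreteness, here is a minimal self-contained sketch of this failure mode. It is not the repo's code (example.py there works differently); the toy model, data, and `nll` method are all invented for illustration. It trains a small deep-kernel GP by minimising the negative log marginal likelihood with Adam; with `lr=1e-1` the loss bounces around like the log above, while `lr=1e-3` settles.

```python
# Hypothetical toy example, not the repo's implementation: a tiny
# deep-kernel GP (MLP feature extractor + RBF kernel) fit with Adam.
import torch

torch.manual_seed(0)
x = torch.linspace(-3, 3, 50).unsqueeze(1)
y = torch.sin(2 * x) + 0.1 * torch.randn_like(x)

class DeepKernelGP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # small network that warps the 1d input into a 2d feature space
        self.net = torch.nn.Sequential(
            torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
        self.log_ls = torch.nn.Parameter(torch.zeros(1))       # kernel lengthscale
        self.log_noise = torch.nn.Parameter(torch.tensor(-2.0))  # noise variance

    def nll(self, x, y):
        z = self.net(x)
        d2 = torch.cdist(z, z).pow(2)
        K = torch.exp(-0.5 * d2 / torch.exp(self.log_ls) ** 2)
        K = K + torch.exp(self.log_noise) * torch.eye(len(x))
        L = torch.linalg.cholesky(K)
        alpha = torch.cholesky_solve(y, L)
        # negative log marginal likelihood, up to an additive constant
        return 0.5 * (y.T @ alpha).squeeze() + torch.log(torch.diag(L)).sum()

model = DeepKernelGP()
# lr=1e-1 reproduces the oscillating/spiking loss; 1e-3 converges smoothly
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2000):
    opt.zero_grad()
    loss = model.nll(x, y)
    loss.backward()
    opt.step()
```

Another common trick is to keep Adam for the early epochs and then drop the learning rate for the final ones; that tends to remove the spikes without changing the fit much.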