modAL
Does it make sense to optimize Gaussian Process hyperparameters during active learning?
Hi,
In the active learning for regression example, we use Gaussian processes. While the sklearn version seems to keep its length scale and noise parameters static (maybe I am doing something wrong), other implementations allow these to be optimized with gradient descent (e.g. GPyTorch).
Under batch learning circumstances we would have the whole training set, and tuning the hyperparameters to maximize the log likelihood makes sense, but does it also make sense to do so while performing active learning, where the datasets are really small?
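For context, here is a minimal sketch of the kind of loop I have in mind (the toy data, kernel choice, and query budget are just placeholders). As far as I understand, sklearn's `GaussianProcessRegressor` re-optimizes the kernel hyperparameters by maximizing the marginal likelihood on every call to `.fit()` unless `optimizer=None` is passed, so each `teach()` step should update them:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from modAL.models import ActiveLearner

# Toy 1-D data standing in for the real pool.
rng = np.random.default_rng(0)
X_pool = np.linspace(0, 10, 200).reshape(-1, 1)
y_pool = np.sin(X_pool).ravel() + rng.normal(0, 0.1, X_pool.shape[0])

def GP_regression_std(regressor, X):
    # Query the point with the largest predictive standard deviation.
    _, std = regressor.predict(X, return_std=True)
    query_idx = np.argmax(std)
    return query_idx, X[query_idx]

# Length scale and noise level are re-fitted by marginal likelihood
# maximization on every .fit() call unless optimizer=None is passed.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
regressor = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)

initial_idx = rng.choice(len(X_pool), size=5, replace=False)
learner = ActiveLearner(
    estimator=regressor,
    query_strategy=GP_regression_std,
    X_training=X_pool[initial_idx],
    y_training=y_pool[initial_idx],
)

for _ in range(10):
    query_idx, _ = learner.query(X_pool)
    learner.teach(X_pool[[query_idx]], y_pool[[query_idx]])
    # kernel_ holds the hyperparameters fitted on the data seen so far.
    print(learner.estimator.kernel_)
```

Printing `kernel_` after each `teach()` shows how the fitted length scale and noise level move as points are added, which is essentially the question: with only a handful of labelled points, is that re-optimization helpful or just noisy?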
Hi!
I have absolutely zero knowledge regarding this question :) I don't want to state something which is false, so I'd rather not give any advice here. Instead, I'll keep this issue open, hoping that someone will know the answer.
Ok, thanks.