
Does it make sense to optimize Gaussian Process hyperparameters during active learning?

Open tumble-weed opened this issue 5 years ago • 2 comments

Hi,

In the active learning for regression example, we use Gaussian processes. The sklearn version seems to keep its length-scale and noise parameters static (maybe I am doing something wrong), while other implementations, e.g. GPyTorch, allow these to be tuned by gradient descent.

Under batch learning we have the whole training set, so tuning the hyperparameters to maximize the log likelihood makes sense. But does it also make sense to do so while performing active learning, where the labelled dataset is really small?
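For context, here is a minimal sketch of the kind of loop I have in mind (the toy data, kernel, and query strategy are just placeholders, not taken from the modAL regression example):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from modAL.models import ActiveLearner
from modAL.disagreement import max_std_sampling

# Toy 1D regression pool (illustrative only)
X_pool = np.random.uniform(0, 10, size=(200, 1))
y_pool = np.sin(X_pool).ravel() + 0.1 * np.random.randn(200)

# sklearn re-optimizes the kernel hyperparameters (length scale, noise)
# at every .fit() call; passing optimizer=None would freeze them instead.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
regressor = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=3)

learner = ActiveLearner(
    estimator=regressor,
    query_strategy=max_std_sampling,
    X_training=X_pool[:5], y_training=y_pool[:5],
)

for _ in range(20):
    query_idx, _ = learner.query(X_pool)
    # teach() refits the GP, so the hyperparameters get re-tuned
    # on whatever (very small) labelled set exists at this point.
    learner.teach(X_pool[query_idx], y_pool[query_idx])
```

So the question is whether re-fitting the hyperparameters inside this loop, with only a handful of labelled points, is sensible or whether they should be kept fixed until more data is collected.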

tumble-weed avatar Nov 02 '20 01:11 tumble-weed

Hi!

I have absolutely zero knowledge regarding this question :) I don't want to state something false, so I'd rather not give any advice here. Instead, I'll keep this issue open, hoping that someone will know the answer.

cosmic-cortex avatar Nov 09 '20 06:11 cosmic-cortex

ok thanks.

tumble-weed avatar Nov 09 '20 14:11 tumble-weed