How to manually change parameters in service API?
Parameters can be generated by

```python
parameters, trial_index = ax_client.get_next_trial()
```

I encountered a case where I want to manually change the parameters. What is the procedure for doing that? Thanks!
Thanks for the question @deng-cy! If you want to change a trial's parameters, I would abandon that trial and attach a new one with the modified parameters, which would look something like:

```python
parameters, trial_index = ax_client.get_next_trial()  # your code

# You decide you want to change the parameters.
ax_client.abandon_trial(trial_index=trial_index, reason="wanted to change params (optional)")

new_parameters = dict(parameters)  # modify the values as needed
parameters, trial_index = ax_client.attach_trial(parameters=new_parameters)
```
AxClient isn't particularly designed for human-in-the-loop experiments, so the code isn't optimized for this. In the future we may release a batch client that is better suited to human-in-the-loop use, since field experiments, human-in-the-loop tuning, and batch trials tend to go together in our estimation.
I'm curious: what is your use case, so we can better design to support it?
@danielcohenlive Thanks for your reply!
I am a little confused about the explanation of the ABANDONED status in https://ax.dev/api/core.html#ax.core.base_trial.TrialStatus. It says "... the trial should not be evaluated in the experiment again". This sentence seems to mean that BO will somehow avoid suggesting the abandoned parameters again. Is that true? If so, I shouldn't set the trial to ABANDONED, since I don't want to pass any information to the optimizer regarding those parameters.
My case is a little hard to explain, but I can rephrase it as a machine learning example. Let's say I want to train a deep network for 100 epochs while tuning a hyperparameter $x$ so that the loss after 100 epochs is minimized. What I want to do is use a GP (or another probabilistic model) to model the loss w.r.t. the epoch (note that this is different from the GP in BO). Then I would know the probability distribution of the loss at epoch 100 even if I haven't finished training (say, I have only trained for 70 epochs with hyperparameter $x_j$). I feed this distribution (corresponding to $x_j$), together with the results for the other hyperparameters, into BO (i.e., Ax). If the next candidate is still $x_j$, I'll keep training it and update the results. If the next candidate from Ax is close to $x_j$, although not exactly the same, I'll still proceed with $x_j$. Thus, I need to modify the parameters from Ax.
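To make the epoch-model part concrete, here is a self-contained sketch of the kind of extrapolation described above: a zero-mean GP with an RBF kernel fit to a partial training curve, predicting the loss (with uncertainty) at epoch 100. Everything here is illustrative — the kernel, length scale, noise level, and toy loss values are assumptions, and in practice you would use a proper GP library and a non-zero mean rather than this hand-rolled version.

```python
import math

def rbf(a, b, length_scale=30.0, variance=1.0):
    """Squared-exponential kernel over epochs (illustrative hyperparameters)."""
    return variance * math.exp(-((a - b) ** 2) / (2 * length_scale ** 2))

def solve(A, b):
    """Solve A x = b via Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(epochs, losses, query, noise=1e-4):
    """Posterior mean and std of a zero-mean GP at `query`, given observations."""
    n = len(epochs)
    K = [[rbf(epochs[i], epochs[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    k_star = [rbf(e, query) for e in epochs]
    alpha = solve(K, losses)                     # K^{-1} y
    mean = sum(k_star[i] * alpha[i] for i in range(n))
    v = solve(K, k_star)                         # K^{-1} k*
    var = rbf(query, query) - sum(k_star[i] * v[i] for i in range(n))
    return mean, math.sqrt(max(var, 0.0))

# Partial training curve for hyperparameter x_j: 70 of 100 epochs observed
# (toy numbers, not from any real run).
epochs = [10, 20, 30, 40, 50, 60, 70]
losses = [1.9, 1.2, 0.85, 0.66, 0.55, 0.49, 0.45]
mean_100, sem_100 = gp_predict(epochs, losses, query=100)
```

The predicted standard deviation at epoch 100 is larger than at the observed epochs, which is exactly the uncertainty you would want to pass along to the optimizer.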
I think it's sort of an early stopping problem, but I don't think the early stopping interface is suitable for such a scenario, so I only wanted to use the BO part of Ax.
cc @danielrjiang regarding connections to early stopping.
@deng-cy Thanks for the explanation! Indeed, it is an early stopping problem and I believe you are interested in model-based early stopping, where the GP is jointly over (x, t). We are actually working on making this functionality available in Ax soon and it is currently being tested.
Note that we have an early stopping tutorial here that supports a basic, non-model-based setup: https://github.com/facebook/Ax/blob/main/tutorials/early_stopping/early_stopping.ipynb
For your exact problem, instead of abandoning, perhaps what you could do is "artificially" complete the trial using `ax_client.complete_trial` with data from your epoch-based model at t=100 (and if your model gives uncertainty estimates, you could send those to Ax as the sem). After that, you would be able to generate a new trial while making use of the predicted performance at t=100. Finally, if you want to modify the parameters, you can use @danielcohenlive's suggestion above.
The tricky part here is that after the original trial finishes running, you'll want to go back and update the data using `ax_client.update_trial_data`.
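To make the data shape concrete, here is a minimal sketch of how a predicted mean and sem from the epoch-based model map onto the `raw_data` format that `AxClient` accepts (a dict of `{metric_name: (mean, sem)}`). The numbers and the `"loss"` metric name are illustrative assumptions; the `ax_client` calls are shown as comments since they require an already-configured client and an open trial.

```python
# Predicted loss at epoch 100 from your epoch-based model (illustrative numbers).
predicted_mean = 0.41
predicted_sem = 0.03  # standard error derived from the model's uncertainty estimate

# Ax accepts raw_data as {metric_name: (mean, sem)}; "loss" stands in for
# whatever objective name your experiment was created with.
raw_data = {"loss": (predicted_mean, predicted_sem)}

# With a configured AxClient and an open trial, you would then call:
#   ax_client.complete_trial(trial_index=trial_index, raw_data=raw_data)
# and once the real 100-epoch result is in, overwrite it with:
#   ax_client.update_trial_data(trial_index=trial_index,
#                               raw_data={"loss": (observed_loss, 0.0)})
```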
I'm going to close this if there are no further questions, but feel free to reopen it or create a new issue.