Hyper-parameter tuning for model
Hyper-parameter tuning is fairly easy to perform using GridSearch (a sketch is shown below the questions). There are some questions, though:
- Do you believe that there should be a time threshold (e.g. the search should not take more than 20 seconds)?
- Do you believe that we have to set an evaluation metric threshold (e.g. if a model achieves 90% accuracy, pick that model)?
- Model training will be performed once per training set, meaning that we will retrain the model only if the training set changes. Do we need to keep track of models (e.g. by using MLflow)?
- Regardless of whether we use MLflow, do we need to save the hyper-parameters of the optimal model somewhere?
cc: @momegas , @gcharis , @stavrostheocharis
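For context, a minimal sketch of the GridSearch-based tuning, assuming scikit-learn; the classifier, parameter grid, data and scoring below are placeholders, not decisions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder data; in Whitebox this would be the current training set.
X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=42)

# Hypothetical search space; the actual model and grid are open questions.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, None],
}

search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=42),
    param_grid=param_grid,
    scoring="accuracy",  # evaluation metric still to be decided
    cv=3,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print(search.best_params_, search.best_score_)
```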
> Do you believe that there should be a time threshold (e.g. the search should not take more than 20 seconds)?
- It depends on when this pipeline runs. If it runs in near real-time, I think we should have a threshold. If it runs on a scheduler, it is not a problem.
> Do you believe that we have to set an evaluation metric threshold (e.g. if a model achieves 90% accuracy, pick that model)?
- Maybe just pick the one with the highest accuracy. But then, what happens if the best model is still a poor model?
> Model training will be performed once per training set, meaning that we will retrain the model only if the training set changes. Do we need to keep track of models (e.g. by using MLflow)?
- If we are going to keep the model, a solution like this could be implemented, but MLflow would require significant integration effort (database, paths, deployment, etc.). A rough sketch of the minimal tracking calls is shown below.
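For reference, a minimal sketch of what the MLflow tracking calls could look like, reusing the `search` object from the GridSearch sketch above; the tracking URI and experiment name are assumptions, and this sits on top of the backend/deployment work mentioned:

```python
import mlflow
import mlflow.sklearn

# Assumed local tracking setup; a real deployment needs a proper backend store.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("whitebox-surrogate-model")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_params(search.best_params_)                # best hyper-parameters found
    mlflow.log_metric("cv_accuracy", search.best_score_)  # their cross-validated score
    mlflow.sklearn.log_model(search.best_estimator_, "model")
```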
> Regardless of whether we use MLflow, do we need to save the hyper-parameters of the optimal model somewhere?
- I think it would be good to save them, and maybe also keep the eval metrics to show to the user, so that they know exactly how precise our explanation is.
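If MLflow is too heavy for now, a lighter sketch of persisting the optimal hyper-parameters and eval metrics (again reusing the `search` object from above; the file name and schema are assumptions):

```python
import json

model_metadata = {
    "hyper_parameters": search.best_params_,
    "eval_metrics": {"cv_accuracy": search.best_score_},
}

# Hypothetical location; this could equally live next to the stored model
# or in the existing database so the UI can display it to the user.
with open("optimal_model_metadata.json", "w") as f:
    json.dump(model_metadata, f, indent=2)
```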
I think it's important to keep the target of Whitebox in mind. The target is monitoring, not creating models (at least not now). With this in mind, I think we should either have quick tuning or none at all. How I understood this issue was that it would just be some adjustments to the training, not building a whole other feature.
Think about this, and if we can fit just that into the timebox we have, good. Otherwise, I would look at something else.
After some discussions with @stavrostheocharis, we concluded that the requirements of this task are still pretty blurry. I will try to simplify them with some simple questions below, so @momegas, please let us know when you have the time.
- Do we want the possibility of a better model, i.e. one that predicts more accurate results? This also means more accuracy for the explainability.
- If no, we can close the ticket. If yes, how much time do we want to spend on fine-tuning to find the best model? A metric threshold could also help here: for instance, if we let the search iterate through 20 different combinations of hyper-parameters and it achieves acceptable performance even on the 1st iteration, it stops there and treats that as the best model (see the sketch after this list).
- Do we want to keep track of the best hyper-parameters in some way?
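A minimal sketch of the "stop at the first acceptable model" idea from the second bullet, assuming a random search over a hypothetical grid, an accuracy threshold of 0.9 and a budget of 20 combinations (all of these numbers are just examples):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ParameterSampler, cross_val_score

# Placeholder data; in Whitebox this would be the current training set.
X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=42)

# Hypothetical search space and thresholds.
param_distributions = {
    "n_estimators": [50, 100, 200, 400],
    "max_depth": [3, 5, 10, None],
    "min_samples_leaf": [1, 2, 5],
}
ACCURACY_THRESHOLD = 0.9
MAX_ITERATIONS = 20

best_params, best_score = None, -1.0
for params in ParameterSampler(param_distributions, n_iter=MAX_ITERATIONS, random_state=42):
    score = cross_val_score(
        RandomForestClassifier(random_state=42, **params),
        X_train, y_train, cv=3, scoring="accuracy",
    ).mean()
    if score > best_score:
        best_params, best_score = params, score
    if score >= ACCURACY_THRESHOLD:
        break  # acceptable model found: stop early and keep it

print(best_params, best_score)
```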
I think we should not spend more time on this, as a better model will not give much value to WB at the moment, since we are missing more core features. Feel free to close this if needed, @NickNtamp.
Sure, I can close the ticket, @momegas. Before I do, I want to remind both you and @stavrostheocharis that by not exploring combinations to increase the chance of building a better model on an unknown dataset, we accept the high risk of explaining a trash model. Just imagine that we build a model with 20% accuracy and then use it for our explainability feature.
I would keep this as an issue in the backlog, in order to investigate it further and implement an enhancement in the future.
It was actually requested! You are right. I will re-open this.
We should explore alternatives like https://optuna.org/ here.
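A minimal sketch of what an Optuna-based search could look like, combining the time budget and the metric threshold discussed above (the objective, search space, 20-second timeout and 0.9 threshold are all assumptions):

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data; in Whitebox this would be the current training set.
X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=42)

ACCURACY_THRESHOLD = 0.9  # hypothetical "good enough" score


def objective(trial):
    # Hypothetical search space for the surrogate classifier.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "max_depth": trial.suggest_int("max_depth", 2, 16),
    }
    model = RandomForestClassifier(random_state=42, **params)
    return cross_val_score(model, X_train, y_train, cv=3, scoring="accuracy").mean()


def stop_when_good_enough(study, trial):
    # Stop the whole study as soon as an acceptable model has been found.
    if study.best_value >= ACCURACY_THRESHOLD:
        study.stop()


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20, timeout=20, callbacks=[stop_when_good_enough])

print(study.best_params, study.best_value)
```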