RecBole
Hyperparameter tuning
Hi,
Thank you for your work. I just discovered the library and it seems very good. However, I have a problem with hyperparameter tuning: it really isn't clear how it works.
Main question:
- It is not clear what the default objective_function does and how to easily customize it. How do I define the metrics used to choose the best model? How do I display other metrics? How do I disable saving the model at each iteration?
Other questions:
- When using a uniform/log-uniform range, how many samples are drawn?
- How can I do a random search if I always have to specify values in the params_file?
- The progress bar doesn't advance smoothly during the search. Is this due to parallelization of the search?
- Is there an API to get a chosen model's parameters, like in sklearn?
I think the documentation should be clearer on this point, which is central to any application. Any help would be greatly appreciated. Thanks!
Alright, I just noticed that there is a dedicated Tuning hyper-parameters section for each model in the Model Introduction section! That helped a lot.
You should perhaps link to it from the Use Case -> Parameter Tuning section.
@Koowah Hello, thanks for your attention to RecBole! The Parameter Tuning section is a general introduction to hyperparameter tuning, while the dedicated Tuning hyper-parameters section for each model in the Model Introduction provides concrete parameter ranges for that model's hyper-parameters. Are you still confused after reading these documents? If you have any questions, you are welcome to discuss them with us.
@Wicknight Hi! Thanks for your quick answer 😄 I read these documents but still have a few questions:
- How can I display the ranking metrics when tuning a context-aware recommender? Right now it only shows LogLoss and AUC.
- Is it possible to perform a random parameter search?
- When using a uniform/log-uniform range, how many samples are drawn?
@Koowah
- Value-based metrics and ranking-based metrics are incompatible, so only one kind can be displayed at a time. You can set the evaluation metrics you want through the 'metrics' parameter. Note that for context-aware models, if you want to evaluate with ranking-based metrics, you also need to set the evaluation mode to full (see the config sketch at the end of this reply).
- In the upcoming version of RecBole, we will support a variety of hyperparameter tuning strategies. If you want to use random search in the current version, you can pass an externally implemented random search method through the 'algo' parameter, like this:
from hyperopt import rand

from recbole.quick_start import objective_function
from recbole.trainer import HyperTuning

# rand.suggest tells hyperopt to sample parameter combinations at random.
hp = HyperTuning(
    objective_function,
    algo=rand.suggest,
    max_evals=100,                            # number of random trials
    params_file=args.params_file,             # .hyper file defining the search space
    fixed_config_file_list=config_file_list,  # fixed .yaml config files
)
hp.run()
- In the hyperopt library, these two methods just draw a single sample from the range each time the objective is evaluated. Maybe you can try adding a loop to meet your needs; the sketch below shows how the number of samples relates to max_evals.
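As a minimal, self-contained hyperopt sketch of that behaviour (toy_objective below is a hypothetical stand-in for RecBole's objective_function, not part of RecBole itself): each trial draws one fresh sample from the log-uniform range, and max_evals controls how many trials, and therefore how many samples, are drawn in total.

```python
from hyperopt import fmin, hp, rand

# Hypothetical search space: learning_rate sampled log-uniformly from exp(-8)..exp(0).
space = {"learning_rate": hp.loguniform("learning_rate", -8, 0)}

def toy_objective(params):
    # Stand-in for a real training/evaluation run; hyperopt minimizes the returned value.
    return params["learning_rate"]

# Each of the 20 trials draws one fresh sample from the log-uniform range,
# so max_evals controls how many samples are tried overall.
best = fmin(fn=toy_objective, space=space, algo=rand.suggest, max_evals=20)
print(best)  # best parameter values found over the 20 random trials
```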
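And for the first point (displaying ranking-based metrics for a context-aware model), here is a hedged sketch of the kind of configuration that is meant. The key names ('metrics', 'topk', 'valid_metric', 'eval_args') follow the current documentation and may differ between RecBole versions, and DeepFM/ml-100k are only placeholder choices.

```python
from recbole.quick_start import run_recbole

# Hypothetical config for evaluating a context-aware model with ranking-based
# metrics instead of the default LogLoss/AUC. Key names may vary by version.
config_dict = {
    "metrics": ["Recall", "MRR", "NDCG", "Hit", "Precision"],  # ranking-based metrics
    "topk": [10],
    "valid_metric": "MRR@10",        # metric used to select the best model
    "eval_args": {"mode": "full"},   # rank against the full item set
}

run_recbole(model="DeepFM", dataset="ml-100k", config_dict=config_dict)
```

When tuning, the same settings would go into one of the YAML files passed to HyperTuning through fixed_config_file_list.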
@Wicknight
Ok, thanks. I will give it a try and let you know.
Since there has been no new reply for a long time, this issue has been closed. If you have any questions, please feel free to comment.
Dear @Wicknight @Sherry-XLL,
Thank you very much for your work and for providing support!
I would like to ask a further question: based on the documentation, it is unclear to me which metric is used for hyperparameter tuning. Is it the metric specified in the valid_metric field of the config file?