Add argparse parametrization for the finetuning script
Similar to what is currently available in `download_model.py`, add argparse to `finetune_nli.py` with the following parameters:

- `model_name`, default `'models/scibert'`, type `str`
- `batch_size`, default `64`, type `int`
- `model_save_path`, default `'models/scibert_nli'`, type `str`
- `num_epochs`, default `2`, type `int`
- `warmup_steps`, default `None`, not required
- `do_mean_pooling` with `action='store_true'`
- `do_cls_pooling` with `action='store_true'`
- `do_max_pooling` with `action='store_true'`
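The parser could look something like the sketch below (the flag names mirror the list above; the description string is a placeholder):

```python
import argparse


def parse_args(argv=None):
    """Build the proposed CLI for finetune_nli.py."""
    parser = argparse.ArgumentParser(description="Fine-tune a model on NLI data")
    parser.add_argument('--model_name', type=str, default='models/scibert')
    parser.add_argument('--batch_size', type=int, default=64)
    parser.add_argument('--model_save_path', type=str, default='models/scibert_nli')
    parser.add_argument('--num_epochs', type=int, default=2)
    # warmup_steps stays None unless the user sets it explicitly
    parser.add_argument('--warmup_steps', type=int, default=None, required=False)
    # mutually exclusive in practice; the exclusivity check is handled separately below
    parser.add_argument('--do_mean_pooling', action='store_true')
    parser.add_argument('--do_cls_pooling', action='store_true')
    parser.add_argument('--do_max_pooling', action='store_true')
    return parser.parse_args(argv)
```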
Then:

- Add a check that at most one of the pooling flags is set (raise an `AttributeError` if more than one is). If none is specified, fall back to the mean pooling strategy.
- Check whether the `warmup_steps` parameter is already set before defaulting it to 10% of the training steps: if it is, keep the user-defined value.