Update trainer to ensure type consistency for `train_args` and `lora_config`
What this PR does / why we need it:
Adds data preprocessing for `train_args` and `lora_config` to ensure each parameter's type is consistent with its reference value. This is necessary for developing the Katib tune API to optimize hyperparameters.
Which issue(s) this PR fixes (optional, in `Fixes #<issue number>, #<issue number>, ...` format, will close the issue(s) when PR gets merged):
Fixes #
Checklist: