s3prl
Enable fp16 in downstream/runner.py
Feature request
Is your feature request related to a problem? Please describe. Downstream fine-tuning could run faster and use less GPU memory if fp16 were enabled in downstream/runner.py. It is already implemented in upstream/runner.py.
It should not affect any existing code since fp16 is off by default in the downstream configs. Users would opt-in by setting runner.fp16: true in their downstream configs.
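Opting in might look like the following in a downstream config. This is only a sketch of the idea; the surrounding keys and exact layout of the real s3prl downstream configs are an assumption:

```yaml
runner:
  fp16: true  # opt-in; off by default, so existing configs are unaffected
```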
Describe the solution you'd like Mimic the implementation of mixed precision training from upstream/runner.py in downstream/runner.py.
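For reference, a minimal sketch of what such a mixed-precision training step could look like, using PyTorch's `torch.cuda.amp` `autocast`/`GradScaler` pattern. This is not the actual s3prl code; the function and variable names here are illustrative:

```python
import torch
from torch.cuda.amp import autocast, GradScaler

def train_step(model, optimizer, scaler, features, labels, fp16=True):
    # Autocast only takes effect on CUDA; on CPU it is a no-op.
    use_amp = fp16 and torch.cuda.is_available()
    optimizer.zero_grad()
    with autocast(enabled=use_amp):
        logits = model(features)
        loss = torch.nn.functional.cross_entropy(logits, labels)
    # With enabled=False, GradScaler degrades to plain backward/step,
    # so the same code path works when fp16 is off (the default).
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler(enabled=torch.cuda.is_available())
loss = train_step(model, optimizer, scaler,
                  torch.randn(4, 8), torch.tensor([0, 1, 0, 1]))
```

Because `autocast(enabled=False)` and a disabled `GradScaler` are no-ops, the fp16 branch can share the existing training loop instead of duplicating it.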
Describe alternatives you've considered None
Additional context I've implemented it locally, so I'm happy to send a PR if you would like.
Hello!
Sure! This sounds excellent. Could you send the PR? I can test and merge it today. Thanks!
I'm just curious. In your previous experiments, did you find this to have an evident impact on performance?