t2t-tuner
Soft prompts for FP16/mixed precision
Thanks for the handy library!
I'm trying to add support for soft-prompt tuning in FP16, but I'm running into the following error:
```
RuntimeError: hook 'freezing_hook_weight' has changed the type of value (was torch.cuda.HalfTensor got torch.cuda.FloatTensor)
```
Is this expected?
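For context, here is a minimal standalone sketch (not t2t-tuner's actual `freezing_hook_weight`, just a hypothetical stand-in) showing where the error comes from: autograd requires a gradient hook to return a tensor of the same dtype it received, so any hook that promotes an FP16 gradient to FP32 raises this exact `RuntimeError`, while a dtype-preserving hook runs fine:

```python
import torch

def casting_hook(grad):
    # FP16 grad comes in, FP32 goes out: autograd rejects the dtype change
    return grad.float()

def freezing_hook(grad):
    # zero out the gradient but keep its dtype, so FP16 passes through
    return torch.zeros_like(grad)

w = torch.randn(3, dtype=torch.float16, requires_grad=True)

handle = w.register_hook(casting_hook)
try:
    (w * 2).sum().backward()
    hook_raised = False
except RuntimeError:
    # "hook 'casting_hook' has changed the type of value (was ... got ...)"
    hook_raised = True
handle.remove()

w.grad = None
w.register_hook(freezing_hook)
(w * 2).sum().backward()  # dtype-preserving hook works under FP16
```

So the error itself is expected PyTorch behavior whenever the hook's output dtype differs from its input; the fix would presumably be to keep the hook's return value in the parameter's own dtype.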