insanely-fast-whisper
torch_dtype only for torch.float16?
Does inference currently only support `torch_dtype=torch.float16`? Will `int8_float16` and `int8` be supported?