Thiago Crepaldi
> > Thank you @amyeroberts. Is this valid for any transformers model?
>
> Yes, this is for all transformers models.
>
> > is there a way the model...
> @thiagocrepaldi I don't understand - if you're using transformers models at some point in the pipeline it must be explicit that transformers is being used? Can't you just add...
> Hi @thiagocrepaldi, thanks for raising this issue!
>
> I'm going to cc in @Narsil, the king of safetensors here.
>
> If you want to be able to...
Could you provide a full repro? The provided snippet is not enough to copy-paste and run :) In fact, there is no model in the bug report to debug.
> I found the issue. It seems that whenever I have used int() for casting my variable to integer (for example in int(a/b)). it is casted to float in onnx...
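The casting pitfall quoted above can be illustrated without ONNX at all: in Python 3, `/` always produces a float, so `int(a / b)` goes through float division before truncating, whereas `//` stays in integer arithmetic end to end. A minimal sketch (the ONNX-tracer behavior itself is taken from the quoted report, not reproduced here):

```python
a, b = 7, 2

# True division always yields a float in Python 3; this is the
# float value the exporter sees before any int() truncation.
q_true = a / b        # 3.5

# int() truncates the float result toward zero.
q_int = int(a / b)    # 3

# Floor division keeps integer semantics throughout, which is
# generally the safer choice when integer results are intended.
q_floor = a // b      # 3

print(q_true, q_int, q_floor)
```

For negative operands the two differ: `int(-7 / 2)` is `-3` (truncation toward zero) while `-7 // 2` is `-4` (floor), so they are not interchangeable in general.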
Closing due to lack of repro
@pytorchbot merge
@pytorchbot merge
Thanks @sayakpaul. Does this mean the `load_checkpoint_and_dispatch` API is incompatible with HF models loaded using `from_pretrained` when `low_cpu_mem_usage=False` in general, or is this an SDXL limitation? Looking...
This doesn't seem to be a converter error, but a runtime one.