Benjamin Fineran

Results 30 comments of Benjamin Fineran

Hi @OriAlpha, this error is occurring because `t5` models are incompatible with loading through `transformers.AutoModelForSequenceClassification`. Support for generation models from the transformers auto models is not yet added to sparseml....
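For reference, a minimal sketch (the `t5-small` checkpoint name is just an illustrative example, not taken from the issue) of loading a T5-style generation model through the seq2seq auto class instead:

```python
# minimal sketch: text-to-text models like T5 load through the seq2seq
# auto class rather than AutoModelForSequenceClassification
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # illustrative checkpoint
print(type(model).__name__)  # -> T5ForConditionalGeneration
```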

The sample inputs provided to the ONNX export trace must include all the inputs required to run the `t5` model. It looks like `decoder_input_ids` may not be provided from the...
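As a rough illustration (outside of the sparseml export script, with the checkpoint and shapes chosen arbitrarily), a T5 forward pass only succeeds once decoder inputs are supplied alongside the encoder inputs:

```python
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")  # illustrative checkpoint
sequence_length = 128  # illustrative value

inputs = {
    "input_ids": torch.ones(1, sequence_length, dtype=torch.long),
    "attention_mask": torch.ones(1, sequence_length, dtype=torch.long),
}
# model(**inputs) would fail here: T5 requires decoder_input_ids (or labels)
inputs["decoder_input_ids"] = torch.ones(1, sequence_length, dtype=torch.long)
outputs = model(**inputs)  # succeeds once the decoder inputs are present
```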

Sure, after `inputs` are defined in `export.py`, you can add in:

```python
import torch

inputs["decoder_input_ids"] = torch.ones(1, sequence_length, dtype=torch.long)
```
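Continuing from that snippet, a hedged sketch of how the completed `inputs` could feed a plain `torch.onnx.export` call, assuming `model` is the loaded T5 module from `export.py`; the output path and opset version are placeholders, not the exact sparseml export arguments:

```python
import torch

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"], inputs["decoder_input_ids"]),
    "t5.onnx",                      # placeholder output path
    input_names=["input_ids", "attention_mask", "decoder_input_ids"],
    opset_version=13,               # placeholder opset
)
```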

LGTM pending investigation of failing integration tests

Hi @SolomidHero, this API was recently migrated and moved to the top level. Please install `sparsezoo-nightly` or clone and install from source to get this path working!

@dbogunowicz it looks like there are still some conflicts/unintended changes. Can you take a look?

Hi @LamnouarMohamed, you can either unselect performance profiling, or run `pip install deepsparse` and restart the Sparsify application.

Hi @miladlink, could you provide the command you used to run the export? It looks like the model is not getting quantized before the weight reload; passing the recipe you used...
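For context, a rough sketch of the pattern being described, assuming SparseML's `ScheduledModifierManager` API; the model architecture, recipe path, and checkpoint path below are placeholders:

```python
import torch
from torch import nn
from sparseml.pytorch.optim import ScheduledModifierManager

# placeholder architecture standing in for the model being exported
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 16, 3))

# apply the quantization recipe so the module graph gains its quantization
# structure *before* the trained weights are reloaded
manager = ScheduledModifierManager.from_yaml("recipe.yaml")  # placeholder path
manager.apply(model)

# only now will the quantized checkpoint's keys match the module graph
state_dict = torch.load("quantized_checkpoint.pth", map_location="cpu")  # placeholder path
model.load_state_dict(state_dict)
```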

> do we have a way to auto-manually update the json test files? (thinking something like we have in place here

currently no, I'd probably prefer to...

Hi @akashAD98, thank you for reporting this issue. To help us dive in, could you paste the output of running `$ deepsparse.check_hardware` so we can get more information about your CPU? For...