Toby Roseman
@alealv - Your branch is a couple months old at this point. Please rebase your changes on the current tip of `main`. You probably want to squash all your changes...
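For reference, the rebase-and-squash flow typically looks something like this (a sketch; it assumes the canonical repo is the `upstream` remote and your fork is `origin`; `your-branch` is a placeholder):

```shell
# Get the current tip of main from the canonical repo.
git fetch upstream

# Replay your commits on top of it. In the interactive todo list,
# mark every commit after the first one as "squash" to collapse them.
git rebase -i upstream/main

# A rebase rewrites history, so the branch must be force pushed.
git push --force-with-lease origin your-branch
```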
@alealv - did you forget to `git push`? I'm still seeing that the first commit in your branch which is not from you is from November 14th.
Your most recent commit is not the issue. The issue is that your first commit (950b1a06bf39e884ac73e0ae69539cb1ebed95b7) is committed on top of a `main` commit (7a0706281617d6398f2cf7a46f70a46909996710) which is from November 14....
This is better. However, it's still not right. The most recent commit your branch has from `main` is now e1111237a43381263c72c78ae722ec2cf44513bb, which is still about five weeks old. Perhaps your `upstream`...
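If a stale `upstream` remote is the culprit, checking and refreshing it might look like this (a sketch; the remote names are assumptions):

```shell
# Verify that `upstream` points at the canonical repo, not a fork.
git remote -v

# Refresh it, then rebase onto the true tip of main.
git fetch upstream
git rebase upstream/main
```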
Your branch looks good now. Thanks. Here is the CI run: https://gitlab.com/coremltools1/coremltools/-/pipelines/1127443513
@x51ming - Thanks for the code that clearly demonstrates a problem. I think your issue is different from the original problem in this GitHub issue. I've created a new GitHub...
[The previous comment](https://github.com/apple/coremltools/issues/1809#issuecomment-1574869255) by @fukatani is correct. Just doing `tuple(encoded_input.values())` gives back the parameters in the wrong order. The correct order is `encoded_input['input_ids'], encoded_input['attention_mask'], encoded_input['token_type_ids']`. After fixing that,...
Here is corrected code:

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

import coremltools as ct

sentences = ["This is a test."]
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model...
```
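Since the preview above is cut off, here is a sketch of what the full corrected script plausibly looks like, assuming the standard trace-and-convert workflow. The model loading, tracing, and conversion details are assumptions rather than the comment's verbatim code:

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

import coremltools as ct

sentences = ["This is a test."]
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2',
                                  torchscript=True).eval()

encoded_input = tokenizer(sentences, padding=True, truncation=True,
                          return_tensors='pt')

# Pass the inputs in the order the model's forward() expects,
# not whatever order encoded_input.values() happens to yield.
example_inputs = (
    encoded_input['input_ids'],
    encoded_input['attention_mask'],
    encoded_input['token_type_ids'],
)

traced_model = torch.jit.trace(model, example_inputs)

mlmodel = ct.convert(
    traced_model,
    inputs=[
        ct.TensorType(name="input_ids",
                      shape=encoded_input['input_ids'].shape,
                      dtype=np.int32),
        ct.TensorType(name="attention_mask",
                      shape=encoded_input['attention_mask'].shape,
                      dtype=np.int32),
        ct.TensorType(name="token_type_ids",
                      shape=encoded_input['token_type_ids'].shape,
                      dtype=np.int32),
    ],
)
```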
Loading untrusted `.pt` files is a security risk. Please share code which reproduces this issue but does not need `.pt` files. Maybe you could set a seed (`torch.manual_seed`) then call...
If the Tensors are too large to hard code, you could try to reproduce this issue with randomly generated tensors. First, set a random seed (`torch.manual_seed`). Then call `torch.randn` to...
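Concretely, something along these lines (a sketch; the tensor shape is a placeholder, use whatever shape triggers the bug):

```python
import torch

# Fix the seed so the "random" tensors are identical on every run.
torch.manual_seed(0)

# Stand-in data instead of an untrusted .pt file.
# The shape here is hypothetical; substitute the one that reproduces the issue.
example_input = torch.randn(1, 3, 224, 224)
```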