enochlev
In my cash account, I have the same problem. I even modified the tickerId to the optionId and it still returned an error. Currently I am just canceling and rebuying...
Were you able to get the data, or an alternative dataset with the associated audio recordings?
Were you able to get the dataset or an alternative dataset?
Found it https://github.com/aws-samples/aws-lex-noisy-spoken-language-understanding
> Not sure how it works in the cli but in python you could do
>
> ```
> config = transformers.GPT2Config.from_pretrained('gpt2')
> model = transformers.GPT2Model.from_pretrained(index_path, from_tf=True, config=config)
> ```
>
> ...
Typically, `GPT2Model.from_pretrained` should point to a folder rather than a specific file. Try `model = transformers.GPT2Model.from_pretrained('/content/model.index', from_tf=True, config=config)`. If that doesn't work, print off your file structure in your **content** directory...
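In case it helps, here is a quick sketch for dumping the file structure so you can see where the checkpoint files actually live (the `"."` path is a placeholder; swap in your `/content` directory on Colab):

```python
import os

# Walk a directory tree and collect every file path under it.
# Useful for checking where TF checkpoint files (.index, .data-*) ended up.
def list_files(root):
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.join(dirpath, name))
    return sorted(paths)

# Print the tree; replace "." with "/content" in Colab.
for path in list_files("."):
    print(path)
```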
https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/rel-24-01.html 24.01 should have TensorRT-LLM v0.7.1 compatibility; be sure to have the v0.7.2 tensorrtllm_backend.
@nv-guomingz correct me if I am wrong, but the tensorrtllm_backend currently is only compatible with TensorRT-LLM v0.7.1? @mallorbc I got TensorRT-LLM v0.7.1 working with tensorrtllm_backend v0.7.2 with this docker run...
Here's a start:

```
!pip install transformers

import transformers

config = transformers.GPT2Config.from_pretrained('/content/checkpoint/run3/hparams.json')
tokenizer = transformers.GPT2Tokenizer("checkpoint/run3/encoder.json", "checkpoint/run3/vocab.bpe")
model = transformers.GPT2Model.from_pretrained('/content/checkpoint/run3/model-1500.index', from_tf=True, config=config)

model.save_pretrained('gpt2_')
tokenizer.save_pretrained('gpt2_')

from transformers import pipeline
fill_masker = pipeline(task='text-generation', model="/content/gpt2_")
```
...
Just to note... I am able to compile the model successfully. Just curious if it has a performance impact.