text-generation-inference
text-generation-inference:0.9 creates a weird `model.safetensors`
System Info
After upgrading the Docker image from 0.8 to 0.9, I noticed that my GPT model started generating meaningless tokens. After running several tests, I found that the `model.safetensors` created by the 0.9 version fails to function properly.
Has anyone else encountered a similar issue?
Information
- [X] Docker
- [ ] The CLI directly
Tasks
- [ ] An officially supported command
- [ ] My own modifications
Reproduction
- I use a BBPE tokenizer and a GPT model (working on 0.8)
- Upgrade the Docker image from 0.8 to 0.9
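To narrow down whether the 0.9 conversion actually produced different weights, one could diff the tensors from the two versions' `model.safetensors` files. Below is a minimal sketch in plain Python; lists stand in for tensors (in practice they would be loaded with the `safetensors` library), and `diff_state_dicts` is a hypothetical helper, not part of text-generation-inference:

```python
# Sketch: report keys that are missing or whose values differ between
# two checkpoints. Plain lists stand in for real tensors here;
# `diff_state_dicts` is a hypothetical helper for illustration only.

def diff_state_dicts(a, b, atol=1e-6):
    """Return keys missing from either dict or whose values differ beyond atol."""
    report = {"missing_in_b": [], "missing_in_a": [], "mismatched": []}
    for key in a:
        if key not in b:
            report["missing_in_b"].append(key)
        elif len(a[key]) != len(b[key]) or any(
            abs(x - y) > atol for x, y in zip(a[key], b[key])
        ):
            report["mismatched"].append(key)
    report["missing_in_a"] = [k for k in b if k not in a]
    return report

if __name__ == "__main__":
    # Dummy stand-ins for weights converted by the 0.8 and 0.9 images.
    v08 = {"wte.weight": [0.1, 0.2], "wpe.weight": [0.3, 0.4]}
    v09 = {"wte.weight": [0.1, 0.2], "wpe.weight": [0.3, 0.9]}
    print(diff_state_dicts(v08, v09))
```

If the report comes back empty, the weights themselves match and the regression is more likely in how 0.9 loads or maps them at runtime.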
Expected behavior
The 0.9 image should behave the same as 0.8.
Do you mind sharing which model you're talking about? Currently we cannot reproduce anything, so it's really hard to understand what the problem is.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.