ChatGLM-6B
I am using this for Question Answering, where I provide it the context.
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
Sometimes it answers outside the given context/text, and I want it to answer only from the text. It also gives wrong answers for some questions: for example, if the text says a person can avail a certain policy, it pulls that text but says they can't. I'm not sure how to prompt it better.
Expected Behavior
It should give more factual answers, I feel.
Steps To Reproduce
Give it a large context containing certain facts and ask it questions about those facts.
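For reference, this is roughly the setup I'm using: a prompt that wraps the context and question in an instruction to answer only from the context. The instruction wording and the helper name `build_context_prompt` are just my own sketch, not from the repo; `model.chat` is the API from the ChatGLM-6B README.

```python
def build_context_prompt(context: str, question: str) -> str:
    """Wrap a question in an instruction that restricts answers to the context.

    The wording here is an assumption / example, not an official template.
    """
    return (
        "Answer the question using ONLY the context below. "
        'If the answer is not in the context, reply "I don\'t know".\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


# Usage with ChatGLM-6B (requires a GPU and downloading the weights):
# from transformers import AutoTokenizer, AutoModel
# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
# model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
# response, history = model.chat(
#     tokenizer, build_context_prompt(context, question), history=[]
# )

context = "Employees with two years of service can avail the travel policy."
print(build_context_prompt(context, "Can a new employee avail the travel policy?"))
```

Even with an instruction like this, the model sometimes ignores the restriction, which is the behavior reported above.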
Environment
- OS: Linux
- Python: 3.10
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :
Using colab environment
Anything else?
Nothing
In the Chinese version of the README, there is a "Limitations" section.
IMHO, these are common limitations of every LLM deployable on a consumer graphics card.