                        [Issue] using gemma model as a chatbot
I was using the mistral model for my PDF chatbot. With the arrival of the gemma model, I am trying to use it instead. But it gives me an issue: after embedding an external PDF document, when I ask a question, it always responds that it is not able to provide any information about the provided context.
Example of the issue:
If I upload an SSL cookbook document and ask: What is SSL?
The chatbot answers: The context does not provide any information about what SSL is, so I cannot answer this question from the provided context.
Tech stack involved
- Using the gemma:2b model. Also tried gemma:7b (will not use it since it runs too slowly locally).
- Using the Xenova/all-MiniLM-L6-v2 embedding model from the @xenova/transformers package.
- Using Langchain.
- Using Chroma as vectorstore.
Reproduce
It is a Next.js application using LangChain, Chroma, and transformers.js.
- Clone this repo: https://github.com/cosmo3769/PDFChatter/tree/gemma-model
- Follow the README.md setup guide.
The same code works for mistral and llama2:7b-chat but fails when using gemma:2b or gemma:7b. Are any specific tweaks needed for this?
@jmorganca @mxyng
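For reference, this is roughly the shape of the retrieval setup described above (a minimal sketch assuming the LangChain JS community packages, not the repo's actual worker.ts):

```ts
// Minimal sketch of the described pipeline: embed PDF chunks with
// Xenova/all-MiniLM-L6-v2, store them in Chroma, retrieve the most relevant
// chunks, and ask gemma about them. Package entry points and options are
// assumptions based on the stack listed above, not the repo's actual code.
import { HuggingFaceTransformersEmbeddings } from "@langchain/community/embeddings/hf_transformers";
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { ChatOllama } from "@langchain/community/chat_models/ollama";
import { Document } from "@langchain/core/documents";

async function askPdf(chunks: string[], question: string): Promise<string> {
  // Embeddings backed by @xenova/transformers (runs locally, no API key needed).
  const embeddings = new HuggingFaceTransformersEmbeddings({
    modelName: "Xenova/all-MiniLM-L6-v2",
  });

  // Index the PDF chunks in a Chroma collection (hypothetical collection name).
  const docs = chunks.map((text) => new Document({ pageContent: text }));
  const store = await Chroma.fromDocuments(docs, embeddings, {
    collectionName: "pdf-chat",
  });

  // Retrieve the most relevant chunks and stuff them into the prompt.
  const relevant = await store.similaritySearch(question, 4);
  const context = relevant.map((d) => d.pageContent).join("\n---\n");

  const model = new ChatOllama({ model: "gemma:2b" });
  const response = await model.invoke(
    `Answer the question using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`
  );
  return response.content as string;
}
```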
Have you tried gemma:2b-instruct?
I have a related question https://github.com/ollama/ollama/issues/2743
I think ollama only provides gemma:2b and gemma:7b for now. Please correct me if I am wrong.
https://ollama.com/library/gemma/tags
Ohh how could I miss this. Well, I will give this a try and see if it works. Thanks!
Btw, maybe you can post your code for your retrieval or chain.
The code for the chain is included in the worker.ts file under the app directory.
> I think ollama only provides gemma:2b and gemma:7b for now. Please correct me if I am wrong.

Upgrade to ollama version 0.1.27, then try `ollama run gemma:7b` again; it works.
I have the same problem. I keep asking it, but it keeps saying it cannot use the information provided by the context. I used llama2 before, and my version is the latest, 0.1.27.
Bro, have you found a solution?
@Felictycf I have not dug into it deeply yet. I will get back to this in some time.
@mvpbang I am using the latest version of ollama. Thank you.
I had issues with LangChain, so I made a direct call to the Ollama API, but gemma:2b has issues with chat. Can someone help me with this?
{ "model": "gemma:2b-instruct", "messages": [ { "role": "user", "content": "anything cheaper then that?" }, { "role": "assistant", "content": "The cheapest product by the context is the xyz with an offer price of 39.99." }, { "role": "user", "content": "give me chat summary till now" } ], "stream": false }
Response
{ "model": "gemma:2b-instruct", "created_at": "2024-02-28T05:09:01.385491Z", "message": { "role": "assistant", "content": "I am unable to provide a chat summary at this time. I do not have access to external chat data." }, "done": true, "total_duration": 651636333, "load_duration": 830500, "prompt_eval_duration": 103229000, "eval_count": 23, "eval_duration": 546308000 }
This works with the gemma:7b model. I have tried different things; it looks like gemma:2b does not understand the conversation history.
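For reference, the request above can be sent as a plain HTTP call; a minimal sketch in TypeScript, assuming Ollama is listening on the default localhost:11434:

```ts
// Minimal sketch of the direct /api/chat call shown above.
// Assumes Ollama is running on the default http://localhost:11434.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

async function chat(model: string, messages: ChatMessage[]): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  const data = await res.json();
  // With stream: false, the reply arrives as a single message object.
  return data.message.content;
}
```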
Sorry for the slow response on this, peeps.
@cosmo3769 I think your question got answered before?
@epratik the Ollama API is stateless, so you'll have to keep track of the context yourself when you're calling it (this is both with the /api/generate endpoint as well as the /api/chat endpoint). The chat endpoint is probably easier because you can just maintain a list of messages for the context.
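In other words, each request has to carry the whole conversation so far. A minimal sketch of maintaining that message list yourself (assuming the default localhost endpoint and the gemma:2b-instruct model from the example above):

```ts
// Minimal sketch of client-side conversation state, since the Ollama API is
// stateless. Assumes Ollama on the default http://localhost:11434.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

const history: ChatMessage[] = [];

async function ask(question: string): Promise<string> {
  // Append the new user turn, then send the entire history with every request.
  history.push({ role: "user", content: question });

  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma:2b-instruct",
      messages: history,
      stream: false,
    }),
  });
  const data = await res.json();

  // Remember the assistant's reply so the next request includes it as context.
  history.push(data.message);
  return data.message.content;
}
```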
I'm going to go ahead and close the issue.