Llava chat with Ollama not working
Self Checks
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] Please do not modify this template :) and fill in all the required fields.
Dify version
0.5.8
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
I followed the tutorial exactly: https://docs.dify.ai/tutorials/model-configuration/ollama
My chat mode settings:
My completion mode settings:
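To rule out Dify itself, it can help to hit Ollama's chat endpoint directly and confirm the model answers in chat mode. A minimal sketch, assuming Ollama listens on the default port 11434, that a `llava` tag has been pulled, and a hypothetical local `test.png`:

```python
import base64
import requests

# Direct test of Ollama's /api/chat endpoint, bypassing Dify entirely.
# Assumes Ollama is on the default port 11434 and "llava" has been pulled.
OLLAMA_URL = "http://localhost:11434"

# llava is multimodal: images are sent base64-encoded alongside the text.
with open("test.png", "rb") as f:  # hypothetical local image
    img_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": "llava",
        "messages": [
            {"role": "user", "content": "What is in this picture?", "images": [img_b64]}
        ],
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```

If this direct call works but Dify's chat mode still fails, the problem is on the Dify side of the integration.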
✔️ Expected Behavior
When I use completion mode:
❌ Actual Behavior
When I use chat mode:
I tried increasing the model's context size and changing other parameters, but nothing changed.
Hey @wrapss, great to see you back here! Hope you're doing well.
Based on the information you've provided, it seems like you're having trouble with the chat mode in Dify version 0.5.8. I understand that you've followed the tutorial and set up your model configuration as per the instructions, but you're not getting the expected behavior.
From the context provided, it appears that the chat mode and completion mode in Dify have different requirements and functionalities. The chat mode is specifically designed for conversational interactions requiring dialogue management, while the completion mode is focused on completing tasks or generating content based on inputs. Each mode has tailored input validation and error handling to suit its specific use case.
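For concreteness, the two modes correspond to differently shaped requests. Against Ollama's native API the split looks roughly like this (a sketch assuming the default local endpoint; Dify's chat mode presumably drives the `/api/chat` path):

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # assumed default endpoint

# Completion mode: a bare prompt string against /api/generate.
completion = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llava", "prompt": "The sky is", "stream": False},
    timeout=120,
).json()

# Chat mode: a list of role-tagged messages against /api/chat.
chat = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": "llava",
        "messages": [{"role": "user", "content": "What color is the sky?"}],
        "stream": False,
    },
    timeout=120,
).json()

print(completion["response"])        # generated continuation
print(chat["message"]["content"])    # assistant reply
```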
However, without the exact error message or behavior observed when trying to use the chat mode, it's challenging to pinpoint the exact issue. It would be helpful if you could provide more details about the problem you're experiencing. For instance, are you receiving any error messages? If so, what do they say? Or is the chat mode not responding at all?
In the meantime, you might want to double-check your model configuration against the specific requirements for the chat mode to function correctly in Dify version 0.5.8. For instance, the `mode` should be set to `chat`, the `context_size` should be `16384`, and the `max_tokens` parameter should be between `1` and `16384`, among other settings.
If you've already checked these settings and are still experiencing issues, please provide more details about the problem, and I'll do my best to assist you further.
Sources
- api/controllers/web/completion.py
- api/core/model_runtime/model_providers/minimax/llm/abab5.5-chat.yaml
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
The problem is still present with llava:34b-v1.6, despite using an even larger context size.
Maybe try reducing the `Num Predict` model parameter a bit?
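For reference, in a raw Ollama request those knobs land in the `options` field as `num_ctx` (context window) and `num_predict` (cap on generated tokens). A sketch with illustrative values, not recommendations:

```python
import requests

# Illustrative values only: num_ctx sets the context window, num_predict
# caps how many tokens may be generated. Dify's "Num Predict" parameter
# presumably maps onto the latter.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llava:34b-v1.6",
        "messages": [{"role": "user", "content": "Hello"}],
        "options": {"num_ctx": 4096, "num_predict": 512},
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```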
Hello everyone, I have a problem in my environment. As you can see in the screenshot, the information is not loading on the screen. I'm using a Mac, but I deployed with Docker: I cloned the repo, went into the /docker folder, and ran the `docker compose up -d` command.
![Screenshot 2024-03-16 at 10 19 08](https://github.com/langgenius/dify/assets/6585496/cbf1c000-3d49-4664-b546-f9248b8dbe4b)
Regarding Ollama, I installed it, it is running, and I was able to add it to Dify without any problems using my network IP and the Ollama port.
Also, the option is not loading.
The error message I got when I tried to save: `An error occurred during credentials validation: Invalid URL 'api/generate': No scheme supplied. Perhaps you meant https://api/generate?`
Any suggestions?
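That message is exactly what Python's `requests` library raises when a URL has no scheme, which suggests the Base URL field was saved empty or without a leading `http://`. A quick demonstration (no network access is needed; the error is raised before any request is sent):

```python
import requests

base_url = ""  # what an empty Base URL field effectively produces
url = f"{base_url}api/generate"  # -> 'api/generate', no scheme

try:
    requests.post(url, json={}, timeout=5)
except requests.exceptions.MissingSchema as exc:
    # Invalid URL 'api/generate': No scheme supplied.
    # Perhaps you meant https://api/generate?
    print(exc)

# With the scheme included the URL parses; per the Ollama tutorial linked
# above, "http://host.docker.internal:11434" is the usual Base URL when
# Dify itself runs in Docker Desktop on a Mac and Ollama runs on the host.
BASE_URL = "http://host.docker.internal:11434"
```

In short, make sure the Base URL you enter in Dify starts with `http://` (or `https://`) rather than a bare host or an empty value.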