
Missing models and unresponsive model addition

Open · haofengW opened this issue 8 months ago • 9 comments

Self Checks

  • [x] This is only for bug report, if you would like to ask a question, please head to Discussions.
  • [x] I have searched for existing issues, including closed ones.
  • [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • [x] Please do not modify this template :) and fill in all the required fields.

Dify version

1.2.0

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

  1. First, back up the volumes.
  2. Pull the latest code from Git.
  3. Overwrite the new volumes directory with the backup.
  4. Start Docker.
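
For reference, the upgrade flow above corresponds roughly to the following shell commands. This is a minimal sketch assuming a standard docker-compose deployment; the dify/docker path and the backup location are illustrative, not taken from the report:

    cd dify/docker
    docker compose down

    # 1. Back up the volumes directory before touching anything
    cp -a volumes /path/to/backup/volumes

    # 2. Pull the latest code from Git
    git pull

    # 3. Overwrite the fresh volumes directory with the backup
    cp -a /path/to/backup/volumes/. volumes/

    # 4. Start Docker
    docker compose up -d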

✔️ Expected Behavior

Models are displayed normally, and new models can be added properly.

❌ Actual Behavior

After upgrading from version 1.0.1 to 1.2.0, all configured models in Ollama are missing, and attempting to re-add them causes the entire service to freeze.

haofengW · Apr 22 '25

Hello @haofengW! I'm Dosu, a bot here to help you solve bugs, answer questions, and assist you in becoming a contributor. While you wait for a human maintainer, feel free to let me know how I can help!

I found a similar unsolved discussion titled Error when adding ollama model, which might be relevant to your issue. Unfortunately, no solution was found in the comments [1].

For integrating Ollama models in Dify version 1.2.0, ensure the following configurations are correctly set:

  1. Download and Launch Ollama:

    • Download the Ollama client from https://ollama.com/download.
    • Run the Ollama client using the command: ollama run llama3.2.
    • Ensure the Ollama API service is accessible at http://localhost:11434.
  2. Integration in Dify:

    • Navigate to Settings > Model Providers > Ollama in Dify.
    • Configure the following settings:
      • Model Name: llama3.2
      • Base URL: Use http://localhost:11434 for local source code deployment, or http://<your-ollama-endpoint-domain>:11434 if using Docker.
      • Model Type: Chat
      • Model Context Length: 4096
      • Maximum Token Limit: 4096
      • Support for Vision: Yes (if the model supports image understanding).
  3. Environment Variables:

    • If using macOS, set environment variables using launchctl:
      launchctl setenv OLLAMA_HOST "0.0.0.0"
      
    • Restart the Ollama application.
  4. Docker Configuration:

    • Ensure the Ollama service is accessible from the network by using host.docker.internal or the local network IP address instead of localhost.
  5. Additional Configuration for Linux and Windows:

    • On Linux, if Ollama is running as a systemd service, edit the service to include environment variables:
      [Service]
      Environment="OLLAMA_HOST=0.0.0.0"
      
      Reload systemd and restart Ollama:
      systemctl daemon-reload
      systemctl restart ollama
      
    • On Windows, set environment variables through the system settings and restart Ollama.

These configurations are necessary for Ollama models to integrate and function properly within Dify [2][3]. If you encounter connection errors when running Dify in Docker, make sure the Base URL points to an address that is reachable from inside the container; a quick way to verify this is sketched below.
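
As a quick sanity check of the Docker networking point above, verify that the Ollama API answers both from the host and from inside the Dify API container. This is a minimal sketch assuming the default Ollama port, that curl is available inside the container, and docker-compose naming; the container name docker-api-1 is an assumption and may differ in your deployment:

    # From the host: Ollama's root endpoint should reply "Ollama is running"
    curl http://localhost:11434

    # From inside the Dify API container, localhost refers to the container
    # itself, so test the address you configured as the Base URL instead
    docker exec -it docker-api-1 curl http://host.docker.internal:11434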


dosubot[bot] · Apr 22 '25

Please tell me how to do it. Thank you. [Screenshot of the timeout error attached]

haofengW · Apr 22 '25

To resolve the timeout error for a POST request to a localhost URL when configuring a model provider in a web application interface, you can try the following steps:

  1. Add the following environment variables to your .env file to increase the timeout and use a reliable Python package mirror:

    PLUGIN_PYTHON_ENV_INIT_TIMEOUT=720
    PIP_MIRROR_URL=https://mirrors.aliyun.com/pypi/simple
    
  2. Restart your Docker containers using these commands:

    docker compose down && docker compose up -d
    
  3. If the issue persists, and you are using Dify 1.0.0, you can try downgrading the plugin version. Access the plugin settings in the upper right corner of the web interface, select the Tongyi plugin, and downgrade the plugin version from 0.0.5 to 0.0.4. This solution has been confirmed to resolve similar timeout errors when configuring model providers like Volcengine.

  4. Additionally, if the timeout is due to a hardcoded setting in the model provider integration code, you may need to modify the relevant Python file in the api/core/model_runtime/model_providers directory. Look for a file related to your specific model provider and adjust the hardcoded timeout value to a higher number to accommodate longer processing times [1][2][3][4].
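
Taken together, steps 1, 2, and 4 might look like the shell sketch below; the dify/docker working directory is an assumption, and the grep is only a way to locate candidate hardcoded timeouts, not a confirmed fix:

    cd dify/docker

    # Step 1: append the timeout and mirror settings to the env file
    printf '%s\n' \
      'PLUGIN_PYTHON_ENV_INIT_TIMEOUT=720' \
      'PIP_MIRROR_URL=https://mirrors.aliyun.com/pypi/simple' >> .env

    # Step 2: recreate the containers so the new variables take effect
    docker compose down && docker compose up -d

    # Step 4: search the provider integration code for hardcoded timeouts;
    # review any hits before raising a value
    grep -rn "timeout" ../api/core/model_runtime/model_providers/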


dosubot[bot] · Apr 22 '25

@crazywoola I just upgraded to the latest version. If this isn't a bug, why does my existing Ollama configuration no longer show up in the model list, and why do newly added models keep timing out? Please tell me what to do. Thank you.

haofengW · Apr 22 '25

I'm having the same problem here. When I add the API code and click save, I simply get no response.

brmatheusluiz · Apr 22 '25

I have the same problem on versions 1.0.0, 1.2.0, and 1.3.0. After updating or restarting the Dify services, the problem appears again. In my testing, though, if I wait about two hours or more after an update/restart, the Ollama model list reappears. It wastes a lot of time; I hope this gets fixed.

Clubiy · Apr 24 '25

https://github.com/user-attachments/assets/4950dc31-8fe7-419e-822b-be883b6eadc1

It took some time, but it was added successfully.

crazywoola · Apr 25 '25


I'm really glad to see that you succeeded, but mine has failed every time, eventually timing out as shown in the earlier screenshot. Are any special settings required when upgrading from 1.0.1 to 1.2.0? Without them, the previously configured model list is not synchronized automatically, and adding new models is blocked as well.

haofengW · Apr 25 '25


Maybe it's related to existing models that you have added.

But I think you can try https://github.com/langgenius/dify/issues/16858#issuecomment-2754640885

crazywoola · Apr 27 '25

Same problem on v1.3.1: after upgrading Dify to 1.3.1 and the Ollama plugin to 0.0.6, all models configured in Ollama are missing, and new models cannot be added.

minmie · Apr 29 '25

I solved the problem, though I'm not sure exactly what fixed it. I upgraded Dify to version 1.3.0. Following what @crazywoola mentioned in #16858 (comment), I deleted the task-related items in PostgreSQL, after which the model list was cleared. After updating the Ollama plugin to 0.0.5 again, adding a model timed out as before. I then downgraded the plugin to 0.0.4, and unexpectedly the original models came back (possibly because I had backed up the volumes beforehand). Finally, thanks to @crazywoola and all of you.

haofengW · Apr 30 '25
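
For anyone who ends up in the same state, the PostgreSQL cleanup referenced above might look roughly like this. Everything here is hypothetical: the container name (docker-db-1), database name (dify_plugin), and table name (install_tasks) are assumptions, so check the commands in the linked #16858 comment and your actual schema before deleting anything:

    # Hypothetical: list plugin installation tasks in the plugin database
    docker exec -it docker-db-1 \
      psql -U postgres -d dify_plugin -c 'SELECT id, status FROM install_tasks;'

    # Hypothetical: remove stuck tasks so they are no longer retried
    docker exec -it docker-db-1 \
      psql -U postgres -d dify_plugin -c "DELETE FROM install_tasks WHERE status <> 'success';"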