Michael Yang

Results 84 comments of Michael Yang

Some context: Ollama downloads large files in parts with multiple concurrent workers. This maximizes transfer speed, allowing users to get their files faster. The problem seems to be certain parts...
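The actual downloader lives in the Ollama codebase; as a rough illustration of the idea (splitting a file into byte ranges and fetching them with concurrent workers), a minimal Python sketch might look like this. The part-splitting math and the `fetch_part` stand-in for an HTTP `Range` request are assumptions, not the real implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def part_ranges(total_size, num_parts):
    """Split a download of total_size bytes into contiguous (start, end)
    byte ranges, one per worker. The last part absorbs any remainder."""
    part = total_size // num_parts
    ranges = []
    for i in range(num_parts):
        start = i * part
        end = total_size - 1 if i == num_parts - 1 else start + part - 1
        ranges.append((start, end))
    return ranges

def fetch_part(blob, rng):
    # Stand-in for an HTTP Range request ("Range: bytes=start-end").
    start, end = rng
    return blob[start:end + 1]

def download(blob, workers=4):
    """Fetch all parts concurrently, then reassemble them in order."""
    ranges = part_ranges(len(blob), workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda r: fetch_part(blob, r), ranges))
    return b"".join(parts)
```

If any one range fetch stalls or fails, only that part needs to be retried, which is why a single slow part can hold up an otherwise fast download.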

This should be fixed. We've adjusted the timing to account for slower connections.

I found it difficult to follow the different branches. This change reduces the number of branches, which should make it easier to follow.

> If I run from the terminal Ollama + Google Colab + ngrok, everything works with google colab and ngrok. This suggests there's some issue with langchain when using the...

What platform are you using? I'm not able to reproduce this on macOS Sonoma, Ubuntu 22.04, or Windows 11. Edit: never mind, I see what you're doing. You're trying to redirect...

This library doesn't use PyTorch, so I'm not sure what you're asking.

What model are you using? Your snippet doesn't stream. Is it possible the LLM is responding but hasn't completed yet? In this mode, Ollama will wait until it has the...
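The difference between the two modes can be sketched without the library at all: with streaming off, the caller blocks until the full reply is assembled; with streaming on, chunks are yielded as they are produced. The `generate_tokens` generator below is a toy stand-in for the model, not the client's API:

```python
def generate_tokens():
    # Stand-in for a model emitting tokens one at a time.
    for tok in ["The", " answer", " is", " 42", "."]:
        yield tok

def chat(stream=False):
    """Toy illustration of the two response modes: stream=False blocks
    until the whole reply is assembled, stream=True yields chunks as
    soon as they are produced."""
    if stream:
        return generate_tokens()       # iterate to consume incrementally
    return "".join(generate_tokens())  # waits for the complete reply

full = chat(stream=False)              # nothing returned until generation ends
chunks = list(chat(stream=True))       # each chunk available immediately
```

So a long generation with streaming off can look like a hang even though the model is responding normally.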

Are you using the standard ollama host/port, e.g. `127.0.0.1:11434`? If not, you will need to set `OLLAMA_HOST` or `ollama.Client(host='...')`
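The precedence the advice above implies (an explicit `host` argument wins, then the `OLLAMA_HOST` environment variable, then the standard default) can be sketched like this; `resolve_host` is a hypothetical helper for illustration, not the client's actual code:

```python
import os

DEFAULT_HOST = "http://127.0.0.1:11434"  # the standard Ollama address

def resolve_host(explicit=None):
    """Pick the server address: an explicit host argument wins, then
    the OLLAMA_HOST environment variable, then the default."""
    return explicit or os.environ.get("OLLAMA_HOST") or DEFAULT_HOST
```

So if the server is listening somewhere non-standard, either export `OLLAMA_HOST` before starting the client or pass the address directly, e.g. `ollama.Client(host='http://192.168.1.50:11434')` (address hypothetical).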

Judging by your bio, I'm assuming this output is from a FreeBSD build, which is not currently supported.