frob

724 comments of frob

Contents of `ollama_install.sh`, `entry_script.sh`? What's the base for `company-image`?

I substituted `company-image` with `cgr.dev/chainguard/wolfi-base` and commented out the stuff related to building the app. `entry_script.sh` was changed to:

```sh
#!/bin/sh
(sleep 2 ; ollama run mistral "") &...
```

What do you have `OLLAMA_HOST` set to in the `data-extractor` container? What's the result of `docker exec -it data-extractor env`?

```sh
#!/bin/sh
ollama serve &
sleep 2
ollama run mistral ""
cd /function
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
  exec /usr/local/bin/aws-lambda-rie /function/venv/bin/python -m awslambdaric $1
fi
exec /function/venv/bin/python -m awslambdaric...
```

If you want the python script to be able to exit without killing the container:

```sh
#!/bin/sh
(
  sleep 2
  ollama run mistral ""
  cd /function
  if [ -z "${AWS_LAMBDA_RUNTIME_API}"...
```
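The excerpt above is truncated, so here is a minimal sketch of the pattern it describes, with placeholder functions standing in for the ollama/awslambdaric specifics: the long-running server stays in the foreground while the short-lived work runs in a backgrounded subshell, so the work exiting does not take the container down.

```sh
#!/bin/sh
# Sketch only; the real script runs "ollama serve" in the foreground and the
# warmup plus python runtime inside the backgrounded subshell.
long_running_server() { sleep 3; }            # stand-in for: ollama serve
short_lived_work()    { echo "work done"; }   # stand-in for the python runtime
(
  sleep 1            # give the server a moment to start
  short_lived_work   # may exit freely; the container stays up
) &
long_running_server  # foreground process keeps the container alive
```

Because the foreground process is the server rather than the script doing the work, the subshell can finish (or crash) without the container's main process exiting.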

https://github.com/ollama/ollama/issues/1053#issuecomment-2558705136

```
juin 17 13:42:38 strix ollama[43954]: CUDA error: the provided PTX was compiled with an unsupported toolchain.
```

Probably need to raise the issue with the maintainer of the ebuild.

Thinking is enabled in the template only if tools and document processing are not specified.

```
{{- /* Prompt without tools or documents */}}
{{- if (and (not .Tools) (not...
```

[Server logs](https://github.com/ollama/ollama/blob/main/docs/troubleshooting.md#how-to-troubleshoot-issues) may aid in debugging, but the likely cause of no layers being offloaded to the GPU is that the minimum free VRAM ollama requires before allocating layers is not available...

The OS using all RAM is normal; it's called the [page cache](https://en.wikipedia.org/wiki/Page_cache). It's not an ollama memory leak.
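A quick way to see this on Linux (a sketch; it reads the standard fields of `/proc/meminfo`) is to compare the cache counters with free memory, since RAM "used" by the page cache is reclaimable and shows up under `Cached`/`Buffers` rather than in any process:

```sh
# Linux sketch: most of the "missing" RAM is reclaimable page cache,
# reported in the Cached/Buffers fields instead of as process memory.
grep -E '^(MemTotal|MemFree|MemAvailable|Cached|Buffers):' /proc/meminfo
```

`MemAvailable` is the number to watch: it counts reclaimable cache as available, so it stays high even when `MemFree` looks alarmingly low.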