hopperelec

89 comments by hopperelec

Yes, you can either abort all ongoing requests (see the [any-request example](https://github.com/ollama/ollama-js/blob/main/examples/abort/any-request.ts)) or abort a specific request (see the [specific-request example](https://github.com/ollama/ollama-js/blob/main/examples/abort/specific-request.ts))

The HTTP standard does not support compression for request bodies, because the client would need a pre-flight request to find out whether the server is capable of decompressing them. When using APIs, compression support...
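To illustrate the point: a client can only send a gzipped request body if it already knows, out of band, that the particular server will decompress it; there is no handshake to discover that before sending. This sketch just shows what the compressed body would look like.

```typescript
import { gzipSync, gunzipSync } from 'node:zlib'

// The JSON body a client might want to compress before sending
const body = JSON.stringify({ model: 'llama3', prompt: 'Hello' })

// This is what would be sent with a 'Content-Encoding: gzip' header,
// but only if the server is already known to accept it
const compressed = gzipSync(body)

// The server-side step: decompressing back to the original body
const roundTripped = gunzipSync(compressed).toString('utf8')
```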

If you want to abort all requests (or, for simplicity, if you will only ever have one ongoing request at a time), you could use `ollama.abort()` as in this example https://github.com/ollama/ollama-js/blob/57fafae5d5e79e78f0c3abdcd2e18e7ff5fd1329/examples/abort/any-request.ts#L1-L27 except, in...

Oh well, by default, Ollama removes the model from memory after 5 minutes, so that could be what's causing this. See [the Ollama FAQ](https://github.com/ollama/ollama/blob/89c79bec8cf7a7b04f761fcc5306d2edf47a4164/docs/faq.md#how-do-i-keep-a-model-loaded-in-memory-or-make-it-unload-immediately) for more information. I would have...

You will have to be a bit clearer, sorry

Ollama doesn't have a specific feature that calls APIs for you, but you can set `format: "json"` to force the LLM to output valid JSON, and then you could...

No, because ollama-js is written in TypeScript. However, you could download the source and compile it to JavaScript, then use that.

I just tested this exact same code, and the issue doesn't happen for me

I hadn't actually seen `preload`, so thanks for showing me this! However, I couldn't get it working: `await preload` doesn't throw an error. This seems like it should be the...

That's an interesting workaround. It doesn't work if the target route uses a layout that returns data, but to get around that, I can pass the number of keys (hard-coded)...