jmtatsch

Results: 176 comments by jmtatsch

No worries. We will just let the maintainer decide.

I tried your request and for me it just goes on generating.

```
~/Workspace/devika(main*) » docker compose up
[+] Running 2/0
 ✔ Container devika-devika-backend-engine-1  Created  0.0s
 ✔ Container devika-devika-frontend-app-1    Created...
```

I think it can be enabled in config.toml:

```
[LOGGING]
LOG_REST_API = "true"
LOG_PROMPTS = "true"
```

Maybe press reload in the browser again after everything has started up. It seems to hang sometimes, such that model selection or search engine selection isn't possible.

@abetlen requested a list of prompt formats for various models.

Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:...
```

Yes, I have a multi-split Perfera with BRP069C4x controllers, which works as expected.

Maybe we should directly add OpenBLAS support? It would need these two lines:

```
RUN apt update && apt install -y libopenblas-dev
RUN LLAMA_OPENBLAS=1 pip install llama-cpp-python[server]
```
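For context, a minimal sketch of where those two lines could sit in a complete image; the python:3.10-slim base, the build-essential package, port 8000 and the llama_cpp.server entrypoint are my assumptions here, not something already agreed on:

```
# Sketch only: base image, port and entrypoint are assumptions
FROM python:3.10-slim

# Compiler toolchain plus OpenBLAS headers so the wheel builds against BLAS
RUN apt update && apt install -y build-essential libopenblas-dev

# Build llama-cpp-python with OpenBLAS enabled (flag as in the two lines above)
RUN LLAMA_OPENBLAS=1 pip install llama-cpp-python[server]

# Assumed default port of the bundled server
EXPOSE 8000

# Model file would be mounted into the container at /models
CMD ["python3", "-m", "llama_cpp.server", "--model", "/models/model.bin"]
```

Built like that, the server should pick up OpenBLAS for prompt processing on CPU-only machines.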

Here is a Dockerfile for a cuBLAS-capable container that should bring huge speed-ups for CUDA GPU owners after the next sync with upstream:

```
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04
EXPOSE...
```
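Since the listing cuts the file off, here is only a rough sketch of what such a cuBLAS image could look like; it is not the original Dockerfile, and the LLAMA_CUBLAS flag, port and entrypoint are assumptions by analogy with the OpenBLAS lines above (later llama-cpp-python versions use CMAKE_ARGS instead):

```
# Sketch only: flag name, port and entrypoint are assumptions, not the original file
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04

RUN apt update && apt install -y python3 python3-pip

# By analogy with LLAMA_OPENBLAS above; exact flag depends on the llama-cpp-python version
RUN LLAMA_CUBLAS=1 pip install llama-cpp-python[server]

EXPOSE 8000
CMD ["python3", "-m", "llama_cpp.server", "--model", "/models/model.bin"]
```

It would still need to be run with the NVIDIA container runtime, e.g. `docker run --gpus all ...`, to see the GPU.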

> @jmtatsch where is `requirements.txt` coming from?

Good catch, it isn't necessary at all. I cleaned it up above. In 0.1.36 cuBLAS is broken for me anyhow; waiting for https://github.com/ggerganov/llama.cpp/pull/1128

@abetlen We should make these two different containers then, because the NVIDIA container with cuBLAS is quite fat and not everyone has an NVIDIA card. I will make a pull...