
Results: 7 localllm issues

I'll submit a PR shortly for this trivial fix. Running `llm ps` or `llm kill` on my poor, tired development system resulted in:

```
$ llm ps
Traceback (most recent...
```

```
# Install the tools
pip3 install openai
pip3 install ./llm-tool/.

llm run TheBloke/Llama-2-13B-Ensemble-v5-GGUF 8000
python3 querylocal.py
```

Actual result: the first run of `python3 querylocal.py` works. Running `python3 querylocal.py` again fails:

```
*************http://localhost:8000/v1*************
Traceback (most recent call last):...
```

https://github.com/GoogleCloudPlatform/localllm/blob/d27376fa3f6e6bcfcd3ae9c9c8f61e163a3c1899/llm-tool/setup.py#L19-L23 And: https://github.com/GoogleCloudPlatform/localllm/blob/d27376fa3f6e6bcfcd3ae9c9c8f61e163a3c1899/llm-tool/setup.py#L34-L38 I'm the author of https://pypi.org/project/llm/, which also installs a package called `llm` and a CLI tool called `llm`. My `llm` tool is similar to localllm in...

Update point 6 in the documentation to explicitly state that cluster creation takes approximately 20 minutes, and that you need to wait for it to complete before moving forward.

In this code, I've made minor formatting improvements for better readability and adherence to the PEP 8 style guidelines. The script now uses `#!/usr/bin/env python` to make it more portable, and...
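To illustrate the portability point (the changed script itself is not shown in this preview, so this is only a generic sketch): an `env`-based shebang avoids hard-coding an interpreter path such as `/usr/bin/python`, letting the script run under whichever `python` appears first on the user's `PATH`.

```python
#!/usr/bin/env python
# The shebang above delegates interpreter lookup to env, so the script
# works on systems where python is installed in a non-standard location
# (e.g. inside a virtualenv or under /usr/local/bin).

message = "hello from a portable script"
print(message)
```

Marking the file executable (`chmod +x script.py`) then lets it be invoked directly as `./script.py`.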

When we start the process running llama-cpp-python, we provide a pipe for stderr and then promptly close it. This means that if llama-cpp-python tries to write to stderr, a broken pipe error results...
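The failure mode described above can be reproduced in isolation with a small sketch (this is not the repository's actual code, just a minimal model of it): the parent creates a stderr pipe, closes its read end immediately, and any subsequent stderr write in the child hits a broken pipe. If the output is simply unwanted, one hedged alternative is to hand the child `subprocess.DEVNULL` instead of a pipe.

```python
import subprocess
import sys

# Model of the bug: create a pipe for stderr, then close it right away.
# The child blocks once the pipe buffer fills (or writes after the close),
# so its write raises BrokenPipeError and it exits with a non-zero status.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('x' * 1000000)"],
    stderr=subprocess.PIPE,
)
proc.stderr.close()  # parent closes its end of the pipe immediately
proc.wait()          # child dies on a broken pipe

# Possible fix when the stderr output is not needed: discard it instead of
# handing the child a pipe nobody reads.
proc2 = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('ok')"],
    stderr=subprocess.DEVNULL,
)
proc2.wait()
```

With `DEVNULL` the child can write as much as it likes to stderr without ever blocking or seeing a closed descriptor.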