
Private chat with a local GPT over documents, images, video, and more. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/

Results 420 h2ogpt issues

May I have specific instructions for the GPU installation of the tool? I have followed the installation steps, but it still says no GPU is detected. I have the following...

Hi, every time I run the `python generate.py` command, it results in this error: End auto-detect HF cache text generation models Begin auto-detect llama.cpp models End auto-detect llama.cpp models...

Hello all, I wonder if someone can tell me: when using model_lock to deploy multiple inference Gradio services, can I specify different LLM control parameters (temperature, top p, top N,...
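A minimal sketch of what such a deployment might look like, assuming the `--model_lock` JSON-list syntax shown in the h2ogpt README. Whether per-model sampling parameters are actually honored inside each dict is exactly what the question above asks and is not confirmed here; the `temperature` keys below are illustrative assumptions only:

```shell
# Hypothetical sketch: serve two models from one Gradio instance via model_lock.
# The per-model "temperature" keys are an assumption illustrating the question,
# not a documented h2ogpt feature.
python generate.py \
  --model_lock="[{'base_model': 'h2oai/h2ogpt-4096-llama2-7b-chat', 'temperature': 0.2}, \
                 {'base_model': 'HuggingFaceH4/zephyr-7b-beta', 'temperature': 0.8}]"
```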

This PR was automatically created by Snyk using the credentials of a real user. Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of...


The `h2ogpt` Linux installation method as [given here](https://github.com/h2oai/h2ogpt?tab=readme-ov-file#get-started) is as follows:

### A. Variable export instructions:

`export PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu118 https://huggingface.github.io/autogptq-index/whl/cu118"`
`export LLAMA_CUBLAS=1`
`export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=all"`
`export FORCE_CMAKE=1`

### B. Then, one...
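The exports from step A can be collected into a single shell snippet, shown below. The commented-out `pip install llama-cpp-python` line is an assumption about what the truncated step B does (a typical way to make the CMake flags take effect), not quoted from the original text:

```shell
# Point pip at the CUDA 11.8 extra wheel indexes (from step A above).
export PIP_EXTRA_INDEX_URL="https://download.pytorch.org/whl/cu118 https://huggingface.github.io/autogptq-index/whl/cu118"

# Ask llama.cpp to build with cuBLAS GPU support.
export LLAMA_CUBLAS=1
export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=all"
export FORCE_CMAKE=1

# Assumed follow-up (the original snippet is truncated at step B):
# rebuild llama-cpp-python so the flags above take effect.
# pip install llama-cpp-python --no-cache-dir --force-reinstall
```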