
Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/

Results: 420 h2ogpt issues

I have a document that has debugging tips for different problems. When I ask the chatbot that queries the document, it gives some sentences that are correct at the beginning...

What are the minimum requirements for CPU? I ran this on an i7 with 32 GB RAM, but it ran for over 20 minutes with no answer, then I killed it. Used the .env...

When I access this path, it says: ![image](https://github.com/h2oai/h2ogpt/assets/1741341/5b636677-5246-4ba8-8760-3050b946d1b8) But OpenAI says: ![image](https://github.com/h2oai/h2ogpt/assets/1741341/ddda2851-8a4e-4ebe-824f-7f11fc1f4bb2)

Can this be installed in a Colab? Has anyone gotten this working?

huggingface -> Hugging Face

Also see:
- GPTQ: https://github.com/huggingface/text-generation-inference/pull/438
- 3x faster llama: https://github.com/turboderp/exllama

Docker with mounted .cache:
```
(h2ollm) jon@pseudotensor:~/h2ogpt/text-generation-inference$ docker run --gpus device=0 --shm-size 1g -e TRANSFORMERS_CACHE="/.cache/" -p 6112:80 -v $HOME/.cache:/.cache/ -v $PWD/data:/data ghcr.io/huggingface/text-generation-inference:0.8...
```

`(h2ollm) ubuntu@cloudvm:~/h2o-llm$ CUDA_VISIBLE_DEVICES=1 python finetune.py --base_model=decapoda-research/llama-65b-hf --llama_flash_attn=False --train_8bit=True --micro_batch_size=1 --run_id=3 --data_path=h2oai/openassistant_oasst1_h2ogpt_graded &> 3.log`

But users who close their tab or navigate away will still not be handled. See this issue: https://github.com/gradio-app/gradio/issues/4016#issuecomment-1594139152
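One generic workaround for detecting departed clients, independent of any Gradio event support, is a server-side heartbeat: the client pings periodically, and sessions whose last ping has gone stale are reaped so their pending work can be cancelled. The sketch below is a minimal, hypothetical illustration of that pattern (the class name and cancellation hook are assumptions, not part of h2ogpt or Gradio):

```python
import time


class SessionTracker:
    """Track client heartbeats; sessions that stop pinging are considered gone."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout      # seconds of silence before a session is stale
        self.last_seen = {}         # session_id -> timestamp of last heartbeat

    def heartbeat(self, session_id, now=None):
        """Record a ping from the client (e.g. called from a polling endpoint)."""
        self.last_seen[session_id] = time.time() if now is None else now

    def reap_stale(self, now=None):
        """Return and drop sessions whose last heartbeat exceeded the timeout.

        A real server would cancel the session's in-flight generation here.
        """
        now = time.time() if now is None else now
        stale = [s for s, t in self.last_seen.items() if now - t > self.timeout]
        for s in stale:
            del self.last_seen[s]
        return stale
```

A background thread (or the request handler itself) would call `reap_stale()` periodically; clients that closed their tab simply stop sending heartbeats and get cleaned up after the timeout.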