Chris Ward

Results: 113 comments of Chris Ward

then run this bad boy:

```
CUDA_VISIBLE_DEVICES=0 python "main.py" \
  --base configs/stable-diffusion/v1-finetune_unfrozen.yaml \
  -t \
  --actual_resume sd-models/model.ckpt \
  -n ChrisBWardProject \
  --gpus 0, \
  --reg_data_root ./regularization_images/person_ddim \
  --data_root ./training-images/resized \
  --max_training_steps 2000 \
  --class_word person \
  --token chrisbward \
  --no-test
```
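For context, a rough sketch of the folder layout that command expects; only the destination paths come from the flags above, the source paths are placeholders and not from the original comment:

```
# Minimal sketch, assuming a Dreambooth-style layout implied by the flags;
# ~/Downloads, ~/class-images and ~/subject-photos are placeholder sources.
mkdir -p sd-models regularization_images/person_ddim training-images/resized
cp ~/Downloads/model.ckpt sd-models/model.ckpt                # --actual_resume
cp ~/class-images/*.png regularization_images/person_ddim/   # --reg_data_root
cp ~/subject-photos/*.png training-images/resized/           # --data_root
```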

+1 would love to see some shaders in the mix!

```
which jupyter
/home/user/miniconda3/bin/jupyter
```

determined that it wasn't respecting the venv virtual environment. So I then did the following:

```
pip3 install ipykernel
ipython kernel install --user --name=venv
```
...
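As a quick sanity check (not part of the original comment), something like this should confirm that the `venv` kernel was registered and which interpreter it points at; the kernel.json path shown is the usual location for `--user` installs on Linux and may differ on other platforms:

```
# Run inside the activated venv; "venv" is the kernel name registered above.
source venv/bin/activate
jupyter kernelspec list                                  # should list a "venv" entry
cat ~/.local/share/jupyter/kernels/venv/kernel.json      # shows the python path the kernel uses
```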

Hi, using `Meta-Llama-3-8B-Instruct.Q5_K_M.gguf` from https://huggingface.co/PrunaAI/Meta-Llama-3-8B-Instruct-GGUF-smashed. Following @MoonRide303's Modelfile, I wrote this:

```
FROM ./Meta-Llama-3-8B-Instruct.Q5_K_M.gguf
TEMPLATE """{{ if .System }}system {{ .System }}{{ end }}user {{ .Prompt }}assistant {{ .Response...
```
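The `<|...|>` special tokens appear to have been stripped from the quoted template when the comment was rendered, and the snippet is cut off. As a rough reconstruction (an assumption based on the standard Llama 3 Instruct prompt format, not the verbatim original), the full Modelfile presumably looked something like this; the `PARAMETER stop` lines are the stop tokens commonly paired with this template and may not have been in the original:

```
# Reconstruction (assumption): standard Llama 3 Instruct chat template for Ollama.
FROM ./Meta-Llama-3-8B-Instruct.Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```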

I did that as a test and I got:

```
➜  ~ ollama run llama3-8B-instruct-gguf-q6-k
>>> hello
Hello! It's nice to meet you. Is there something I can help you...
```

```
>>> /show modelfile
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama3-8B-instruct-gguf-q6-k:latest
FROM /usr/share/ollama/.ollama/models/blobs/sha256-13c5c30a3c9404af369a7b66ce1027097ce02a6b5cc0b17a8df5e414c62d93f6...
```

Interesting! The model I downloaded does not match the sha256 checksum.
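As a quick check (not from the original thread), something like this compares the local download against the SHA256 shown on the Hugging Face file listing; the filename here is the Q5_K_M quant mentioned earlier and may differ for the Q6_K build actually being run:

```
# Compute the local file's checksum and compare it against the SHA256 value
# displayed on the model file's Hugging Face page.
sha256sum ./Meta-Llama-3-8B-Instruct.Q5_K_M.gguf
```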

Apologies - seems like I grabbed the quant from https://huggingface.co/lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF

@MoonRide303 perfect - confirming that https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF is the way to go, with your Modelfile config, thank you!