Alex Mitchell
So I've spent about 10 hours trying to find a way to either get H.264 working with CEF or write a small web server that can stream videos from OBS...
Yeah, I've been having this issue all day. I tried increasing the memory using `docker update --memory` and `docker update --memory-swap`, but I'm not that experienced with Docker and a...
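For reference, a minimal sketch of what that looks like in practice; the `16g` value is just an example, and the container name is taken from a later comment in this thread, so substitute your own:

```
# Raise the memory limit and the memory+swap limit together,
# since Docker rejects a --memory value above the current --memory-swap.
docker update \
  --memory 16g \
  --memory-swap 16g \
  llama-gpt-llama-gpt-api-70b-1

# Confirm the new limits took effect (values are reported in bytes)
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' \
  llama-gpt-llama-gpt-api-70b-1
```

Note that on Docker Desktop the per-container limit is still capped by the total memory allocated to Docker Desktop itself under Settings → Resources.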
Not using a VM, here's the output from that command:
```
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 151
model name      : 12th Gen Intel(R)...
```
Potential duplicate of #47, my bad.
These steps worked for me when I was getting a similar issue for the 7B model (a rough CLI equivalent is sketched below):
- Launch Docker Desktop and delete the llama-gpt-llama-gpt-api-70b-1 container
- Delete llama-2-70b-chat.bin from your models folder...
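If you'd rather do those steps from a terminal instead of Docker Desktop, something like the following should be equivalent; the container name and model filename come from the steps above, and `./models` is a placeholder for wherever your llama-gpt checkout keeps its models folder:

```
# Stop and remove the API container (name taken from the steps above)
docker stop llama-gpt-llama-gpt-api-70b-1
docker rm llama-gpt-llama-gpt-api-70b-1

# Delete the cached model so it gets re-downloaded on the next start
# ("./models" is a placeholder for your llama-gpt models folder)
rm ./models/llama-2-70b-chat.bin
```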