david l euler
I have the same problem on Mac M1. I use Colima as the container runtime. Finally fixed it by soft-linking the socket file:
```
sudo ln -s -f /Users/david/.colima/docker.sock /var/run/docker.sock
sudo...
```
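A possible alternative to the symlink, sketched here on the assumption that the default Colima profile is in use: point the Docker CLI at Colima's socket via the `DOCKER_HOST` environment variable instead.
```
# Assumes the default Colima profile; the socket path may differ on other setups
export DOCKER_HOST="unix://$HOME/.colima/docker.sock"
docker ps   # should now talk to the Colima daemon without the symlink
```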
A fork with a bunch of these fixes plus a Dockerfile and docker-compose file, so that you can run the project with one command: https://github.com/davideuler/PowerPaint
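For reference, a minimal sketch of what the one-command run looks like, assuming the compose file sits at the root of the fork (service names and ports depend on that file):
```
git clone https://github.com/davideuler/PowerPaint
cd PowerPaint
docker compose up --build
```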
The same error occurs on my Warp.
I guess Mistral Medium might be the Mistral 70B instruct, or some MoE like Mixtral 8x7B. If the independent datasets were merged into one large matrix, it would be perfect to...
The result for deepseek-coder-33b-instruct is a big surprise.
It is the same on my M1 Mac Studio, llama_cpp_python==0.2.43:
* When running Python code, it takes 16s on the "load time", at 3.4 tokens/sec. The GPU usage is about...
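As a side note, here is a minimal sketch (model path and prompt are placeholders, not from the thread) of how I would confirm that Metal offload is actually active in llama-cpp-python, since load time and tokens/sec depend heavily on it:
```python
from llama_cpp import Llama

# Hypothetical model path; n_gpu_layers=-1 asks llama.cpp to offload all layers to Metal
llm = Llama(
    model_path="./models/model-q4_k_m.gguf",
    n_gpu_layers=-1,
    verbose=True,  # prints load time and how many layers were offloaded
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```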
> Same for me @hbacard. I first had llama-cpp-python 0.2.27 installed and now upgraded to 0.2.39, and with the newer version my responses take 15s longer than before.

I've...
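If the regression matters, a hedged workaround sketch (version numbers taken from the thread, flags are standard pip) is to pin the older release and compare timings:
```
pip install --force-reinstall --no-cache-dir llama-cpp-python==0.2.27
```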
8GB of memory is not enough to run a 7B quantized model smoothly. Even on my 18GB Mac M3, it is slow. You need more memory on M1/M2/M3. For Intel chips, it...
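Rough arithmetic behind that claim (assuming a 4-bit quantization, so only approximate): 7B parameters at roughly 0.5-0.6 bytes per parameter is about 3.5-4GB for the weights alone, before the KV cache, the runtime, and the OS, which leaves very little headroom on an 8GB machine.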
It shows "Frame processor face_enhancer not found" when running on Apple Silicon, and an empty interface appears in the GUI.
```
python run.py --execution-provider coreml
```
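One hedged diagnostic sketch, using plain onnxruntime rather than the project's own code, to check whether the CoreML execution provider is actually available in the installed build:
```python
import onnxruntime as ort

# CoreMLExecutionProvider only appears if onnxruntime was built with CoreML support
print(ort.get_available_providers())
```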