Cguanqin
> @Zbrooklyn Change the "num_thread" [parameter](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#parameter) in your custom model file.

hey, bro~ it's still using 50% of the cores and 50% of the RAM after modifying the Modelfile to increase num_thread from 8 to 16. My Modelfile is as follows: FROM...
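For reference, a minimal sketch of what such a Modelfile could look like (the base model name below is an assumption, not the actual file from the comment above):

```
# Hypothetical Modelfile sketch -- the FROM line is an assumption;
# only the PARAMETER line reflects the num_thread change discussed here.
FROM llama2
PARAMETER num_thread 16
```

The change is assumed to take effect only after the model is rebuilt and rerun from the modified Modelfile, e.g.:

```
ollama create mymodel -f Modelfile
ollama run mymodel
```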
> @Cguanqin You are probably doing something wrong. Try to write down reproducible steps. That will probably reveal the mistake.

Oh, I don't know where the problem is. I want to...
```
[root@iZwz9fjpavyfd2ybhfxx1lZ gemma_pytorch]# docker run -t --rm \
>     -v /tmp/ckpt:/tmp/ckpt \
>     gemma:pytorch \
>     python scripts/run.py \
>     --ckpt=/tmp/ckpt \
>     --variant="2b" \
>     --prompt="The meaning of life is"...
```