Add Dockerfile
Quick Demo
$ docker build -t nomic-ai/gpt4all:1.0.0 .
$ docker run -it --rm -v $(pwd)/gpt4all-lora-quantized.bin:/opt/gpt4all/gpt4all-lora-quantized.bin nomic-ai/gpt4all:1.0.0
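For reference, a minimal sketch of the kind of Dockerfile this PR adds, assuming (per the discussion below) that wget fetches the chat binary and the model at build time. The base image, binary name, and URLs here are placeholders, not the actual ones from the PR:

FROM ubuntu:22.04
# Install wget for fetching the release artifacts at build time.
RUN apt-get update && apt-get install -y --no-install-recommends wget \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /opt/gpt4all
# Placeholder URLs: substitute the real release locations.
RUN wget -O gpt4all-lora-quantized-linux-x86 https://example.com/gpt4all-lora-quantized-linux-x86 \
 && chmod +x gpt4all-lora-quantized-linux-x86
RUN wget -O gpt4all-lora-quantized.bin https://example.com/gpt4all-lora-quantized.bin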
Multi-Arch
$ docker buildx build --platform linux/amd64,linux/arm64 --push -t nomic-ai/gpt4all:1.0.0 .
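If this fails with "multiple platforms feature is currently not supported for docker driver", a buildx builder using the docker-container driver is needed first (the builder name below is arbitrary):

$ docker buildx create --name multiarch --driver docker-container --use
$ docker buildx inspect --bootstrap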
Mac Intel
$ docker buildx build --platform linux/amd64 --push -t nomic-ai/gpt4all-amd64:1.0.0 .
$ docker run -it --rm -v $(pwd)/gpt4all-lora-quantized.bin:/opt/gpt4all/gpt4all-lora-quantized.bin nomic-ai/gpt4all-amd64:1.0.0 /opt/gpt4all/gpt4all-lora-quantized-OSX-intel -m /opt/gpt4all/gpt4all-lora-quantized.bin
Mac M1
$ docker buildx build --platform linux/arm64 --push -t nomic-ai/gpt4all-arm64:1.0.0 .
$ docker run -it --rm -v $(pwd)/gpt4all-lora-quantized.bin:/opt/gpt4all/gpt4all-lora-quantized.bin nomic-ai/gpt4all-arm64:1.0.0 /opt/gpt4all/gpt4all-lora-quantized-OSX-m1 -m /opt/gpt4all/gpt4all-lora-quantized.bin
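Side note: --push uploads the result to a registry under the nomic-ai namespace, which requires push access. For a purely local test, --load imports a single-platform build into the local engine instead, e.g.:

$ docker buildx build --platform linux/arm64 --load -t nomic-ai/gpt4all-arm64:1.0.0 .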
If you're going to include the binary, why not include the model as well?
And this would only work for a Linux x86 image. Would be good to make one for ARM as well.
Maybe a multi-arch image?
- Model? You mean gpt4all-lora-quantized.bin? It is downloaded by wget when the image is built.
- Yes, working on it.
All I'm saying is that each time you build the image, it will download a huge file. Might as well either download the binary and the model, or copy both of them from the local workspace (see the COPY sketch after this comment).
But nice work on the contribution! 💪
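A rough sketch of that COPY-based variant, assuming both files have already been downloaded into the build context next to the Dockerfile, and that the Linux x86 chat binary is named gpt4all-lora-quantized-linux-x86 (adjust per architecture; untested):

FROM ubuntu:22.04
WORKDIR /opt/gpt4all
# Copy the binary and the model from the local workspace instead of
# downloading them, so rebuilds do not re-fetch the large model file.
COPY gpt4all-lora-quantized-linux-x86 .
COPY gpt4all-lora-quantized.bin .
RUN chmod +x /opt/gpt4all/gpt4all-lora-quantized-linux-x86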
- Multi-arch: I used docker buildx to build both amd64 and arm64 images, and both build fine. But I cannot test arm64; could you test it (a quick smoke test is sketched below)? If it works, maybe add these instructions to README.md.
- Local model file: I made some changes in the Dockerfile.
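One way to sanity-check the arm64 image without Apple hardware, assuming QEMU emulation is set up and the image contains a uname binary (overriding any entrypoint), would be:

$ docker run --rm --platform linux/arm64 --entrypoint uname nomic-ai/gpt4all-arm64:1.0.0 -m

which should print aarch64.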