Add Docker deployment capability
Add Docker deployment capability to the project.
- **Dockerfile**: Create a `Dockerfile` to define the Docker image for the project using Python 3.12, copy project files, install dependencies, and set the entry point.
- **docker-compose.yml**: Create a `docker-compose.yml` to define the Docker services, including building the project service, exposing necessary ports, and setting environment variables.
- **.circleci/config.yml**: Add a job to build and push the Docker image to a registry, and update the workflow to include the new Docker jobs.
- **README.md**: Add instructions to build and run the Docker image, and use `docker-compose` to run the services.
For more details, open the Copilot Workspace session.
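The README steps described above might look something like the following (a sketch only; the image tag `exo` and the published port are placeholders, not taken from this PR):

```shell
# Build the image from the Dockerfile in the repository root
docker build -t exo .

# Run the container directly, publishing the service port...
docker run --rm -p 52415:52415 exo

# ...or let docker-compose build and start all defined services
docker-compose up --build
```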
Did you test this on Apple Silicon?
@jincdream if you need any help testing on Apple Silicon, let me know; I'd be glad to assist if you don't have access to it.
That's great! I haven't tested it yet and I'm so glad you're willing to lend a hand. Looking forward to working with you on this. Thanks a bunch!
Hello! I have also created a Docker version, but GPU acceleration is a bit more complicated.
However, this results in a large image, and the host machine's NVIDIA driver has to support the CUDA version used inside the Docker container (in practice, the driver must be at least as new as the container's CUDA toolkit); otherwise, it won't work.
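A quick way to compare the two sides (assuming the NVIDIA Container Toolkit is installed on the host; the image tag is the CUDA devel image used in this thread):

```shell
# Host side: nvidia-smi prints the driver version and the highest
# CUDA version that driver supports
nvidia-smi

# Container side: nvcc reports the CUDA toolkit version the image ships with
docker run --rm --gpus all nvidia/cuda:12.4.0-devel-ubuntu22.04 nvcc --version
```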
Currently, I am only using this for testing:
Dockerfile

```dockerfile
FROM nvidia/cuda:12.4.0-devel-ubuntu22.04

ENV DEBIAN_FRONTEND=noninteractive

# System packages: Python toolchain, clang, GL/GLib libraries needed at runtime, and git
RUN apt-get -yqq update \
    && apt-get -yqq dist-upgrade \
    && apt-get -y --no-install-recommends install \
        python3 \
        python3-pip \
        python3-venv \
        python3-clang \
        clang \
        libgl1 \
        libglib2.0-0 \
        git \
    && rm -rf /var/lib/apt/lists/*

# Run as an unprivileged user
RUN useradd -m -d /home/container container \
    && usermod -aG adm,audio,video container

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

USER container

# Clone exo and install it (plus CUDA-enabled PyTorch) into a virtualenv
RUN git clone https://github.com/exo-explore/exo.git ~/exo \
    && python3 -m venv ~/exo/.venv \
    && cd ~/exo \
    && . ~/exo/.venv/bin/activate \
    && pip3 install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 \
    && pip3 install --no-cache-dir -e . \
    && cd ~ \
    && deactivate

EXPOSE 52415/tcp

ENTRYPOINT ["/entrypoint.sh"]
```
entrypoint.sh

```bash
#!/usr/bin/env bash
set -e
cd ~/exo
source ~/exo/.venv/bin/activate
# EXO_ARGS is left unquoted on purpose so multiple arguments are split;
# exec replaces the shell so exo receives signals (e.g. from `docker stop`) directly
exec exo ${EXO_ARGS}
```
docker-compose.yml

```yaml
services:
  docker_exo:
    network_mode: bridge
    build: .
    container_name: docker_exo_container
    restart: unless-stopped
    ports:
      - "52415:52415"
    environment:
      - NVIDIA_DRIVER_CAPABILITIES=gpu,utility,video,compute
      - EXO_ARGS=--discovery-module=udp --data=/opt/exo-data --models-seed-dir=/opt/exo-seed
    volumes:
      - ./opt/exo-data:/opt/exo-data
      - ./opt/exo-seed:/opt/exo-seed
    # devices:
    #   - "/dev/dri:/dev/dri"  # GPU
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu, utility, video, compute]
```
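With that compose file in place, starting it should be something like (assuming Compose v2 and the NVIDIA Container Toolkit are installed on the host):

```shell
# Build the image and start the service in the background
docker compose up -d --build

# Follow the container logs to confirm exo started and sees the GPU
docker logs -f docker_exo_container
```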
With this approach, exo recognizes the GPU.
With a similar Dockerfile I get an image of around 10 GiB, which is a bit insane :)
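One common way to shrink such images is a multi-stage build: install into the `devel` image, then copy only the finished virtualenv into the much smaller `runtime` CUDA image. This is only an untested sketch of the idea; the elided steps would mirror the Dockerfile above:

```dockerfile
# Sketch (untested): build stage uses the full devel image
FROM nvidia/cuda:12.4.0-devel-ubuntu22.04 AS build
# ... install python3-venv and git, clone exo, create the venv,
#     and pip install as in the Dockerfile above ...

# Final stage: runtime image without compilers and CUDA headers
FROM nvidia/cuda:12.4.0-runtime-ubuntu22.04
# ... install only python3 and the runtime libraries (libgl1, libglib2.0-0) ...
COPY --from=build /home/container/exo /home/container/exo
```

Note that the CUDA-enabled PyTorch wheels themselves account for several GiB, so even a trimmed image will stay large.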
related: https://github.com/exo-explore/exo/pull/770
Thanks so much for your contribution and for taking the time to open this PR.
Since this repository has been fully rewritten and the license has changed, I’m closing all existing open PRs to avoid confusion and to align with the new codebase.
I really appreciate your interest in the project. You're very welcome to open a new PR against the updated version if you'd like; we look forward to reviewing it!