gaussian-splatting
Dockerizing
Has anyone successfully dockerized this? I've found this, but it seems to only be for the viewer. I'm trying to dockerize it so that I can run it on a serverless GPU.
I actually just did this over the weekend, since running the Colab notebook in Jupyter was failing due to needing a newer glibc version. My version is here: https://github.com/ookami125/gaussian-splatting-docker/tree/docker. Up to this point I've been able to train a scene and view it (with https://antimatter15.com/splat/). I'm still working on making it easier to use and build: I'd like the container build to always succeed, so this Dockerfile can be adapted when future research comes out. I also don't know if this is the idiomatic way to structure a Dockerfile; I just did what worked.
This PR also exists: https://github.com/graphdeco-inria/gaussian-splatting/pull/163. I haven't tested it, so YMMV.
I've also had a crack at this for running on a remote Ubuntu server accessed with VNC:
Dockerfile

```dockerfile
# Chosen to match the CUDA 11.7 installed on this machine
FROM nvidia/cuda:11.7.1-devel-ubuntu22.04

# Install dependencies
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update -y
RUN apt install -y build-essential wget
# for viewers
RUN apt install -y libglew-dev libassimp-dev libboost-all-dev libgtk-3-dev libopencv-dev libglfw3-dev libavdevice-dev libavcodec-dev libeigen3-dev libxxf86vm-dev libembree-dev
RUN apt install -y cmake git
# for training
ENV CONDA_DIR=/opt/conda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
    /bin/bash ~/miniconda.sh -b -p /opt/conda
ENV PATH=$CONDA_DIR/bin:$PATH
# for dataset conversion
RUN apt install -y colmap imagemagick
# cleanup
RUN apt clean && rm -rf /var/lib/apt/lists/*

# Build viewers
COPY ./SIBR_viewers /gaussian-splatting-build/SIBR_viewers
WORKDIR /gaussian-splatting-build/SIBR_viewers
RUN cmake -Bbuild . -DCMAKE_BUILD_TYPE=Release
RUN cmake --build build -j24 --target install
ENV PATH=/gaussian-splatting-build/SIBR_viewers/install/bin:$PATH

# Setup Python environment
COPY ./environment.yml /gaussian-splatting-build/environment.yml
COPY ./submodules /gaussian-splatting-build/submodules
WORKDIR /gaussian-splatting-build
RUN conda env create --file environment.yml
RUN /bin/bash -c "conda init bash"
RUN echo "conda activate gaussian_splatting" >> /root/.bashrc

# Now mount the actual directory, hopefully
VOLUME /gaussian-splatting
WORKDIR /gaussian-splatting
```
docker-compose.yml

```yaml
version: "3.8"
services:
  splat:
    command: /bin/bash
    stdin_open: true
    tty: true
    build: .
    image: xxxxxxxxxx/gaussian-splatting:cuda-11.7
    network_mode: host
    environment:
      - DISPLAY
    volumes:
      - type: bind
        source: "${HOME}/.Xauthority"
        target: /root/.Xauthority
      - type: bind
        source: .
        target: /gaussian-splatting
      - type: bind
        source: "${HOME}/Datasets"
        target: /datasets
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              # count: 1
              device_ids: ['2']
              capabilities: [gpu]
```
You can then get a console with

```shell
docker compose run --rm splat
```

and run all the `python train.py ...`, `python convert.py ...`, `SIBR_gaussianViewer_app ...` commands from there.
Works great for training. The viewers need to be run with `--no_interop`, which makes them a bit laggy, but that's how I stopped getting a segmentation fault, and I think it's the only option when attempting to run them remotely.
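For reference, an end-to-end session in this setup might look something like the following sketch. The `/datasets/my_scene` and `./output/my_scene` paths are placeholders for wherever your data and model actually live:

```shell
# Open a throwaway shell in the container (removed on exit)
docker compose run --rm splat

# Inside the container, with the gaussian_splatting conda env active:

# 1. Convert a raw image set into a COLMAP-calibrated dataset
python convert.py -s /datasets/my_scene

# 2. Train on the converted dataset, writing the model to ./output/my_scene
python train.py -s /datasets/my_scene -m ./output/my_scene

# 3. View the trained model; --no_interop avoids the remote segfault
SIBR_gaussianViewer_app -m ./output/my_scene --no_interop
```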
Thank you guys for sharing! Have you also tried running COLMAP in a container? @lachholden, yours probably works with COLMAP if you can run convert.py.
@bmikaili COLMAP ran fine in the container. The only issue was that it didn't save the downscaled images, because convert.py uses `magick convert`, and the version of ImageMagick that got installed in the container for me didn't have the `magick` command. For my purposes the 1x images were all I needed, so it hasn't mattered yet.
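As a workaround sketch for anyone who does need the downscaled sets: ImageMagick 6 installs ship the older `convert` binary instead of the `magick` entry point, so the same folders can be produced manually. This assumes the layout convert.py leaves behind (full-resolution images in `images/`), and the 50%/25%/12.5% factors are my understanding of what the script generates for `images_2/4/8`:

```shell
# Run from the dataset root, e.g. /datasets/my_scene
mkdir -p images_2 images_4 images_8
for img in images/*; do
    name=$(basename "$img")
    # ImageMagick 6 "convert" instead of the missing "magick" command
    convert "$img" -resize 50%   "images_2/$name"
    convert "$img" -resize 25%   "images_4/$name"
    convert "$img" -resize 12.5% "images_8/$name"
done
```

These are the directories that `train.py -r 2` (etc.) looks for when training at reduced resolution.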
I also didn't need `magick`, since I was using 1x images. I used COLMAP directly, however, as I was having an issue with convert.py at the time; the notebook I added has the code I used to test it: https://github.com/ookami125/gaussian-splatting-docker/blob/docker/notebooks/ProcessAndTrain.ipynb. I'll probably work on making convert.py work this weekend. I see it has a no-gpu argument, which is probably what I needed, since I'm running mine on a headless server.
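For anyone else driving COLMAP directly, this is roughly the sequence that convert.py wraps, sketched from memory; paths are placeholders, and the `use_gpu 0` flags correspond to the headless/no-GPU case:

```shell
# Raw images go in <scene>/input, matching what convert.py expects
cd /datasets/my_scene

# Detect and match SIFT features (CPU-only on a headless server)
colmap feature_extractor \
    --database_path database.db \
    --image_path input \
    --ImageReader.camera_model OPENCV \
    --SiftExtraction.use_gpu 0
colmap exhaustive_matcher \
    --database_path database.db \
    --SiftMatching.use_gpu 0

# Sparse reconstruction; writes the model to sparse/0
mkdir -p sparse
colmap mapper \
    --database_path database.db \
    --image_path input \
    --output_path sparse

# Undistort so the images match the pinhole model the trainer assumes
colmap image_undistorter \
    --image_path input \
    --input_path sparse/0 \
    --output_path . \
    --output_type COLMAP
```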
Also, since I didn't want to make a docker-compose file too, I should probably just add a command for building and running the container. I believe I used something along the lines of `docker run --name=Colab --gpus=all -v /srv/docker/colab/content:/root/content gaussian-splatting`. This doesn't expose the port, since I route everything through Traefik, but I assume the rest can be figured out. The goal of my container was to replicate the Colab environment (in spirit), so if you're running it on your local machine, I would probably suggest going with @lachholden's method.
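If you're not behind a reverse proxy like Traefik, a variant that publishes the notebook port directly might look like the following. The `8888:8888` mapping is an assumption about which port the Colab-style Jupyter server listens on inside this container:

```shell
# Same run command, but with the Jupyter port published on the host
docker run --name=Colab --gpus=all \
    -p 8888:8888 \
    -v /srv/docker/colab/content:/root/content \
    gaussian-splatting
```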
You can also check out my dockerized version in my gist. It also includes the installation of FFmpeg and VS Code.
I also installed the latest version of ImageMagick and built the latest COLMAP from source to give it GPU support and make it a lot faster. Just remember to set the `CMAKE_CUDA_ARCHITECTURES` build argument in build_image.sh based on your GPU.
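To illustrate that build argument: `CMAKE_CUDA_ARCHITECTURES` takes the GPU's compute capability with the dot removed. The exact plumbing through build_image.sh is that script's own detail, but if it forwards the value to `docker build` it would look roughly like this (the image tag here is just an example):

```shell
# Compute capability -> architecture value:
#   RTX 20xx (Turing)  = 7.5 -> 75
#   RTX 30xx (Ampere)  = 8.6 -> 86
#   RTX 40xx (Ada)     = 8.9 -> 89
docker build --build-arg CMAKE_CUDA_ARCHITECTURES=86 -t gaussian-splatting-vscode .
```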
After building and running the container, you'll have VS Code running at http://localhost:8888, which gives you a nice environment to work and experiment with Gaussian Splatting.