feat: Add docker and docker-compose support
Adds the ability to run open-interpreter in a container.
Wrote it for my own use, but thought I'd submit a PR in case you want it; if not, feel free to close.
Example docker-compose also provided.
- fixes #72
- relates to #188
Hi @sammcj, thanks for the contribution. This looks fantastic and perfectly implemented. May I ask what your use case for containerizing it was?
I can imagine it being appealing to let it run rampant in a container to solve something, just feeling safer that it won't damage my system, especially with the less-aligned models.
Hey @KillianLucas no worries at all!
It was twofold:
- I saw the project and thought "oh, I want to try that", then "oh damn, I'd have to install language dependencies on my server to do so".
I don't generally install anything onto the underlying OS of any servers I run; I always containerize software to keep it portable, isolated, and disposable. Many modern operating systems don't even have a read/write base OS (e.g. Fedora CoreOS), so you have to run things from containers anyway.
- I like to give back to open source projects when my time allows. When I see a potentially useful project that needs a PR I know I can write, I try to submit one without demanding it gets merged (no skin off my back if the authors don't want or agree with it). In this case I figured even if I don't use it, maybe someone else will.
This is a very cool implementation, though it adds a lot of potential maintenance debt for the ability to use local LLMs within the container.
A simpler, highly usable out-of-the-box solution with the OpenAI API would be a PR that:
- defined a Dockerfile in the root directory of the repository
- added a README.md entry for "Running Open-Interpreter in Docker"
Dockerfile:

```dockerfile
# Use the official Python 3.10 image as a base image
FROM python:3.10

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Install 'open-interpreter' using pip
RUN pip install open-interpreter

# Default command
CMD ["interpreter"]
```
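Since the original PR also shipped a docker-compose example (not reproduced in this thread), a minimal compose sketch for the Dockerfile above might look like the following; the service name, sandbox mount, and `.env` usage are assumptions, not part of the PR:

```yaml
# Hypothetical docker-compose.yml; service name, volume, and env_file are assumptions
services:
  interpreter:
    build: .
    stdin_open: true   # equivalent of `docker run -i`
    tty: true          # equivalent of `docker run -t`
    env_file:
      - .env
    volumes:
      - ./sandbox:/sandbox
```

`docker compose run interpreter` would then drop you into the same interactive session as the plain `docker run` invocation.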
Make a sandbox dir in the current working directory:

```shell
mkdir sandbox
```

Build the Docker image:

```shell
docker build -t openai-interpreter:latest .
```

(Optional) Create a `.env` file and add any environment variables, e.g.:

```
OPENAI_API_KEY=<Your key>
```

Run Open Interpreter in Docker, passing the env file and mounting the sandbox:

```shell
docker run -it --env-file .env -v ./sandbox:/sandbox openai-interpreter:latest
```
Output:

```
leif@DESKTOP-EN4VCEP:/mnt/c/Users/leifkt/killian/open-interpreter-docker$ docker run -it --env-file .env -v ./sandbox:/sandbox openai-interpreter:latest

▌ Model set to GPT-4

Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.

>
```
This would allow us to publish and version the docker image to a public container registry, tied to the specific tag of the open interpreter version, and allow anyone with Docker installed to immediately play with open interpreter locally in a safe sandbox.
A user wouldn't need pip installed or a clone of the repo; simply:

```shell
docker pull open-interpreter:<version>
docker run -it open-interpreter:<version>
```
Sorry for the extreme delay on this, Sam! Very clever engineering around the architecture of Open Interpreter at the time; ideally we can make the Docker experience even simpler. Will close this PR in favor of a simpler Dockerfile as Leif mentioned.