Fabric
[Feature] Added Dockerfile which takes a base Python image and installs pipx & fabric
What this Pull Request (PR) does
Introduces a Dockerfile configured to set up an environment for working with Fabric.
- Installs git, build-essential, and ffmpeg.
- Installs and configures pipx.
- Clones the fabric repo.
- Sets the working dir to /app.
- Sets the entrypoint to /usr/bin/bash.
Note: This Dockerfile is not hardened for security purposes. If you mount parts of your file system into this container, you expose them to whatever runs inside it. Users should probably be made aware of this before its use is suggested.
Feedback and suggestions for improvements and hardening are welcome! 🐱
Related issues
#25 closes #275
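For anyone who wants to try the image, a typical build-and-run flow would look something like this; the fabric:latest tag is just an example name, not anything the repo prescribes:
# Build the image from the Dockerfile in the repo root, then drop into its bash entrypoint
docker build -t fabric:latest .
docker run --rm -it fabric:latest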
This is great! And I suggest something like this:
FROM python:3.12.2-slim
# Install required packages
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
git \
build-essential \
ffmpeg \
sudo \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
# Create a non-root user
RUN useradd --create-home appuser \
&& echo "appuser ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/appuser \
&& chmod 0440 /etc/sudoers.d/appuser
USER appuser
# Set up work directory
WORKDIR /home/appuser/app
# Install pipx
RUN python3 -m pip install --upgrade pip \
&& python3 -m pip install --user pipx \
&& python3 -m pipx ensurepath
# Clone the repository and install its dependencies
RUN git clone https://github.com/danielmiessler/fabric.git \
&& python3 -m pipx install ./fabric
# Set file permissions
RUN chmod -R 755 /home/appuser/app
# Use a least privileged user
USER appuser
# Set the entrypoint
ENTRYPOINT ["/usr/bin/bash"]
This is a bit better, security-wise, and you can mount your ~/.config/fabric inside the container to use your API keys.
docker run --rm -i -t -v C:\Users\kayvan\.config\fabric\:/home/appuser/.config/fabric fabric:latest
appuser@07eba7feb23e:~/app$ ls -la ~/.config/fabric/
total 12
drwxrwxrwx 1 root root 4096 Apr 6 20:34 .
drwxr-xr-x 3 root root 4096 Apr 8 06:57 ..
-rwxrwxrwx 1 root root 250 Mar 30 23:58 .env
-rwxrwxrwx 1 root root 4970 Apr 6 20:34 fabric-bootstrap.inc
drwxrwxrwx 1 root root 4096 Apr 6 20:34 patterns
appuser@07eba7feb23e:~/app$ fabric --listmodels
GPT Models:
gpt-3.5-turbo
gpt-3.5-turbo-0125
gpt-3.5-turbo-0301
gpt-3.5-turbo-0613
gpt-3.5-turbo-1106
gpt-3.5-turbo-16k
gpt-3.5-turbo-16k-0613
gpt-3.5-turbo-instruct
gpt-3.5-turbo-instruct-0914
gpt-4
gpt-4-0125-preview
gpt-4-0613
gpt-4-1106-preview
gpt-4-1106-vision-preview
gpt-4-turbo-preview
gpt-4-vision-preview
Local Models:
Claude Models:
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-haiku-20240307
claude-2.1
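The docker run above uses a Windows path; on Linux or macOS the equivalent mount would presumably look something like this (again assuming the image is tagged fabric:latest and your config lives in ~/.config/fabric):
docker run --rm -it -v "$HOME/.config/fabric:/home/appuser/.config/fabric" fabric:latest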
Or even use local Ollama models running on the host like this:
appuser@07eba7feb23e:~/app$ export OLLAMA_HOST=host.docker.internal
appuser@07eba7feb23e:~/app$ fabric --listmodels
GPT Models:
gpt-3.5-turbo
gpt-3.5-turbo-0125
gpt-3.5-turbo-0301
gpt-3.5-turbo-0613
gpt-3.5-turbo-1106
gpt-3.5-turbo-16k
gpt-3.5-turbo-16k-0613
gpt-3.5-turbo-instruct
gpt-3.5-turbo-instruct-0914
gpt-4
gpt-4-0125-preview
gpt-4-0613
gpt-4-1106-preview
gpt-4-1106-vision-preview
gpt-4-turbo-preview
gpt-4-vision-preview
Local Models:
codellama:13b
codellama:latest
dolphincoder:latest
llama2:13b
llama2:latest
mistral:latest
starcoder2:15b
Claude Models:
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-haiku-20240307
claude-2.1
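One caveat worth noting: host.docker.internal resolves out of the box with Docker Desktop on Windows and macOS, but on a plain Linux engine you generally have to map it yourself, for example with the --add-host flag (available since Docker 20.10):
docker run --rm -it --add-host=host.docker.internal:host-gateway -v "$HOME/.config/fabric:/home/appuser/.config/fabric" fabric:latest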
Awesome feedback! @ksylvan I've added in the recommended changes. The mounting of the config file is also very clean.
Thanks!
nice work! I was about to try to dockerise fabric and you've already done it :)
Thanks! @gilesheron
Side note for everyone who comes across this PR: I'll update the fork occasionally with the latest changes, but the Dockerfile itself clones from the main repo, so you won't have to worry about updates there. (If fabric moves to Python 3.13 support, I'll make sure to update the Dockerfile to the latest Python base image.)
This is amazing work but we're migrating to go before too long, so it won't be needed. But THANK YOU.
Sounds good! Looking forward to the migration! 😄
@7kevin49 This is great, thanks for this. I was trying to do this myself until I found your message. One question, and sorry if this is a dumb question, I'm new to all this: I got my container set up and all looks good, but the container does not keep running. Is there anything I need to do in order for the container to stay running?
@Ishidad fabric isn't usually the kind of command you keep running, so I don't understand your question. The idea of the container is to offer another easy packaging of the whole stack and make it accessible to more people.
What is it you're trying to do with the container?
Thanks for your reply. A friend helped me understand what you mentioned. My idea was to connect to the container to run the prompts, as I want to have it on a server and not on my local machine.
> This is amazing work but we're migrating to go before too long, so it won't be needed. But THANK YOU.
Out of curiosity, why would migrating to Go require closing, rather than merging, a working Dockerfile for the project as-is? As an engineer, I'd rather run containerized services than install global packages for individual projects on bare metal that cause "works on my machine" type problems elsewhere.
Is there a practical need for the user shenanigans @ksylvan suggested? Why should we care about permissions inside a Docker container that has a specific purpose?
they have switched to go. is this going to be updated?
I think there should be a .env file to set the following (see the sketch after this list for one way to pass it in):
local_ollama_server_address: {localhost:11434}
local_default_ollama_model: {llama3.1}
openai_key:
claude_key:
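For illustration only, a file like that could be handed to the container with docker run's --env-file flag; the variable names above are just this suggestion, not an existing fabric convention:
docker run --rm -it --env-file .env fabric:latest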
> This is amazing work but we're migrating to go before too long, so it won't be needed. But THANK YOU.
Sorry, but containerising this project is important. I deploy my ai-stack, which consists of the following services, all in one Docker network and docker-compose; otherwise I will be dealing with dependency hell forever:
ollama
open-webui
whisper
stable diffusion
searxng
fabric (yet to be deployed using docker)
> Thanks for your reply. A friend helped me understand what you mentioned. My idea was to connect to the container to run the prompts, as I want to have it on a server and not on my local machine.
Use docker run -d, which runs the container as a daemon in the background; you can then connect to it as and when required.
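For example (the container name kept-fabric is arbitrary), starting the container detached with a TTY keeps the bash entrypoint alive, and you can then exec into it whenever you want to run a prompt:
# -d detaches, -i keeps stdin open, -t allocates a TTY so bash doesn't exit
docker run -dit --name kept-fabric fabric:latest
docker exec -it kept-fabric bash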
> Is there a practical need for the user shenanigans @ksylvan suggested? Why should we care about permissions inside a Docker container that has a specific purpose?
It's just generally good practice. Some rules around Docker also require that applications run as a non-root user.
> This is amazing work but we're migrating to go before too long, so it won't be needed. But THANK YOU.
> Sorry, but containerising this project is important. I deploy my ai-stack, which consists of the following services, all in one Docker network and docker-compose; otherwise I will be dealing with dependency hell forever: ollama, open-webui, whisper, stable diffusion, searxng, fabric (yet to be deployed using docker).
You can change to a different commit within the Dockerfile after cloning the repo. Just be aware you'll be running outdated/unmaintained code.
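As a sketch of that, the clone step in the Dockerfile could check out a fixed revision before installing; <commit-sha> below is a placeholder for whatever commit you want to pin:
# Pin the clone to a specific commit instead of tracking main
RUN git clone https://github.com/danielmiessler/fabric.git \
&& git -C fabric checkout <commit-sha> \
&& python3 -m pipx install ./fabric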
> You can change to a different commit within the Dockerfile after cloning the repo. Just be aware you'll be running outdated/unmaintained code.
are you planning to dockerise the new go version by any chance? @7kevin49 ??
I can give it a shot later today. Looks like it should be rather straightforward.
That would be great, thanks!
I created one already...
There is a PR here (https://github.com/danielmiessler/fabric/pull/845)