Non-empty volumes + CLI
We need to be able to use the non-empty volumes with CLI
@zipofar Please describe possible implementation options
I see two options to solve this issue:
- Send the files to some storage and then copy them into the container with the volume
- Explicitly define a container which contains the files for the volumes, then copy those files from the defined container to the container with the volume
@axisofentropy What are your thoughts on the above?
I don't know if the original ticket specified, but we want to limit this use case to very small files, like configuration files. Small enough that we can archive and compress them (probably `tar` and `gzip`) and then base64 encode them and include them with the configuration payload.
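A minimal sketch of that pipeline, assuming POSIX `tar`, `gzip`, and `base64` are available; the `config` directory name and file contents here are illustrative, not part of any actual CLI code:

```shell
#!/bin/sh
set -e

# Example config directory with one small file (illustrative only).
mkdir -p config
printf 'debug = true\n' > config/app.conf

# Archive + compress the directory, then base64-encode it into a
# single line suitable for embedding in a configuration payload.
payload=$(tar -czf - -C config . | base64 | tr -d '\n')
echo "encoded payload size: ${#payload} bytes"

# The receiving side (e.g. an initContainer) reverses the pipeline
# to populate the volume:
mkdir -p restored
printf '%s' "$payload" | base64 -d | tar -xzf - -C restored
cat restored/app.conf
```

The `tr -d '\n'` keeps the encoded value on one line so it survives being passed through JSON or YAML payloads without escaping issues.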
@axisofentropy At this point I suppose that config and env files do not work with GHA, because we do not have the project files in the CLI container. We just provide the cached compose file to the CLI.
Probably we should add a flag to specify the path for dependencies (e.g. `dependency_root_path`). In that case we can forward the project to the CLI with volumes.
For example:
docker run -v "${PWD}":/app uffizzi/cli preview create compose-file.yml --dependency_root_path /app
Does it make sense?
And can you specify limit size for tarball per volume?
When our CLI runs within a GHA, the command looks something like this:
/usr/bin/docker run --workdir /github/workspace --rm -e INPUT_GHCR-ACCESS-TOKEN -e UFFIZZI_PASSWORD -e UFFIZZI_SERVER -e UFFIZZI_USER -e UFFIZZI_PROJECT -e GITHUB_USERNAME -e GITHUB_ACCESS_TOKEN -e HOME -e GITHUB_ACTIONS=true -e CI=true -v "/home/runner/work/example-voting-app/example-voting-app":"/github/workspace" uffizzi/cli:latest "preview" "create" "--output=github-action" "docker-compose.rendered.yml"
So GitHub does mount the workspace and also specifies that the container process execute within that same directory. That's why we can specify the compose file without a path; `uffizzi preview create` executes within `/github/workspace`, which is where the compose file is provided.
We can easily add the `actions/checkout` action to our reusable workflow and examples to support adding files from a repository to a non-empty volume. But I think it will be more common for users to add files that are not in the repository, including credentials and other secrets.
The "dependency root path" should always be the current working directory (specified by `docker run --workdir`). It is up to the user (or the CI/CD author) to provide the CLI container with the appropriate files and working directory configuration.
The size limit should be driven by our backend's limitations and whatever that bottleneck is. That number could change in testing. The bottleneck may be in Postgres or it may be in HTTP requests to the controller or somewhere else. Be sure to test payloads at and near the maximum size to uncover problems, and then if reducing the maximum size would make implementation easier, do that.
To start, let's try 65,536 bytes (64 kibibytes). This limit should probably apply to the encoded (base64?) value that will be passed through our database and controller. Values larger than this should cause an exception and fatal error within our CLI, with a clear error message to the user.
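A sketch of how the CLI could enforce that limit on the encoded value; the variable names are illustrative, and only the 65,536-byte figure comes from the discussion above:

```shell
#!/bin/sh
# Reject volume payloads whose *encoded* size exceeds the limit.
MAX_ENCODED_BYTES=65536

# Hypothetical encoded payload; in practice this would come from the
# tar | gzip | base64 pipeline.
payload="$(printf 'example config contents' | base64 | tr -d '\n')"

if [ "${#payload}" -gt "$MAX_ENCODED_BYTES" ]; then
  echo "Error: volume payload is ${#payload} bytes after encoding;" \
       "the maximum is $MAX_ENCODED_BYTES bytes (64 KiB)." >&2
  exit 1
fi
echo "payload OK (${#payload} bytes)"
```

Measuring the encoded length (rather than the raw file sizes) matches the suggestion that the limit apply to the value actually passed through the database and controller; base64 inflates the raw size by roughly a third.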
And one more question. Our controller receives base64 content for volumes, but how can this content be sent to the initContainer? I thought we could add a PVC for the controller and write the decoded base64 content to a file in this PVC, but we can't attach that PVC in the deployment namespace. Am I wrong?
We can pass the encoded string verbatim to the initContainer and it can decode, decompress, and unarchive the files into the volume. I don't know if the best way to get the string into the initContainer is by environment variable, `command` argument, a `ConfigMap`, or something else. Again you may run into size limits and that's ok; enforce the new limit within the CLI.