[WIP] Plugins: CLI: containerize the headlamp-plugin CLI
- [ ] Optimize image size (in progress; unable to bring it below 480 MB at the moment)
- [x] Add Dockerfile
- [x] Add GA workflow
- [x] Test the image locally with Docker volumes
- [x] Container scanning with Trivy
- [x] Comment on PRs that change headlamp-plugin CLI code
- [x] Upload the Trivy scan report to the Security tab -> Code Scanning on image release (see the workflow sketch below)
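A minimal sketch of how the Trivy scan and report upload could work; the image name, tag, and report path below are placeholders, and the actual GA workflow in this PR may differ:

```sh
# Build the CLI image locally (name and tag are placeholders).
docker build -t headlamp-plugin-cli:dev .

# Scan the image with Trivy and write a SARIF report.
trivy image --format sarif --output trivy-results.sarif headlamp-plugin-cli:dev

# In CI, the SARIF file is then uploaded with the
# github/codeql-action/upload-sarif action so the findings appear
# under Security tab -> Code Scanning.
```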
Testing
```
f@f:~$ docker run --rm -v "$(pwd)/tmp/config/plugin.yml:/app/plugin.yaml" -v "$(pwd)/tmp/out:/app/plugins-dir" headlamp-plugin-cli install --config plugin.yaml --folderName plugins-dir
Installing plugins from config { configPath: 'plugin.yaml' }
Installation chunks: [ [ 'test-app-catalog', 'ai-plugin' ], [ 'prometheus' ] ]
1 of 3 (test-app-catalog): info: Installing plugin test-app-catalog
1 of 3 (test-app-catalog): info: Fetching Plugin Metadata
2 of 3 (ai-plugin): info: Installing plugin ai-plugin
2 of 3 (ai-plugin): info: Fetching Plugin Metadata
1 of 3 (test-app-catalog): info: Plugin Metadata Fetched
1 of 3 (test-app-catalog): info: Downloading Plugin
2 of 3 (ai-plugin): info: Plugin Metadata Fetched
2 of 3 (ai-plugin): info: Downloading Plugin
1 of 3 (test-app-catalog): info: Plugin Downloaded
2 of 3 (ai-plugin): info: Plugin Downloaded
1 of 3 (test-app-catalog): info: Extracting Plugin
1 of 3 (test-app-catalog): info: Plugin Extracted
Moved directory from /tmp/headlamp-plugin-temp-LbaMaH/appcatalog_headlamp_plugin to plugins-dir/appcatalog_headlamp_plugin
1 of 3 (test-app-catalog): success: Plugin Installed
1 of 3 (test-app-catalog): success: Plugin installed successfully
2 of 3 (ai-plugin): error: Checksum mismatch.
3 of 3 (prometheus): info: Installing plugin prometheus
3 of 3 (prometheus): info: Fetching Plugin Metadata
3 of 3 (prometheus): info: Plugin Metadata Fetched
3 of 3 (prometheus): info: Downloading Plugin
3 of 3 (prometheus): info: Plugin Downloaded
3 of 3 (prometheus): info: Extracting Plugin
3 of 3 (prometheus): info: Plugin Extracted
Moved directory from /tmp/headlamp-plugin-temp-aLckDH/prometheus_headlamp_plugin to plugins-dir/prometheus_headlamp_plugin
3 of 3 (prometheus): success: Plugin Installed
3 of 3 (prometheus): success: Plugin installed successfully
Bulk installation completed { total: 3, failed: 1, skipped: 0, successful: 2 }
```
NOTE: This would require a rebase once the bulk support PR has been merged.
This pull request sets up GitHub code scanning for this repository. Once the scans have completed and the checks have passed, the analysis results for this pull request branch will appear on this overview. Once you merge this pull request, the 'Security' tab will show more code scanning analysis results (for example, for the default branch). Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results. For more information about GitHub code scanning, check out the documentation.
@knrt10 an initial review here, along with the other bulk support PR, might be helpful.
I've currently brought the image size down to 100 MB. Further reduction would require creating a binary, IMO. I did try that with pkg, but pkg does not support Node 20 binaries, and the plugins CLI does not work with Node 18, so I might have to try some other solution for building a binary. I just need a signal: would a binary be required, or is 100 MB fine for a CLI tool to be run in a sidecar container?
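For reference, the pkg attempt would look roughly like this; the entry file path here is a guess on my part, and node18-linux-x64 is the newest target pkg offers:

```sh
# Hypothetical invocation: bundle the CLI entry point into a single binary.
# pkg has no node20 target, and the plugins CLI does not work on Node 18,
# which is why this route was abandoned.
npx pkg bin/headlamp-plugin.js --targets node18-linux-x64 --output headlamp-plugin
```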
Can you please add to the PR description why this is needed?
How about using npm global install?
Perhaps using global install is a good idea for the Dockerfile?
Dockerfile:
```dockerfile
ARG BASE_IMAGE=node:lts
FROM $BASE_IMAGE
ARG VERSION=latest
WORKDIR /usr/src/app
RUN npm install -g @kinvolk/headlamp-plugin@$VERSION
ENTRYPOINT ["headlamp-plugin"]
```
To run it, defaulting to the latest version:
```sh
docker build -t headlamp-k8s/headlamp-plugin .
docker run -it --rm headlamp-k8s/headlamp-plugin
```
Or with a version:
```sh
docker build --build-arg VERSION=0.11.4 -t headlamp-k8s/headlamp-plugin:0.11.4 .
docker run -it --rm headlamp-k8s/headlamp-plugin:0.11.4
```
Watch bash script
I think that, for Windows containers, the watch command should run in JavaScript rather than using bash. Maybe the watch command could be added to headlamp-plugin?
> Perhaps using global install is a good idea for the Dockerfile?
Yes, I did think of installing via the npm package. I was creating a separate containerization and headlamp-plugin container release configuration based on the initial description of the issue.
> ```dockerfile
> ARG BASE_IMAGE=node:lts
> FROM $BASE_IMAGE
> ARG VERSION=latest
> WORKDIR /usr/src/app
> RUN npm install -g @kinvolk/headlamp-plugin@$VERSION
> ENTRYPOINT ["headlamp-plugin"]
> ```
This would create a dependency on the npm package. If this approach is fine, then I guess we won't need separate release management and maintenance overhead for this container image.
> I think that, for Windows containers, the watch command should run in JavaScript rather than using bash. Maybe the watch command could be added to headlamp-plugin?

And yes, this would indeed be better, thanks for the feedback.
@illume @knrt10 please leave a reaction/comment to confirm the approach:
- add watch commands to headlamp-plugin itself
- no Dockerfile or release management needed
- the containerization part is only relevant to the in-cluster deployment, so let's use the above-mentioned Dockerfile as the in-line Dockerfile of a plugin-management sidecar container
Sounds good to me.
@illume @knrt10
Installing from the npm package causes a bulky layer in the image, irrespective of a global or local install. I've tried several base images, such as node:lts-alpine, node:lts, and alpine:3.18.
Sizes of some of the bulky dependencies of the npm package when installing with `RUN npm install --omit=dev --no-cache @kinvolk/headlamp-plugin@$VERSION`:
```
/usr/src/app/node_modules # du -sh * | sort -h | tail
7.7M elkjs
9.3M es-abstract
9.8M @esbuild
10.8M prettier
18.2M @headlamp-k8s
18.7M typescript
31.4M @storybook
55.5M @iconify
67.0M monaco-editor
135.6M @mui
/usr/src/app/node_modules #
```
With a non-global install, there is not much difference in size though:
```
f@f:~/w/oss/headlamp/plugins/headlamp-plugin$ docker history headlamp-plugin:latest
IMAGE CREATED CREATED BY SIZE COMMENT
8876b566b74b 2 days ago CMD ["headlamp-plugin"] 0B buildkit.dockerfile.v0
<missing> 2 days ago ENV PATH=/app/node_modules/.bin:/usr/local/s… 0B buildkit.dockerfile.v0
<missing> 2 days ago RUN |1 VERSION=latest /bin/sh -c npm install… 663MB buildkit.dockerfile.v0
<missing> 2 days ago WORKDIR /usr/src/app 0B buildkit.dockerfile.v0
<missing> 2 days ago ARG VERSION=latest 0B buildkit.dockerfile.v0
<missing> 2 days ago RUN /bin/sh -c apk add --no-cache nodejs npm… 59.4MB buildkit.dockerfile.v0
<missing> 2 months ago CMD ["/bin/sh"] 0B buildkit.dockerfile.v0
<missing> 2 months ago ADD alpine-minirootfs-3.18.12-x86_64.tar.gz … 7.36MB buildkit.dockerfile.v0
f@f:~/w/oss/headlamp/plugins/headlamp-plugin$
```
The total image size with node:lts-alpine and npm install comes out to around 700 MB.
Does image size matter this much for our sidecar container? If yes, I'd prefer the original approach, since I got the image down to about 100 MB by trimming node_modules, using multi-stage builds, etc., whereas there is not much room to optimize the npm-package install below 700 MB.
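To illustrate, here is a rough sketch of that kind of multi-stage trimming, assuming a plain npm install of the published package; the specific pruning steps and paths are illustrative, not the exact Dockerfile from this PR:

```dockerfile
# Build stage: install the package with everything it pulls in.
FROM node:lts-alpine AS build
ARG VERSION=latest
WORKDIR /app
RUN npm install --omit=dev @kinvolk/headlamp-plugin@$VERSION \
    # Illustrative pruning: drop the npm cache and, as an example, some of the
    # bulkiest dependencies (whether they are safe to remove depends on which
    # commands the image must support).
    && npm cache clean --force \
    && rm -rf node_modules/@storybook node_modules/monaco-editor

# Final stage: only the Node runtime plus the trimmed node_modules.
FROM node:lts-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
ENV PATH=/app/node_modules/.bin:$PATH
ENTRYPOINT ["headlamp-plugin"]
```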
The headlamp-plugin package is large because it needs all the same packages as frontend/ for testing.
Is it only plugin-management commands that need to be run? We did have a plan to make plugin-management its own package (https://github.com/kubernetes-sigs/headlamp/issues/2197), because it is also used by the app. Plugin management by itself should be a much smaller image.
I don't know how much time is left for your project, or whether this headlamp-k8s/plugin-management package could be done in time.
It could include a CLI with only the plugin-management things in there: a copy of headlamp-plugin with the other stuff removed (no test, format, lint, etc.).
If you don't have enough time to do the plugin-management package, maybe call the container image headlamp-plugin-management and just use the headlamp-plugin package in there for now. This way the npm package used could be swapped out fairly easily later.
Then it won't matter too much that the image is larger for now.
I hope I am being clear?
---
I'm not sure if it's worth considering moving most dependencies into devDependencies? Maybe it's possible to move most of the ones needed for testing/tsc/build/lint there. Then it could be installed with npm ci (or omitting dev), and commands like create and the plugin-management ones could still be run. But then the container image could only be used for these commands. So after thinking this through, I think plugin-management should be its own package?
> Maybe call the container image headlamp-plugin-management, and just use the headlamp-plugin package in there for now. This way the npm package used could be swapped out fairly easily later.
Alright, understood. I'll go with the above approach; however, I'll probably come back and implement the plugin-management package later if time permits, or even after LFX next month.
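Concretely, with the Dockerfile from above, that would just mean building and tagging under the new name; the tags below follow that naming suggestion and are not the final published ones:

```sh
# Same Dockerfile as before, still installing @kinvolk/headlamp-plugin inside,
# but published under the plugin-management name so the npm package can be
# swapped out later without renaming the image.
docker build --build-arg VERSION=0.11.4 -t headlamp-k8s/headlamp-plugin-management:0.11.4 .
docker run -it --rm headlamp-k8s/headlamp-plugin-management:0.11.4
```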
@illume I've implemented the approach discussed above, tested by temporarily publishing the package on my account and building the Headlamp server image locally.
NOTES: The latest v0.30.0 version of the ghcr.io/headlamp-k8s/headlamp image does not include the --watch-plugins-changes flag, so that image would need a new release. Also, the headlamp-plugin package would require a release, though I could not see an npm package release workflow. Does the Headlamp team do this manually from the terminal with npm publish?
Updated the PR description. Please let me know about the release of the Docker image and npm package mentioned in the notes, and whether I should separate out a PR for the headlamp-plugin changes.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Faakhir30, illume
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~OWNERS~~ [illume]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@Faakhir30 There is now a @kinvolk/[email protected]
The release is delayed because of some certificate signing issues. I don't have any clarity on when the container image will be released again. But I'll let you know when I do.