rover
refactor: Dockerfile and CI
Overview
- Prod image from scratch base
- Use docker-bake and buildx for building parallel multi-stage Docker images, with better caching support via gha
- Use -s -w ldflags and -trimpath (more here)
- Use UPX binary compression (more here)
- Create /tmp dir, as it's necessary for rover
- Copy certs for Terraform
- Add npm build as a testing workflow
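As a sketch of the bake setup described above, a minimal docker-bake.hcl with the gha cache backend could look like this (target and tag names here are placeholders, not the PR's actual file):

```hcl
# docker-bake.hcl — hypothetical target/tag names
group "default" {
  targets = ["rover"]
}

target "rover" {
  dockerfile = "Dockerfile"
  # build stages for several platforms in parallel
  platforms = ["linux/amd64", "linux/arm64"]
  # reuse layers across CI runs via the GitHub Actions cache backend
  cache-from = ["type=gha"]
  cache-to   = ["type=gha,mode=max"]
  tags       = ["rover:local"]
}
```

Running `docker buildx bake` then builds every target in the default group in parallel.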
go build = 24MB
-rwxr-xr-x 1 pratik pratik 17596884 Dec 18 19:28 rover
go build -trimpath -ldflags "-s -w" = 19M
-rwxr-xr-x 1 pratik pratik 19542016 Jan 14 19:43 rover
go build -trimpath -ldflags "-s -w" + UPX = 4.8M
-rwxr-xr-x 1 pratik pratik 4999224 Jan 14 19:43 rover
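Putting those flags together, the multi-stage layout the overview describes could be sketched roughly like this (stage names and the Go version are assumptions; the real Dockerfile may differ):

```dockerfile
FROM golang:1.17-alpine AS build
WORKDIR /src
COPY . .
# -s -w drops the symbol table and DWARF debug info;
# -trimpath strips local filesystem paths from the binary
RUN CGO_ENABLED=0 go build -trimpath -ldflags "-s -w" -o /rover

FROM alpine:3.15 AS compress
RUN apk add --no-cache upx
COPY --from=build /rover /rover
RUN upx /rover

FROM scratch
# scratch ships no /tmp, but rover needs one at runtime
COPY --from=compress /tmp /tmp
# CA certificates so Terraform can talk to registries over TLS
COPY --from=compress /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=compress /rover /rover
ENTRYPOINT ["/rover"]
```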
rover + UPX = 19M to 4.8M:
Ultimate Packer for eXecutables
Copyright (C) 1996 - 2020
UPX 3.96 Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020
File size Ratio Format Name
-------------------- ------ ----------- -----------
19542016 -> 4999224 25.58% linux/amd64 rover
Packed 1 file.
Terraform + UPX = 60M to 13MB (part of the slim container image):
Ultimate Packer for eXecutables
Copyright (C) 1996 - 2020
UPX 3.96 Markus Oberhumer, Laszlo Molnar & John Reiser Jan 23rd 2020
File size Ratio Format Name
-------------------- ------ ----------- -----------
61956096 -> 12771252 20.61% linux/amd64 terraform
May the source be with you
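The percentages UPX prints are simply packed size over original size; a quick arithmetic check of the two runs above (Python used here just for convenience):

```python
# Reproduce the "Ratio" column from the two UPX runs above
for name, original, packed in [
    ("rover", 19542016, 4999224),
    ("terraform", 61956096, 12771252),
]:
    ratio = packed / original * 100
    print(f"{name}: {ratio:.2f}%")
```

Both results match the Ratio column UPX reported (25.58% and 20.61%).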
Thanks for submitting this PR!
I have spotty internet this week and don't have access to my laptop, but I'll review and merge once I get the chance
Wanted to express how much I appreciate your work and I'm not just ignoring it 😄
@im2nguyen Do you have any plans for multi-platform container images? I can contribute to that
Hey @pratikbalar, what do you mean by multiplatform container images?
Okay, I don't know if you're aware, but just as we're generating multi-architecture binaries in releases, we can also create multi-architecture container images.
You can see the golang Docker Hub image as an example.

E.g. say we pull golang:1.17: Docker running on a Raspberry Pi will automatically pull the arm version of the container tag, and likewise for the other available platforms.
More here
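The manifest-list behaviour described above is what buildx produces when it is given several platforms at once; a rough example (the image name is a placeholder):

```shell
# Build one tag for several architectures and push the manifest list;
# `docker pull` then resolves the right variant for each machine
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t example/rover:latest \
  --push .
```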
Oh that's neat! Multi-arch builds would be a great addition!
See, I'm using UPX for binary compression, so I would suggest maintaining the UPXed version separately with a slim suffix. What do you think?
Do you want to try it locally?
Hello! Sorry I got back to you so late. Can you provide instructions on how I should test locally?
I can see some changes in the Dockerfile and such in the main branch, so I'll update you here once I confirm everything
@im2nguyen Do you think the image generation feature is worth 340+ MB in the uncompressed image?
That's a good question. I think we can make it optional?
One with image gen (chromium), the other one without it?
What do you think @pratikbalar?
> One with image gen (chromium), the other one without it?
Umm, I'm not for it, because that will start to create confusion, and then we'll add a slim variant on top. Also, chromium is only available for the arm64 and amd64 architectures, so I guess we have to find some other way to generate the image (picture).
https://pkgs.alpinelinux.org/packages?name=chromium&branch=edge

@im2nguyen Well, I talked to my friends. They haven't used these, but based on their knowledge they suggested :arrow_down:
https://github.com/chromedp/chromedp https://github.com/sensepost/gowitness
Let me know if you want me to dig in more.
Plus, one of my friends also suggested running ngrok and querying a public API to create a screenshot of it.
It's already using chromedp to access Chrome headlessly. The problem is that it still needs Chrome to exist in the environment.
This looks like a promising solution. https://hub.docker.com/r/chromedp/headless-shell/
Could we use this as a base and move the Terraform and rover binary?
Always appreciate contributions 😄
Just let me know how I should test your changes. If it benefits the product for everyone (it does in this case), I'll be more than happy to merge
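A rough sketch of the headless-shell idea (stage names `terraform` and `build` are assumed to refer to earlier stages in the Dockerfile; untested):

```dockerfile
# chromedp/headless-shell bundles a minimal headless Chrome that
# chromedp can drive, so -genImage has a browser available
FROM chromedp/headless-shell:latest
COPY --from=terraform /bin/terraform /usr/local/bin/terraform
COPY --from=build /rover /usr/local/bin/rover
ENTRYPOINT ["/usr/local/bin/rover"]
```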
> Could we use this as a base and move the Terraform and rover binary?
Can you give me little time for this :smile:
> Just let me know how I should test your changes. If it benefits the product for everyone (it does in this case), I'll be more than happy to merge
I haven't merged the Chrome changes, that's why. Let me push my current changes.
Also, do you/we want to maintain a Makefile for all of this?
Moved to first comment
Nice! Is there a way for me to test this locally?
Well, I don't know if you noticed, but I edited the README yesterday
see https://github.com/pratikbalar/rover/tree/feat/ci-and-dockerfile#container-image
@im2nguyen Should I separate all the changes into different PRs? And have you tried -genImage? I think it's working as expected without Chrome drivers.
And I don't know why, but now I'm getting this error on npm build
=> ERROR [ui 7/7] RUN npm run build 15.2s
------
> [ui 7/7] RUN npm run build:
#31 0.360
#31 0.360 > [email protected] build
#31 0.360 > vue-cli-service build
#31 0.360
#31 0.880
#31 0.880 - Building for production...
#31 15.16 ERROR Failed to compile with 1 error    5:14:12 AM
#31 15.16
#31 15.16 error in ./src/components/ResourceDetail.vue?vue&type=script&lang=js&
#31 15.16
#31 15.16 Module parse failed: Unexpected token (182:46)
#31 15.16 File was processed with these loaders:
#31 15.16 * ./node_modules/cache-loader/dist/cjs.js
#31 15.16 * ./node_modules/thread-loader/dist/cjs.js
#31 15.16 * ./node_modules/babel-loader/lib/index.js
#31 15.16 * ./node_modules/cache-loader/dist/cjs.js
#31 15.16 * ./node_modules/vue-loader/lib/index.js
#31 15.16 You may need an additional loader to handle the result of these loaders.
#31 15.16 |
#31 15.16 | getResourceConfig(resourceID, model, isChild) {
#31 15.16 > let configID = model.states[resourceID]?.config_id ? model.states[resourceID]?.config_id : resourceID.replace(/\[[^[\]]*\]/g, "");
#31 15.16 | let config;
#31 15.16 | if (isChild) return {
#31 15.16
#31 15.16 @ ./src/components/ResourceDetail.vue?vue&type=script&lang=js& 1:0-328 1:344-347 1:349-674 1:349-674
#31 15.16 @ ./src/components/ResourceDetail.vue
#31 15.16 @ ./node_modules/cache-loader/dist/cjs.js??ref--13-0!./node_modules/thread-loader/dist/cjs.js!./node_modules/babel-loader/lib!./node_modules/cache-loader/dist/cjs.js??ref--1-0!./node_modules/vue-loader/lib??vue-loader-options!./src/App.vue?vue&type=script&lang=js&
#31 15.16 @ ./src/App.vue?vue&type=script&lang=js&
#31 15.16 @ ./src/App.vue
#31 15.16 @ ./src/main.js
#31 15.16 @ multi ./src/main.js
#31 15.16
#31 15.16 ERROR Build failed with errors.
------
Dockerfile:12
--------------------
10 | COPY ./ui/public ./public
11 | COPY ./ui/src ./src
12 | >>> RUN npm run build
13 |
14 | FROM --platform=$BUILDPLATFORM alpine:3.15 as terraform
--------------------
error: failed to solve: process "/bin/sh -c npm run build" did not complete successfully: exit code: 1
Hey @pratikbalar, that was due to recent changes that included optional chaining.
Did you pull + merge from the main branch? I added this to the babel config a while back to address it (https://github.com/im2nguyen/rover/blob/2eaa45bda0d654053b2460ea6f8ec085fa78fe88/ui/babel.config.js#L3)
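For reference, a babel.config.js along those lines could look roughly like this (a sketch; the real plugin list is in the linked file):

```javascript
// ui/babel.config.js — sketch; see the linked file for the actual contents
module.exports = {
  presets: ["@vue/cli-plugin-babel/preset"],
  // transpile optional chaining (`?.`) so older parsers in the
  // webpack pipeline can handle the emitted code
  plugins: ["@babel/plugin-proposal-optional-chaining"],
};
```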
@im2nguyen Yeah, I've pulled + merged main. Still getting the same error.
For these kinds of situations, I've created npm build as a testing workflow, which only triggers on changes in the ui/ directory
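Such a path-filtered workflow could look roughly like this (file name, action versions, and Node version are assumptions, not the PR's actual workflow):

```yaml
# .github/workflows/npm-build.yml — sketch
name: npm build
on:
  pull_request:
    paths:
      - "ui/**"
jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ui
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: 16
      - run: npm ci
      - run: npm run build
```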
Did that resolve it?
Still getting this same error on npm build
This is great! Thank you @im2nguyen @pratikbin.
Looking forward to seeing this PR go live and to building Docker images with GitHub Actions on a regular cadence. Happy to help with the testing.