rules_docker
fix(nodejs): remove node binary from layers
Platform-specific nodejs binaries already come with nodejs_binary, so we don't need to put them into another layer. Also, these binaries do not respect --platforms, so when we build an image from OSX while targeting Linux, we get an image that contains both the nodejs_darwin_amd64 and nodejs_linux_amd64 binaries.
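For context, a minimal sketch of the rule this affects (target names and attribute values here are illustrative, not from the PR):

```python
# BUILD.bazel -- hypothetical example
load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")

nodejs_image(
    name = "app_image",
    entry_point = "app.js",
    data = ["app.js"],
    # No explicit layer for the node runtime is needed: the platform-specific
    # node binary already ships in the nodejs_binary runfiles that
    # nodejs_image wraps. Placing it in a separate layer duplicated it and,
    # because that layer ignored --platforms, could pull the host (darwin)
    # binary into an image built for linux.
)
```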
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: thesayyn
To complete the pull request process, please assign alex1545
You can assign the PR to them by writing /assign @alex1545 in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Hi @thesayyn and thank you for your PR.
Can you please run buildifier on the modified file, and also fix the //container:image_test target that is failing here?
@thesayyn Those node layers are meant to point to the platform specific ones, they're only aliases. If they don't work correctly then I think it's an issue with rules_nodejs not the integration here.
What happens if you move nodejs up to the top layer is that every code change will increase the size of that layer by ~60MB. So your incremental images will now always be ~62MB, instead of the current behaviour where they're ~1-2MB.
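The size arithmetic above can be sketched quickly (the figures are the rough numbers from this thread, not measurements):

```python
# If the ~60MB node binary lives in the same layer as the application code,
# every code change invalidates that layer, so the full layer must be
# rebuilt and re-pushed. Kept in its own stable (cached) layer, only the
# small app layer changes.

NODE_BINARY_MB = 60  # approximate size of the nodejs binary
APP_CODE_MB = 2      # approximate size of the application layer

# node binary bundled into the app layer: whole layer invalidated per change
incremental_push_combined = NODE_BINARY_MB + APP_CODE_MB

# node binary in its own cached layer: only the app layer is re-pushed
incremental_push_split = APP_CODE_MB

print(incremental_push_combined)  # 62
print(incremental_push_split)     # 2
```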
The problem is not only about image size, though it certainly is a problem on slow networks. Also, in some environments, such as CircleCI, there is no layer caching, which makes this a problem.
I am not sure this is a problem with rules_nodejs, though.
The size issue isn't just about slow networks; there are many other places where this would cause a regression:
- The disk storage size of repositories, most notably private repositories
- The update times of deployed applications
- The rebuild size on incremental CI systems (there are many others than CircleCI)
- Cache hits on a shared remote cache
- Storage size on workstations
There are probably more that I can't think of, but generally we just don't want larger and less incremental layers; I'd very much consider that a regression.