k8s-ecr-login-renew
feat(logging): zap logger, and refactor messages
Changes
- Added the `zap` logger for logging (a minimal sketch of the setup is shown below).
- We're using zap's development logger because its output is human-readable console text rather than structured JSON.
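For reviewers unfamiliar with zap, here is a minimal sketch of what the development-logger setup looks like; the actual call sites, messages, and fields in this PR may differ:

```go
package main

import "go.uber.org/zap"

func main() {
	// zap.NewDevelopment() configures a logger with human-readable console
	// output (timestamp, level, caller) instead of zap's JSON encoder.
	logger, err := zap.NewDevelopment()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	// The sugared logger allows printf-style messages; the namespace value
	// here is a made-up example, not taken from the PR.
	sugar := logger.Sugar()
	sugar.Infof("refreshing ECR credentials for namespace %q", "default")
}
```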
Reviewer(s) to verify
- Logging message readability and correctness.
Hi Bhautik, you're totally right, the current "logs" are not great. At the very least there should be a timestamp on every message, etc.
However, I'm not familiar with the zap library and would prefer to keep things as vanilla as possible. I'll try to find some time to get up to speed with this.
Hi there, hope you are well. My primary goal for logging is that the end user can take action, make a decision, or know what's happening. I tried the stdlib `log` package, but it's not concurrency-safe if you set a prefix or change other settings. You can find benchmarks at https://github.com/uber-go/zap.
Hi @nabsul Updated the Dockerfile. `--mount` uses the cached Go module directory if it exists. Please review, thanks.
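For context, this is roughly the kind of BuildKit cache mount being described; the base image, paths, and build flags below are assumptions and may not match the actual Dockerfile in this PR:

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.21 AS build
WORKDIR /src

COPY go.mod go.sum ./
# Reuse the Go module cache across builds on the same BuildKit instance;
# if the cache mount is empty, modules are downloaded as usual.
RUN --mount=type=cache,target=/go/pkg/mod go mod download

COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /out/k8s-ecr-login-renew .
```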
That's interesting. Since we're starting with the same base image each time, will there ever be a cache directory? I don't follow how this could possibly work...
> I tried the stdlib `log` package, but it's not concurrency-safe if you set a prefix or change other settings.

Since this tool never creates goroutines or sets prefixes, is this really a problem?
Yes, it allows setting a prefix, but it isn't goroutine-safe.
> That's interesting. Since we're starting with the same base image each time, will there ever be a cache directory? I don't follow how this could possibly work...
`--mount` uses the Go module directory cache if it exists; otherwise the modules are downloaded. Honestly, I don't know whether GitHub-managed runners cache this or not. But if you have a self-hosted runner, or are building the image locally, this step reuses the Go module cache.
> Yes, it allows setting a prefix, but it isn't goroutine-safe.

There are no goroutines in this application...
So, just to be clear, I'm happy to continue discussing these things, but so far I'm not convinced of the need for adding this logging library. I just want to make sure that's clear, so you don't feel like you're wasting your time.
> Honestly, I don't know whether GitHub-managed runners cache this or not.
Ah, I think I see where the confusion might be. If we were running `go build` in the GH Action itself, then this caching could work, with a few more tricks: you have to tell the GH Action to preserve the cache directory between runs. This is a good idea and could potentially speed up the build times, but the solution in this PR won't get us there.
One option would be to build the binaries outside of the Dockerfile, in the GH Action directly. The problem there is that I don't know how to do that for a multi-arch build (arm, x86, etc.).
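If we did build in the Action directly, something along these lines would preserve the Go caches between runs; the versions and steps here are assumptions, not what the repo currently does:

```yaml
- uses: actions/setup-go@v5
  with:
    go-version: '1.21'
    # setup-go can save and restore the Go module and build caches between runs
    cache: true

- name: Build
  run: go build ./...
```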
The other option is to add caching of the Docker image layers. This is pretty easy to do, and there is a cache option for `docker/build-push-action`. This is probably the easier option.
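For reference, a hedged sketch of what that might look like using the GitHub Actions cache backend; the tags, platforms, and action versions below are placeholders:

```yaml
- uses: docker/setup-buildx-action@v3

- uses: docker/build-push-action@v5
  with:
    push: true
    platforms: linux/amd64,linux/arm64
    tags: example/k8s-ecr-login-renew:latest
    # Store and reuse image layer cache via the GitHub Actions cache backend
    cache-from: type=gha
    cache-to: type=gha,mode=max
```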