git-sync
Do not retry on exechook failure
Is it possible to not retry command execution after a sync if the command fails (returns a non-zero exit code)? Could you please add this option?
The simple answer is to wrap it in a script that does `real-command || true`.
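As a concrete sketch of that suggestion (the file name and `hook_cmd` are assumptions, and `false` merely simulates the real hook failing so the example is self-contained):

```shell
#!/bin/sh
# wrapper.sh -- hypothetical wrapper passed as the exechook command
# instead of the real command; hook_cmd stands in for the real hook.
hook_cmd() { false; }   # 'false' simulates a failing hook

# Swallow the failure so git-sync sees exit code 0 and does not retry.
hook_cmd "$@" || true
```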
Yes, I understand, but I also want to see in the git-sync log that the command actually failed.
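One way to keep the failure visible without triggering a retry is to log it from the wrapper itself. A minimal sketch, assuming the wrapper's stderr ends up in the container logs alongside git-sync's output (`hook_cmd` and the message text are assumptions):

```shell
#!/bin/sh
# wrapper.sh -- run the real hook, record a failure on stderr so it
# shows up in the logs, but still exit 0 so git-sync does not retry.
hook_cmd() { false; }   # stands in for the real hook command

rc=0
hook_cmd "$@" || rc=$?
if [ "$rc" -ne 0 ]; then
    echo "exechook wrapper: hook failed with exit code $rc" >&2
fi
true   # the script exits 0 regardless of rc, so no retry
```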
Can you explain to me what is happening that it's OK to fail but not so OK that you want to bury it?
In my case the `--exechook-command` (a script) must validate the config that was just synced. The result of the validation must be sent or written to a status file. When validation succeeds everything works as expected, but if it fails my script returns a non-zero code and git-sync retries the validation infinitely. I understand that I could change my validation script to always return a zero code, but that breaks the script's logic and feels like a workaround.
It would be great to have an option to control this behavior and set the number of retries when the exechook command fails: -1 (infinite), 0 (no retries on failure), or 1, 2, 3, ... n times.
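Until such a retry-count option exists, the setup described above could be sketched as a wrapper that records the validation result in a status file and always exits 0 (all names here, including `validate_config` and the status file path, are assumptions, not git-sync options; `false` simulates a failing validator):

```shell
#!/bin/sh
# validate-wrapper.sh -- run the validator, write its result to a
# status file, log failures to stderr, and always exit 0 so git-sync
# does not retry the hook forever.
STATUS_FILE="${STATUS_FILE:-/tmp/validation-status}"
validate_config() { false; }   # simulated validator that fails

rc=0
validate_config "$@" || rc=$?
if [ "$rc" -eq 0 ]; then
    echo "ok" > "$STATUS_FILE"
else
    echo "failed exit=$rc" > "$STATUS_FILE"
    echo "validation failed with exit code $rc" >&2
fi
true   # exit 0 either way; the status file carries the real result
```

This keeps the validator's own exit-code logic intact: only the wrapper hides the failure from git-sync, while the status file and the log line preserve it.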
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten