
Report failures of periodic jobs to the cluster-api Slack channel

sbueringer opened this issue 1 year ago • 19 comments

I noticed that CAPO is reporting periodic test failures to Slack, e.g.: https://kubernetes.slack.com/archives/CFKJB65G9/p1713540048571589

I think this is a great way to surface issues with CI (and also folks can directly start a thread based on a Slack comment like this).

This could be configured ~ like this: https://github.com/kubernetes/test-infra/blob/5d7e1db75dce28537ba5f17476882869d1b94b0a/config/jobs/kubernetes-sigs/cluster-api-provider-openstack/cluster-api-provider-openstack-periodics.yaml#L48-L55
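For reference, the linked CAPO job uses Prow's Slack reporter. A minimal sketch of what the stanza on one of our periodics could look like (channel name and template text below are illustrative, not the exact CAPO values):

```yaml
periodics:
- name: periodic-cluster-api-e2e-main
  # ... existing job spec ...
  reporter_config:
    slack:
      # Channel to post to (illustrative name)
      channel: 'cluster-api'
      # Only report terminal bad states, otherwise every run would be posted
      job_states_to_report:
        - failure
        - error
      # Go template rendered into the Slack message
      report_template: 'Job {{.Spec.Job}} ended with state {{.Status.State}}. <{{.Status.URL}}|View logs>'
```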

What do you think?

sbueringer avatar Apr 26 '24 07:04 sbueringer

cc @chrischdi @fabriziopandini

sbueringer avatar Apr 26 '24 07:04 sbueringer

This issue is currently awaiting triage.

CAPI contributors will take a look as soon as possible, apply one of the triage/* labels and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Apr 26 '24 07:04 k8s-ci-robot

Oh wow, yeah, that would be a great thing. I just fear that it may pollute the channel too much. But we could try it and fail fast: if it turns out to be too much, we ask for feedback later on in the community meeting or via a Slack thread/poll.

chrischdi avatar Apr 26 '24 08:04 chrischdi

Do we know if this respects testgrid-num-failures-to-alert? If so, it could be great.

killianmuldoon avatar Apr 26 '24 09:04 killianmuldoon

I'm not sure if it respects that. We could try and roll back if it doesn't?

sbueringer avatar Apr 26 '24 09:04 sbueringer

If it still pollutes the channel too much after considering testgrid-num-failures-to-alert, we have to focus more on CI :D

(I'm currently guessing that we would get one Slack message for every mail that we get today, but I don't know)

sbueringer avatar Apr 26 '24 09:04 sbueringer

One Slack message per mail would be perfect; more would disrupt the channel.

WDYT about enabling it for CAPV first?

killianmuldoon avatar Apr 26 '24 09:04 killianmuldoon

Also fine with making the change and rolling back if it doesn't work

killianmuldoon avatar Apr 26 '24 09:04 killianmuldoon

One Slack message per mail would be perfect; more would disrupt the channel. WDYT about enabling it for CAPV first?

Fine for me. We can also ask the OpenStack folks how spammy it is for them today (cc @mdbooth @lentzi90)

sbueringer avatar Apr 26 '24 09:04 sbueringer

For CAPO we get a Slack message for every failure and an email only after 2 failures in a row. I think it has been tolerable for us, but that indicates it does not check testgrid-num-failures-to-alert (at least the way we have it configured).

lentzi90 avatar Apr 26 '24 09:04 lentzi90

Hm okay, every failure is just too much, so we should probably take a closer look at the configuration / implementation. One message for every failure just doesn't make sense for the number of tests/failures we have (the signal/noise ratio is just wrong).

sbueringer avatar Apr 26 '24 09:04 sbueringer

+1 to testing this if we find a config that is reasonably noisy (but not too noisy). cc @kubernetes-sigs/cluster-api-release-team

/priority backlog
/kind feature

fabriziopandini avatar Apr 29 '24 11:04 fabriziopandini

+1 from my side too. Tagging CI lead @Sunnatillo. I will add this to the improvement tasks for the v1.8 cycle; the CI team can look into this one.

adilGhaffarDev avatar Apr 29 '24 12:04 adilGhaffarDev

Sounds great. I will take a look

Sunnatillo avatar Apr 30 '24 06:04 Sunnatillo

I guess testgrid-num-failures-to-alert should help with the amount of noise. If we set it to 5, for example, we can be sure we only receive messages about consistently failing tests, because with that setting the alert is sent only after 5 consecutive failures.
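As a sketch, that annotation would sit next to the other testgrid annotations on the periodic job (the dashboard, tab, and email values below are placeholders, not the job's real settings):

```yaml
periodics:
- name: periodic-cluster-api-e2e-main
  # ... existing job spec ...
  annotations:
    # Placeholder testgrid settings; a real job would keep its existing values
    testgrid-dashboards: sig-cluster-lifecycle-cluster-api
    testgrid-tab-name: capi-e2e-main
    testgrid-alert-email: your-alert-list@example.com
    # Alert only after 5 consecutive failures instead of on the first one
    testgrid-num-failures-to-alert: "5"
```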

Sunnatillo avatar May 30 '24 13:05 Sunnatillo

/assign @Sunnatillo

Sunnatillo avatar May 30 '24 13:05 Sunnatillo

@Sunnatillo testgrid-num-failures-to-alert does not affect the slack messages for CAPO at least. Only emails are affected by that in my experience.

lentzi90 avatar May 31 '24 05:05 lentzi90

@Sunnatillo testgrid-num-failures-to-alert does not affect the slack messages for CAPO at least. Only emails are affected by that in my experience.

Thank you for the update. I will open an issue in test-infra and try to find a way to do it.

Sunnatillo avatar May 31 '24 09:05 Sunnatillo

I opened an issue regarding this in test-infra: https://github.com/kubernetes/test-infra/issues/32687

Sunnatillo avatar Jun 03 '24 07:06 Sunnatillo

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 01 '24 07:09 k8s-triage-robot

Maybe let's close this here until https://github.com/kubernetes-sigs/prow/issues/195 has been implemented? (which might take a very long time if nobody volunteers for it)

sbueringer avatar Sep 02 '24 09:09 sbueringer

As per comment above
/close

fabriziopandini avatar Sep 04 '24 08:09 fabriziopandini

@fabriziopandini: Closing this issue.

In response to this:

As per comment above
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Sep 04 '24 08:09 k8s-ci-robot