
APIRule does not re-check conflicting virtualservices

Open · a-thaler opened this issue 4 years ago · 10 comments

Description: I created an APIRule on a subdomain that was already taken by an existing virtualservice resource. The resulting status of the APIRule was quite understandable and looked like this:

status:
  APIRuleStatus:
    code: ERROR
    desc: 'Validation error: Attribute ".spec.service.host": This host is occupied
      by another Virtual Service'
  accessRuleStatus:
    code: SKIPPED
  lastProcessedTime: "2020-12-06T09:08:14Z"
  observedGeneration: 1
  virtualServiceStatus:
    code: SKIPPED

So I deleted the existing virtualservice, which I had simply forgotten about. I would now expect the APIRule to become healthy. Instead, it stays in this state forever.

Expected result: The APIRule becomes healthy after the conflicting virtualservice has been removed.

Actual result: The APIRule stays in the same status and never becomes effective.

Steps to reproduce: Have a virtualservice with the same subdomain in place before you create the APIRule (see the sketch below).
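As an illustration only, a minimal repro along those lines might look like the following sketch. Every name, the namespace, the host, and the gateway reference are placeholders, and the APIRule manifest assumes the v1alpha1 schema, so it may need adjusting for your Kyma version.

# Hypothetical repro: first a VirtualService claims the host, then an APIRule
# is created for the same host. All names and hosts below are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: forgotten-vs
  namespace: default
spec:
  hosts:
    - myapp.example.com
  gateways:
    - kyma-system/kyma-gateway
  http:
    - route:
        - destination:
            host: myapp.default.svc.cluster.local
            port:
              number: 8080
EOF

kubectl apply -f - <<'EOF'
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: myapp
  namespace: default
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    name: myapp
    port: 8080
    host: myapp.example.com
  rules:
    - path: /.*
      methods: ["GET"]
      accessStrategies:
        - handler: noop
EOF

# The APIRule now reports the ERROR status shown above. Deleting the
# VirtualService does not clear it:
kubectl delete virtualservice forgotten-vs -n default
kubectl get apirule myapp -n default -o jsonpath='{.status.APIRuleStatus}'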

Troubleshooting: Re-create the APIRule after the conflicting virtualservice has been deleted.
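In practice that re-create workaround boils down to something like the following sketch (the resource name and namespace are placeholders):

# Hypothetical workaround: back up the stuck APIRule, delete it, and re-apply it
# once no other VirtualService claims the host anymore.
kubectl get apirule myapp -n default -o yaml > apirule-backup.yaml
# Strip server-generated fields (status, metadata.resourceVersion, metadata.uid,
# metadata.creationTimestamp) from the backup before re-applying it.
kubectl delete apirule myapp -n default
kubectl apply -f apirule-backup.yaml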

a-thaler · Dec 07 '20 07:12

This issue has been automatically marked as stale due to the lack of recent activity. It will soon be closed if no further activity occurs. Thank you for your contributions.

kyma-stale-bot[bot] · May 13 '21 17:05

I am seeing the same problem. Any solutions, guys?

kodepareek · Jun 03 '21 08:06

Since we upgraded our cluster to 1.24.5, we have seen this error quite often. In our previous Kyma installation (Kyma 1.18) this was not the case. We saw it for the first time when we ran massively parallel deployments of multiple services and functions during the initial system setup. Meanwhile, we also see this error when single deployments are processed.

tehret77 · Oct 15 '21 10:10

We have this issue in Kyma 1.24.x during application upgrades via Helm. It is quite inconvenient to have to manually back up the APIRule, remove it (which eventually removes the virtualservice as well), and then restore it from the file.

The way to reproduce it: Helm tries to reapply the APIRule when part of it has changed (rules or metadata), and it does so before the current virtualservice has been deleted, so the race condition is hit and the virtualservices conflict (a quick way to spot the lingering virtualservice is sketched below).

Thank you, Daniel Pop
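A quick way to spot the virtualservice that still occupies the host during such an upgrade is a listing like the one below (purely illustrative; it only assumes kubectl access to the cluster):

# Hypothetical diagnostic: list every VirtualService and the hosts it claims,
# so the one conflicting with the APIRule's .spec.service.host can be found
# and removed before Helm re-applies the APIRule.
kubectl get virtualservices --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.spec.hosts}{"\n"}{end}'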

dapus123 · Dec 08 '21 09:12

This issue or PR has been automatically marked as stale due to the lack of recent activity. Thank you for your contributions.

This bot triages issues and PRs according to the following rules:

  • After 60d of inactivity, lifecycle/stale is applied
  • After 7d of inactivity since lifecycle/stale was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Close this issue or PR with /close

If you think that I work incorrectly, kindly raise an issue with the problem.

/lifecycle stale

kyma-bot · Dec 18 '22 10:12

Morning all, sorry for the late response. I can confirm that this is the case, and we have reproduced it. Thanks for your patience :)

strekm · Dec 19 '22 08:12