kyma
APIRule does not re-check conflicting virtualservices
Description: I created an APIRule on a subdomain that was already taken by a VirtualService resource. The resulting status of the APIRule was quite understandable:
status:
  APIRuleStatus:
    code: ERROR
    desc: 'Validation error: Attribute ".spec.service.host": This host is occupied
      by another Virtual Service'
  accessRuleStatus:
    code: SKIPPED
  lastProcessedTime: "2020-12-06T09:08:14Z"
  observedGeneration: 1
  virtualServiceStatus:
    code: SKIPPED
So I deleted the existing VirtualService, which I had simply forgotten about. I would now expect the APIRule to become healthy. Instead, it stays in this state forever.
Expected result: The APIRule becomes healthy after the conflicting VirtualService is removed.
Actual result: The APIRule stays in the same status and never takes effect.
Steps to reproduce: Have a VirtualService with the same subdomain before you create an APIRule.
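A minimal sketch of the conflicting pair (resource names, namespace, and host are hypothetical; the APIRule shown uses the v1alpha1 schema from Kyma 1.x):

```yaml
# Pre-existing VirtualService already claiming the host (hypothetical names).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: old-service
  namespace: default
spec:
  hosts:
    - myapp.example.kyma.dev
  http:
    - route:
        - destination:
            host: old-service.default.svc.cluster.local
            port:
              number: 8080
---
# APIRule created afterwards for the same host; its status then reports
# "This host is occupied by another Virtual Service".
apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: myapp
  namespace: default
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    name: myapp
    port: 8080
    host: myapp.example.kyma.dev
  rules:
    - path: /.*
      methods: ["GET"]
      accessStrategies:
        - handler: allow
```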
Troubleshooting: Re-create the APIRule after the VirtualService has been deleted.
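Until the operator re-checks conflicts on its own, the re-create workaround can be scripted roughly like this (resource name and namespace are placeholders; this assumes kubectl access to the affected cluster):

```shell
# Back up the stuck APIRule (placeholder name/namespace).
kubectl get apirule myapp -n default -o yaml > apirule-backup.yaml

# Delete the APIRule; do this only once the conflicting
# VirtualService is already gone.
kubectl delete apirule myapp -n default

# Re-create it from the backup so the operator re-validates the host.
kubectl apply -f apirule-backup.yaml
```

The backup still contains server-side fields such as status and metadata.resourceVersion; kubectl apply generally tolerates them, but stripping them before re-applying is the safer option.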
This issue has been automatically marked as stale due to the lack of recent activity. It will soon be closed if no further activity occurs. Thank you for your contributions.
I am seeing the same problem. Any solutions guys?
Since we upgraded our cluster to 1.24.5, we see this error quite often. In our previous Kyma installation (Kyma 1.18) this was not the case. We first saw it during massively parallel deployments of multiple services and functions in the initial system setup. Meanwhile, we see this error even when single deployments are processed.
We have this issue in Kyma 1.24.x during application upgrades via Helm. It is pretty inconvenient to manually back up the APIRule, remove it (which eventually removes the VirtualService as well), and restore it from the file.
The way it can be reproduced: Helm tries to re-apply the APIRule when some part of it has changed (rules or metadata), and it does so before the current VirtualService gets deleted, so the race condition is met and the VirtualService conflicts.
Thank you, Daniel Pop
This issue or PR has been automatically marked as stale due to the lack of recent activity. Thank you for your contributions.
This bot triages issues and PRs according to the following rules:
- After 60d of inactivity, lifecycle/stale is applied
- After 7d of inactivity since lifecycle/stale was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
If you think that I work incorrectly, kindly raise an issue with the problem.
/lifecycle stale
Morning all, sorry for the late response. I can confirm that this is the case, and we have it reproduced. Thanks for your patience :)