Dataplane conformance tests
Part of #6796
Part of #5782
Proposed Changes
- Fixes unimplemented tests in dataplane
Pre-review Checklist
- [x] At least 80% unit test coverage
- [ ] E2E tests for any new behavior
- [ ] Docs PR for any user-facing impact
- [ ] Spec PR for any new API feature
- [ ] Conformance test for any change to the spec
Release Note
Fixed unimplemented Dataplane conformance test cases.
Docs
Hi @Vishal-Chdhry. Thanks for your PR.
I'm waiting for a knative member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@creydr @pierDipi Isn't https://github.com/knative/eventing/blob/1ff36e1b656d3ae8d09c6e97fadbc745ec661f48/test/rekt/features/broker/data_plane.go#L60-L63 and https://github.com/knative/eventing/blob/1ff36e1b656d3ae8d09c6e97fadbc745ec661f48/test/rekt/features/broker/data_plane.go#L112-L113 the same?
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 79.96%. Comparing base (034bec9) to head (29d1058). Report is 425 commits behind head on main.
Additional details and impacted files
Coverage Diff

| | main | #6811 | +/- |
|---|---|---|---|
| Coverage | 79.96% | 79.96% | |
| Files | 237 | 237 | |
| Lines | 12354 | 12354 | |
| Hits | 9879 | 9879 | |
| Misses | 1981 | 1981 | |
| Partials | 494 | 494 | |
Hi @pierDipi, can you please explain these to me? What does "Events contained in delivery responses" mean, and how does the reply event work?
https://github.com/knative/eventing/blob/1ff36e1b656d3ae8d09c6e97fadbc745ec661f48/test/rekt/features/broker/data_plane.go#L134-L141
And for the observability tests https://github.com/knative/eventing/blob/1ff36e1b656d3ae8d09c6e97fadbc745ec661f48/test/rekt/features/broker/data_plane.go#L154-L169 I have to get the number of events delivered using the brokerName; can you point me to where I can find the metrics functions?
Yes, correct
What does "Events contained in delivery responses" mean, and how does the reply event work?
This is explained briefly here: https://knative.dev/docs/eventing/ with:
Sinks can also be configured to respond to HTTP requests by sending a response event
For the Trigger's subscriber, the "reply event" is the HTTP response that contains a new event; it is sent back by the Trigger to the associated Broker. We need to make sure that another trigger will receive the "reply event".
Now, given what the reply event is, here are the tests in question:
Should("Events contained in delivery responses that are malformed SHOULD be treated as if the event delivery had failed.",
todo).
May("The subscriber MAY receive a confirmation that a reply event was accepted by the Broker.",
todo).
Should("If the reply event was not accepted, the initial event SHOULD be redelivered to the subscriber.",
todo)
These three tests need to cover all the cases, in particular:
- Events contained in delivery responses that are malformed SHOULD be treated as if the event delivery had failed.
This is the case where the reply event is malformed / not an actual CloudEvent; in that case, we expect the original event (the one in the request, so not the reply event) to be resent to the Trigger's subscriber (see the comment on case 3).
- The subscriber MAY receive a confirmation that a reply event was accepted by the Broker.
I would leave this as todo for now, since we haven't defined a way of doing this.
- If the reply event was not accepted, the initial event SHOULD be redelivered to the subscriber.
This is basically case 1 plus making sure that delivery is retried when case 1 happens, so a single test can cover case 3 and case 1 as well (therefore we can remove case 1). See the sketch below.
Thanks for doing this! Let me know if you need further information!
For the observability tests I would recommend doing a separate PR and connecting with @mgencur (on Slack or on the original GitHub issue), since Martin worked on some of those implementation-specific tests in the past.
f.Stable("Conformance").
Should("The Broker SHOULD expose a variety of metrics*.",
todo).
// * including, but not limited to:
//- Number of malformed produce requests (400-level responses)
//- Number of accepted produce requests (200-level responses)
//- Number of events delivered
Should("Metrics SHOULD be enabled by default, with a configuration parameter included to disable them if desired.",
todo).
Should("Upon receiving an event with context attributes defined in the CloudEvents Distributed Tracing extension the Broker SHOULD preserve that trace header on delivery to subscribers and on reply events, unless the reply is sent with a different set of tracing attributes.",
todo).
Should("Forwarded trace headers SHOULD be updated with any intermediate spans emitted by the broker.",
todo).
Should("Spans emitted by the Broker SHOULD follow the OpenTelemetry Semantic Conventions for Messaging System*.",
todo)
We can only test the tracing-related ones; in particular, we need to verify that:
Upon receiving an event with context attributes defined in the CloudEvents Distributed Tracing extension the Broker SHOULD preserve that trace header on delivery to subscribers and on reply events, unless the reply is sent with a different set of tracing attributes.
For this case, we need to verify that the Trigger's subscriber receives a traceparent header when the source/sender is configured to start a tracing context (eventshub as source should do that by default).
Forwarded trace headers SHOULD be updated with any intermediate spans emitted by the broker.
This should verify that the tracing backend contains the Broker's additional tracing spans in addition to the source and the subscriber ones.
Spans emitted by the Broker SHOULD follow the OpenTelemetry Semantic Conventions for Messaging System*.
This should verify that there is at least one tracing span in the span tree whose tags match the Semantic Conventions for Messaging Systems.
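For the first of these checks, here is a minimal sketch, assuming eventshub records the HTTP headers of received requests in its EventInfo store (field and matcher names are from memory):

```go
// Sketch only: assert that the event delivered to the Trigger's subscriber
// carries a traceparent header. Assumes eventshub.EventInfo exposes the
// received HTTP headers; field and helper names are from memory.
package broker

import (
	"fmt"

	"knative.dev/reconciler-test/pkg/eventshub"
	"knative.dev/reconciler-test/pkg/eventshub/assert"
	"knative.dev/reconciler-test/pkg/feature"
)

func subscriberReceivesTraceparent(subscriberName string) feature.StepFn {
	return assert.OnStore(subscriberName).Match(func(info eventshub.EventInfo) error {
		// eventshub as a sender starts a tracing context by default, so the
		// request reaching the subscriber should carry a traceparent header.
		// Header key casing may need adjusting depending on how headers are stored.
		if len(info.HTTPHeaders["Traceparent"]) == 0 {
			return fmt.Errorf("expected a traceparent header on the delivered event, got headers: %v", info.HTTPHeaders)
		}
		return nil
	}).AtLeast(1)
}
```

This could then replace the todo in the first tracing Should.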
/ok-to-test
/assign @creydr
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Vishal-Chdhry
Once this PR has been reviewed and has the lgtm label, please assign kvmware for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/test reconciler-tests
@Vishal-Chdhry: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| reconciler-tests_eventing_main | 29d1058753ea84d4d138cebdec7107a03ebb36e0 | link | true | /test reconciler-tests |
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@creydr In some places, it is failing with the error "pod is not running":
wait.go:413: Pod test-inkspplu/source-ttdrjpyd is not running...
wait.go:413: Pod test-inkspplu/source-ffsjcstg is not running...
wait.go:413: Pod test-inkspplu/source-gizoauac is not running...
I need to check isReady() in the setup phase, but feature.Feature is not available in the tests (only feature.T is). How do I specify setup phases?
That's a good question, as we have a bit different setup for the conformance tests :thinking:
@matzew @pierDipi are you aware of a way to add a step, e.g. to the Setup phase, from inside of a data plane conformance test? Maybe I am simply not seeing it ATM.
Hmm, no, there isn't. I was thinking about new rekt APIs for basically branching a feature into a "sub-feature", but I'm not too convinced. The other approach is to have separate tests instead of "assertions / steps".
Hey @Vishal-Chdhry,
could you check on this again? What do you think about having separate tests instead, so you have feature.T available? Or any other ideas?
I will just convert all of these into separate tests; that seems like the more straightforward approach.
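Roughly, each of the remaining steps would become its own feature, so there is a Setup phase available for installing resources and waiting for readiness before the assertion runs. A sketch of the shape, with imports as in the earlier sketch (helper names from memory; brokerAcceptsCEVersion is a hypothetical assertion step):

```go
// Sketch only: one former "assertion step" converted into a standalone feature.
func brokerAcceptsCEVersions() *feature.Feature {
	f := feature.NewFeatureNamed("Broker accepts CloudEvents versions 0.3 and 1.0")

	brokerName := feature.MakeRandomK8sName("broker")

	// Prerequisites that previously had nowhere to live (installation,
	// readiness checks) go into the Setup phase.
	f.Setup("install broker", broker.Install(brokerName, broker.WithEnvConfig()...))
	f.Setup("broker is ready", broker.IsReady(brokerName))
	f.Setup("broker is addressable", broker.IsAddressable(brokerName))

	// The conformance assertion stays a Stable step and only runs after
	// Setup has completed.
	f.Stable("Conformance").Must(
		"Broker MUST accept CloudEvents versions 0.3 and 1.0",
		brokerAcceptsCEVersion(brokerName)) // hypothetical StepFn doing the send/assert

	return f
}
```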
This Pull Request is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen with /reopen. Mark as fresh by adding the comment /remove-lifecycle stale.
/remove-lifecycle stale
Hi @vishal-chdhry, any plans to pick this up again, or do you want to close this PR?
Oh I totally forgot about this 😅
I will try to fix this, thanks for the ping @creydr!
This Pull Request is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen with /reopen. Mark as fresh by adding the comment /remove-lifecycle stale.
@vishal-chdhry do let me know if you're continuing this, or else I'm happy to continue from here.
This Pull Request is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen with /reopen. Mark as fresh by adding the comment /remove-lifecycle stale.