xds: Add cluster endpoint watchers to dependency manager
This is part of the A74 implementation and adds CDS/EDS/DNS watchers to the dependency manager. It also adds a temporary flag, disabled by default, so that the new watchers are not used in the current RPC paths but can be enabled in the dependency manager tests.
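For illustration, here is a minimal sketch of how an off-by-default flag read from the environment could gate the new watcher path. The flag name, environment variable, and helper below are hypothetical and are not the actual grpc-go envconfig setting.

```go
// Sketch of gating a new code path behind a temporary, off-by-default flag.
// All names here are illustrative, not the real grpc-go flag.
package main

import (
	"fmt"
	"os"
	"strings"
)

// boolFromEnv returns true only when the variable is explicitly set to
// "true", so the feature stays disabled by default.
func boolFromEnv(name string) bool {
	return strings.EqualFold(os.Getenv(name), "true")
}

// depMgrWatchersEnabled gates whether the dependency manager registers
// CDS/EDS/DNS watchers; tests can force it on by setting the env var or by
// overriding this variable directly.
var depMgrWatchersEnabled = boolFromEnv("GRPC_EXPERIMENTAL_XDS_DEPMGR_WATCHERS")

func main() {
	if depMgrWatchersEnabled {
		fmt.Println("dependency manager will start cluster/endpoint watchers")
	} else {
		fmt.Println("cluster/endpoint watchers disabled (default)")
	}
}
```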
RELEASE NOTES: None
Codecov Report
:x: Patch coverage is 81.55620% with 64 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 83.35%. Comparing base (6ed8acb) to head (09842ae).
Additional details and impacted files
@@ Coverage Diff @@
## master #8744 +/- ##
==========================================
- Coverage 83.39% 83.35% -0.05%
==========================================
Files 419 419
Lines 32566 32861 +295
==========================================
+ Hits 27159 27390 +231
- Misses 4023 4067 +44
- Partials 1384 1404 +20
| Files with missing lines | Coverage Δ | |
|---|---|---|
| internal/xds/resolver/xds_resolver.go | 88.76% <100.00%> (+0.12%) | :arrow_up: |
| internal/xds/xdsdepmgr/watch_service.go | 54.76% <54.76%> (-30.96%) | :arrow_down: |
| internal/xds/xdsdepmgr/xds_dependency_manager.go | 84.15% <85.09%> (-0.75%) | :arrow_down: |
Could you also please check the codecov report here: https://github.com/grpc/grpc-go/pull/8744#issuecomment-3611618573 to see whether any of the lines flagged as uncovered genuinely need test coverage. Thanks.
I made a small refactoring commit to split the populateClusterConfigLocked method into smaller methods that handle updates for specific cluster types.
I also added a default case to the switch at the end. Note that I changed it to return true, nil, nil instead of false, nil, nil as it did earlier, because in this case we have received the cluster resource; it just happens to be a type we don't support. That shouldn't block us from sending an update if everything else in the tree is resolved, since we store the error in the cluster config anyway.
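For context, here is a hedged sketch of that pattern. The type and function names are hypothetical, and the signature is simplified to two return values instead of the actual three; the point is only that the default case records the error on the per-cluster config while still reporting the resource as received, so the rest of the resolved tree can be pushed.

```go
// Illustrative sketch, not the actual xdsdepmgr code: unsupported cluster
// types store an error on the cluster config but still count as resolved.
package main

import "fmt"

type clusterType int

const (
	clusterTypeEDS clusterType = iota
	clusterTypeDNS
	clusterTypeAggregate
	clusterTypeUnknown
)

type clusterConfig struct {
	err error // populated for unsupported or failed clusters
}

// populateClusterConfig dispatches per cluster type. The boolean reports
// whether the cluster resource has been received (and the tree node is
// resolved), independent of whether the type is supported.
func populateClusterConfig(t clusterType, cfg *clusterConfig) (resolved bool, err error) {
	switch t {
	case clusterTypeEDS:
		// Handle an EDS cluster (e.g. start an endpoints watcher).
		return true, nil
	case clusterTypeDNS:
		// Handle a LOGICAL_DNS cluster (e.g. start a DNS resolver).
		return true, nil
	case clusterTypeAggregate:
		// Handle an aggregate cluster (e.g. recurse into children).
		return true, nil
	default:
		// The resource was received but the type is unsupported: record the
		// error on the config and still report the node as resolved so the
		// rest of the tree can produce an update.
		cfg.err = fmt.Errorf("unsupported cluster type %v", t)
		return true, nil
	}
}

func main() {
	cfg := &clusterConfig{}
	resolved, err := populateClusterConfig(clusterTypeUnknown, cfg)
	fmt.Println(resolved, err, cfg.err)
}
```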