Propagate annotations of the OTelCol and TA custom resources to the TA deployment resource
Description:
- Propagate the annotations of the OTelCol and TA custom resources to the TA Deployment resource. Note: this is not about pod annotations (a sketch of the idea follows this list).
- Refactor: Rename the annotation function so it matches the ones for the OTelCol. To me it was quite confusing that the TA function is called `Annotations` but actually means `PodAnnotations`. I just renamed it, without any further change, for backwards compatibility. It is debatable whether the pod annotations are what should be put on the PDB and NetworkPolicy, but that's a different topic.
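For illustration, here is a minimal sketch of the propagation, under stated assumptions: `DeploymentAnnotations` and `applyAnnotations` are hypothetical names, not the actual functions in `internal/manifests`, and the merge order (TA overriding OTelCol on conflicts) is an assumption for the sketch.

```go
// Minimal illustrative sketch, not the actual operator code. It merges the
// annotations set on the OTelCol and TargetAllocator CRs into the TA
// Deployment's resource annotations (not the pod template annotations).
package targetallocator

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DeploymentAnnotations (hypothetical name) combines the annotations of both
// custom resources. Later sources win on key conflicts, so TA-level
// annotations override OTelCol-level ones here.
func DeploymentAnnotations(otelcol, ta metav1.ObjectMeta) map[string]string {
	merged := map[string]string{}
	for _, src := range []map[string]string{otelcol.Annotations, ta.Annotations} {
		for k, v := range src {
			merged[k] = v
		}
	}
	return merged
}

// applyAnnotations (hypothetical) would run while building the TA Deployment
// manifest; it sets d.ObjectMeta.Annotations and leaves
// d.Spec.Template.ObjectMeta.Annotations (the pod annotations) untouched.
func applyAnnotations(d *appsv1.Deployment, otelcol, ta metav1.ObjectMeta) {
	d.ObjectMeta.Annotations = DeploymentAnnotations(otelcol, ta)
}
```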
Resolves: #4393
Testing:
- I didn't manage to get the tests running on my machine (env setup failed), but `go test ./internal/manifests/...` succeeds. As the changes are limited to that package, I figured it's enough to raise the PR and let the checks run in the CI pipelines.
- Tested the functionality manually in a `kind` cluster.
Documentation: None
Do you mind changing an existing e2e test and adding an annotation?
E2E Test Results
| 34 files ±0 | 221 suites ±0 | 1h 59m 46s ⏱️ −17m 34s | |
|---|---|---|---|
| 88 tests ±0 | 88 ✅ ±0 | 0 💤 ±0 | 0 ❌ ±0 |
| 225 runs ±0 | 225 ✅ ±0 | 0 💤 ±0 | 0 ❌ ±0 |
Results for commit 1bb74b0d. ± Comparison against base commit b9dcedc2.
Sure, I'll take a look next week once I'm back in the office.
Added annotations to the E2E tests (for both the OTelCol and TA CRs). The E2E test that failed in the previous run (targetallocator-metrics) hit a timeout; it succeeds when run locally (same k8s version).
E2E test report files from running locally
e2e-targetallocator.xml

```xml
<testsuites name="chainsaw-report" time="122.310130" tests="5">
  <testsuite name="tests/e2e-targetallocator/targetallocator-metrics" tests="1" failures="0" errors="0" id="0" time="">
    <testcase name="targetallocator-metrics" classname="" time="19.465504"></testcase>
  </testsuite>
  <testsuite name="tests/e2e-targetallocator/targetallocator-features" tests="1" failures="0" errors="0" id="0" time="">
    <testcase name="targetallocator-features" classname="" time="58.856568"></testcase>
  </testsuite>
  <testsuite name="tests/e2e-targetallocator/targetallocator-namespace" tests="1" failures="0" errors="0" id="0" time="">
    <testcase name="targetallocator-namespace" classname="" time="62.172559"></testcase>
  </testsuite>
  <testsuite name="tests/e2e-targetallocator/targetallocator-prometheuscr" tests="1" failures="0" errors="0" id="0" time="">
    <testcase name="targetallocator-prometheuscr" classname="" time="84.477224"></testcase>
  </testsuite>
  <testsuite name="tests/e2e-targetallocator/targetallocator-kubernetessd" tests="1" failures="0" errors="0" id="0" time="">
    <testcase name="targetallocator-kubernetessd" classname="" time="90.068652"></testcase>
  </testsuite>
</testsuites>
```
e2e-targetallocator-cr.xml

```xml
<testsuites name="chainsaw-report" time="9.713641" tests="2">
  <testsuite name="tests/e2e-targetallocator-cr/targetallocator-label" tests="1" failures="0" errors="0" id="0" time="">
    <testcase name="targetallocator-label" classname="" time="1.035330"></testcase>
  </testsuite>
  <testsuite name="./tests/e2e-targetallocator-cr" tests="1" failures="0" errors="0" id="0" time="">
    <testcase name="targetallocator-cr" classname="" time="4.327177"></testcase>
  </testsuite>
</testsuites>
```
https://github.com/open-telemetry/opentelemetry-operator/actions/runs/18187651149/job/52051597858#step:8:213
Could you please review again? Thanks in advance!
@swiatekm could you please review as well?
Looks like you have some test failures @mikel-jason, can you have a look?
Solved and rebased.
```
DONE 1849 tests, 3 skipped in 116.096s
```
FYI, I found out I can run the tests with `make test`, while `go test` as described in the contributing guide gives me a `/tmp/.../tls.crt: no such file or directory` error:
https://github.com/open-telemetry/opentelemetry-operator/blob/4cbac0f37f0471c4acb1d47821d48909be832e19/CONTRIBUTING.md?plain=1#L110-L119
Thanks, commented.
From my side, I'd leave this as is for now; feel free to close it if your opinion on the matter is settled.
FYI, there are unit tests failing, but in general the CI pipeline runs more tests than `make test` does locally. I couldn't find a reason why the failing test is skipped locally, and I won't invest more time while the PR is on hold. I'd appreciate a hint if you know why this happens.