[ES] Add `Scope` field to DB Span
Which problem is this PR solving?
- Fixes a part of: #7034
Description of the changes
- Currently it is impossible to record scope information completely, because there is no way to store scope attributes in the ES DB Span model. This change therefore adds a scope field to the DB span.
How was this change tested?
- Unit and integration tests
Checklist
- [x] I have read https://github.com/jaegertracing/jaeger/blob/master/CONTRIBUTING_GUIDELINES.md
- [x] I have signed all commits
- [x] I have added unit tests for the new functionality
- [x] I have run lint and test steps successfully
  - for jaeger: `make lint test`
  - for jaeger-ui: `npm run lint` and `npm run test`
Codecov Report
:x: Patch coverage is 95.00000% with 3 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 96.09%. Comparing base (734e1be) to head (3ddaf4b).
:warning: Report is 322 commits behind head on main.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...torage/v2/elasticsearch/tracestore/from_dbmodel.go | 92.68% | 2 Missing and 1 partial :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## main #7039 +/- ##
==========================================
+ Coverage 96.04% 96.09% +0.04%
==========================================
Files 352 352
Lines 20821 20820 -1
==========================================
+ Hits 19998 20007 +9
+ Misses 620 613 -7
+ Partials 203 200 -3
| Flag | Coverage Δ | |
|---|---|---|
| badger_v1 | 9.96% <0.00%> (-0.01%) | :arrow_down: |
| badger_v2 | 2.07% <0.00%> (-0.01%) | :arrow_down: |
| cassandra-4.x-v1-manual | 14.99% <0.00%> (-0.01%) | :arrow_down: |
| cassandra-4.x-v2-auto | 2.06% <0.00%> (-0.01%) | :arrow_down: |
| cassandra-4.x-v2-manual | 2.06% <0.00%> (-0.01%) | :arrow_down: |
| cassandra-5.x-v1-manual | 14.99% <0.00%> (-0.01%) | :arrow_down: |
| cassandra-5.x-v2-auto | 2.06% <0.00%> (-0.01%) | :arrow_down: |
| cassandra-5.x-v2-manual | 2.06% <0.00%> (-0.01%) | :arrow_down: |
| elasticsearch-6.x-v1 | 19.98% <100.00%> (+0.03%) | :arrow_up: |
| elasticsearch-7.x-v1 | 20.06% <100.00%> (+0.03%) | :arrow_up: |
| elasticsearch-8.x-v1 | 20.23% <100.00%> (+0.03%) | :arrow_up: |
| elasticsearch-8.x-v2 | 2.07% <0.00%> (-0.01%) | :arrow_down: |
| grpc_v1 | 11.51% <0.00%> (-0.01%) | :arrow_down: |
| grpc_v2 | 9.14% <0.00%> (-0.01%) | :arrow_down: |
| kafka-3.x-v1 | 10.24% <0.00%> (-0.01%) | :arrow_down: |
| kafka-3.x-v2 | 2.07% <0.00%> (-0.01%) | :arrow_down: |
| memory_v2 | 2.07% <0.00%> (-0.01%) | :arrow_down: |
| opensearch-1.x-v1 | 20.11% <100.00%> (+0.03%) | :arrow_up: |
| opensearch-2.x-v1 | 20.11% <100.00%> (+0.03%) | :arrow_up: |
| opensearch-2.x-v2 | 2.07% <0.00%> (-0.01%) | :arrow_down: |
| tailsampling-processor | 0.56% <0.00%> (-0.01%) | :arrow_down: |
| unittests | 94.85% <95.00%> (+0.04%) | :arrow_up: |
Flags with carried forward coverage won't be shown.
@yurishkuro Good news! Queries no longer crash regardless of whether the mappings have been updated. This is possible by using the IgnoreUnmapped option of nested queries. We can verify this (I did) by copying only the reader and model changes to a new branch and running the integration tests: they pass without any errors. As for the mapping changes: yes, they are applied on Jaeger restart. I tested this with the following steps:
- Check out the main branch and modify the bash script of the ES E2E tests so that the storage is not torn down after the tests.
- Modify the tests so that indices and mappings are not cleared (comment out the cleanup functions).
- Run the integration tests!
- Curl the ES end-point to get the mappings.
curl -X GET "http://localhost:9200/_mapping?pretty"
- Now check out this branch.
- Modify the integration tests by commenting out all the spanstore tests except the FindTraces tests.
- While these tests are running, curl the ES endpoint to see the mappings (you will see that scope has been added to the mapping).
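The IgnoreUnmapped behavior described above corresponds to the `ignore_unmapped` option of Elasticsearch nested queries: when set, a nested query against a path that is not mapped (e.g. an index created before this change) simply matches nothing instead of failing the whole search. Below is a hedged sketch of such a query body built with only the standard library; the `scope.attributes` path and the `.key`/`.value` sub-fields are illustrative, not the exact field names used by the reader.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildNestedTagQuery builds an ES nested query matching a key/value pair
// under a nested path (path name is illustrative). With
// "ignore_unmapped": true, the query silently matches nothing on old
// indices where the nested mapping does not exist yet, instead of
// returning a search error.
func buildNestedTagQuery(path, key, value string) map[string]any {
	return map[string]any{
		"nested": map[string]any{
			"path":            path,
			"ignore_unmapped": true,
			"query": map[string]any{
				"bool": map[string]any{
					"must": []any{
						map[string]any{"match": map[string]any{path + ".key": key}},
						map[string]any{"match": map[string]any{path + ".value": value}},
					},
				},
			},
		},
	}
}

func main() {
	q := buildNestedTagQuery("scope.attributes", "telemetry.distro", "demo")
	b, _ := json.MarshalIndent(q, "", "  ")
	fmt.Println(string(b))
}
```

The same flag is exposed by common Go ES clients (e.g. as an `IgnoreUnmapped` setter on their nested-query builders), which is presumably what the reader change uses.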
Good news! Now queries are not crashing no matter whether mappings are updated or not. This has become possible by using IgnoreUnmapped of nested queries.
Indeed good news. However, the scope attributes are not being tested in the e2e tests today; we need to add that to the suite. It may require some refactoring, because the existing e2e tests work off the v1 model, which cannot adequately represent scope attributes.
https://github.com/jaegertracing/jaeger/blob/5873d1ffa8541446f62781eb6a87f0823bed6e57/internal/storage/integration/integration.go#L348
We need to add the ability to read fixtures in OTLP and use v2 Storage API to write from the test (in the v2 e2e tests these writes are already automatically upgraded to v2 Storage API, but since the input is v1 we still can't test the scope attributes). This could be a separate prequel PR - ideally we would upgrade all fixtures to OTLP first, then it will be easier to extend fixtures to include resource and scope attributes.
We need to add the ability to read fixtures in OTLP and use v2 Storage API to write from the test
NB: this doesn't need to be a big bang, we can just upgrade the path to write via V2 API first and remove unnecessary translations on the way ... https://github.com/jaegertracing/jaeger/issues/7050
@yurishkuro Then before that, I think we need to implement the factory for ES, or should we directly use the v2 writer? I am thinking of first refactoring the v1 factory, then implementing the v2 factory, and finally upgrading the integration tests. Does that look fine?
I think we need to implement the factory for ES
well, refactoring e2e tests is a good chunk of changes by itself. It might be easier to finish v2 memory storage first and use it as a guinea pig, rather than introducing dependency on ES just for the e2e tests upgrade. It's also much easier/faster to run e2e tests for memory storage.
I have a doubt about this. Currently we have implemented the v2 APIs for two backends: Elasticsearch and Memory. The problem is that they would give structurally different traces (with no information loss): for example, in ES every span is appended to a separate resource span, but this is not the case in the memory backend. So how should we do the testing now?
v2 API is also implemented for GRPC storage.
would give different traces (only structure wise)
we can apply a normalization function in the test before comparing with the expected fixture, e.g. split the whole payload so that each span has its own copy of the resource & scope objects (even if they end up duplicated).
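The normalization idea above can be sketched as follows. The struct definitions are simplified, hypothetical stand-ins for the OTLP ResourceSpans/ScopeSpans hierarchy; a real implementation would operate on pdata (`ptrace.Traces`). The point is that after normalization, a backend that groups many spans under one resource and a backend that emits one resource per span compare equal.

```go
package main

import "fmt"

// Simplified, hypothetical stand-ins for the OTLP trace hierarchy.
type Span struct{ Name string }

type ScopeSpans struct {
	Scope string
	Spans []Span
}

type ResourceSpans struct {
	Resource string
	Scopes   []ScopeSpans
}

// normalize rewrites the trace so that every span sits in its own
// ResourceSpans/ScopeSpans pair, duplicating the resource and scope
// objects as needed. Structural differences between backends (grouped
// vs. one-resource-per-span) disappear after this transformation.
func normalize(in []ResourceSpans) []ResourceSpans {
	var out []ResourceSpans
	for _, rs := range in {
		for _, ss := range rs.Scopes {
			for _, sp := range ss.Spans {
				out = append(out, ResourceSpans{
					Resource: rs.Resource,
					Scopes:   []ScopeSpans{{Scope: ss.Scope, Spans: []Span{sp}}},
				})
			}
		}
	}
	return out
}

func main() {
	grouped := []ResourceSpans{{
		Resource: "service-a",
		Scopes:   []ScopeSpans{{Scope: "lib", Spans: []Span{{Name: "s1"}, {Name: "s2"}}}},
	}}
	// Two spans under one resource become two single-span entries.
	fmt.Println(len(normalize(grouped)))
}
```

A test would then run `normalize` on both the stored-and-retrieved trace and the fixture before the deep comparison; span ordering would still need to be made deterministic (e.g. sorted by span ID) for the comparison to be stable.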
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. You may re-open it if you need more time.
@Manik2708 is this still needed?
@yurishkuro Extremely sorry for not looking into this. This can be completed once the refactoring of the e2e tests is done, and then we can add integration tests covering scope.
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. You may re-open it if you need more time.
This pull request has been automatically closed due to inactivity. You may re-open it if you need more time. We really appreciate your contribution and we are sorry that this has not been completed.