jaeger-ui
Allow a tag to be displayed next to operation name
Which problem is this PR solving?
- There is a need to display additional information for each Span in a prominent position (next to the operation name). For example, with the following config.json:

{
  "opLabelTag": "http.status_code"
}

we get:
Short description of the changes
- Adds a configuration option opLabelTag which, when set, shows the value of the specified tag for a Span, if present, in parentheses next to the operation name in the trace view.
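For illustration (the operation name and status value here are made up): with the config above, a span with operation name HTTP GET /customer that carries a tag http.status_code=200 would be shown as HTTP GET /customer (200) in the timeline.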
Codecov Report
Attention: 14 lines in your changes are missing coverage. Please review.
Comparison is base (1ee0cf1) 92.74% compared to head (ca1577c) 96.57%. Report is 783 commits behind head on main.
:exclamation: Current head ca1577c differs from pull request most recent head e21c43c. Consider uploading reports for the commit e21c43c to get more accurate results.
Additional details and impacted files
@@ Coverage Diff @@
## main #487 +/- ##
==========================================
+ Coverage 92.74% 96.57% +3.83%
==========================================
Files 193 254 +61
Lines 4672 7620 +2948
Branches 1126 1986 +860
==========================================
+ Hits 4333 7359 +3026
+ Misses 299 261 -38
+ Partials 40 0 -40
As I understand, we already allow mini-templating in the link patterns. Perhaps it's worth reusing it here for a bit more flexibility, e.g. instead of "opLabelTag": "http.status_code" you might say "opLabel": "#{http.method} #{http.status_code}".
Great idea!
With the latest version, the pattern you gave:
"opLabel": "#{http.method} #{http.status_code}"
gives:
It's limited to tags only and uses no caching, i.e. it will do the regex plus a bit more work for every span row.
@everett980 is the lack of caching going to be a perf concern for large traces?
Yeah, I noticed some ‘cache’ around the links code, so I thought about mentioning it. Caching the pattern preprocessing would get rid of a regex pattern match per row.
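A rough sketch of that idea (not the actual jaeger-ui code, just an illustration of caching the pattern preprocessing): split the #{...} pattern once when the config is read, so each span row only does tag lookups and string concatenation.

// Illustration only; compileOpLabel and the tag shape are simplified stand-ins.
const TEMPLATE_RE = /#\{([^}]+)\}/;

function compileOpLabel(pattern) {
  // String.split with a capturing group keeps the captured tag keys in the result:
  // '#{http.method} #{http.status_code}'.split(TEMPLATE_RE)
  //   -> ['', 'http.method', ' ', 'http.status_code', '']
  const parts = pattern.split(TEMPLATE_RE);
  return tags => {
    const byKey = new Map(tags.map(t => [t.key, t.value]));
    let out = '';
    for (let i = 0; i < parts.length; i += 1) {
      // Even indices are literal text, odd indices are tag keys.
      out += i % 2 === 0 ? parts[i] : (byKey.get(parts[i]) ?? '');
    }
    return out;
  };
}

// Compiled once when the config is read; per span row it is just lookups and joins.
const getOpLabel = compileOpLabel('#{http.method} #{http.status_code}');
getOpLabel([
  { key: 'http.method', value: 'GET' },
  { key: 'http.status_code', value: '200' },
]); // -> 'GET 200'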
This cache is a bit unusual, in the sense that it caches only if the exact same object is used as a key (see srctags below). This means it only kicks in when someone expands/collapses a tag section (e.g. process/tags/logs) of a Span multiple times. Maybe it speeds things up when there are very long logs that need to be processed; I don't know.
// From the link-patterns test file; assumes processLinkPattern, createGetLinks
// and the `cache` instance are the ones already set up in that file.
it('caches correctly', () => {
  const linkPatternsX = [
    {
      key: 'mySpecialKey',
      url: 'http://example.com/?mySpecialKey=#{mySpecialKey2}',
      text: 'special key link (#{mySpecialKey2})',
    },
  ].map(processLinkPattern);
  const getLinks = createGetLinks(linkPatternsX, cache);
  const pspan1 = { depth: 0, process: {}, tags: [{ key: 'mySpecialKey2', value: 'valueOfMyKey1' }] };
  const pspan2 = { depth: 0, process: {}, tags: [{ key: 'mySpecialKey2', value: 'valueOfMyKey2' }] };
  // Both spans share the exact same tags array object, so the cache (keyed on
  // that object) returns span1's links when asked about span2.
  const srctags = [{ key: 'mySpecialKey' }];
  const span1 = { depth: 0, process: {}, references: [{ refType: 'CHILD_OF', span: pspan1 }], tags: srctags };
  const span2 = { depth: 0, process: {}, references: [{ refType: 'CHILD_OF', span: pspan2 }], tags: srctags };
  expect(span1.tags[0]).toEqual(span2.tags[0]);
  const r1 = getLinks(span1, span1.tags, 0);
  const r2 = getLinks(span2, span2.tags, 0);
  expect(r1).toEqual([
    {
      url: 'http://example.com/?mySpecialKey=valueOfMyKey1',
      text: 'special key link (valueOfMyKey1)',
    },
  ]);
  expect(r2).toEqual([
    {
      url: 'http://example.com/?mySpecialKey=valueOfMyKey2', // <- fails: the cached result is valueOfMyKey1
      text: 'special key link (valueOfMyKey2)',
    },
  ]);
});
In this case I could cache the pattern preprocessing to save the regex, but the string interpolation would still have to happen for each row, and that happens only once per tree view.
@everett980 any comments on this PR?
Re caching, I don't know if there's a way to defer the evaluation of the rules until the row becomes visible in the viewport.
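One possibility (just a sketch, assuming the timeline only renders visible rows; the helper names are hypothetical): memoize the computed label per span object and produce it the first time a row actually renders, so rows that never scroll into view never pay the cost.

// Sketch only, not existing code: lazily compute and memoize the label per span.
const opLabelCache = new WeakMap();

function getOpLabelLazy(span, getOpLabel) {
  let label = opLabelCache.get(span);
  if (label === undefined) {
    label = getOpLabel(span.tags);
    opLabelCache.set(span, label);
  }
  return label;
}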
> Re caching, I don't know if there's a way to defer the evaluation of the rules until the row becomes visible in the viewport.
My thinking would be:
- We know there's no performance overhead if no pattern is set.
- The bleeding-edge people who choose to use this feature should give us feedback if they find any uncomfortable latency with super-large traces, and we will find something for them.
Agreed.
Hi, I can pick up this PR, rebase it onto the latest main, and apply the changes suggested by @everett980. Would you like me to do that?
That would be brilliant!
Hello, what is needed to move this PR forward?