dd-trace-py
fix(di): use enum value for evaluate_at property
We make sure that the `evaluate_at` property is resolved to its enum member so that validation and comparison work as intended. Currently, the timing value is stored as a plain string, while the logic that handles it compares against the enum members, so the check always fails.
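The failure mode can be sketched as follows. The enum and field names below (`ProbeEvalTiming`, `evaluateAt`) are illustrative assumptions, not the library's actual identifiers: a string received from remote configuration never compares equal to an enum member, so it must be converted through the enum first.

```python
from enum import Enum


class ProbeEvalTiming(Enum):
    # Hypothetical enum standing in for the probe timing values.
    ENTRY = "ENTRY"
    EXIT = "EXIT"


# Timing arrives from remote config as a plain string.
raw_config = {"evaluateAt": "EXIT"}

# Before the fix: the raw string is stored as-is, so any comparison
# against an enum member is always False (str != Enum member).
timing = raw_config["evaluateAt"]
assert (timing == ProbeEvalTiming.EXIT) is False

# After the fix: look the string up in the enum so comparisons behave.
timing = ProbeEvalTiming(raw_config["evaluateAt"])
assert timing == ProbeEvalTiming.EXIT
```

`Enum(value)` raises `ValueError` on an unknown string, which also gives early validation of the configured timing instead of a silently failing comparison.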
Checklist
- [x] PR author has checked that all the criteria below are met
- The PR description includes an overview of the change
- The PR description articulates the motivation for the change
- The change includes tests OR the PR description describes a testing strategy
- The PR description notes risks associated with the change, if any
- Newly-added code is easy to change
- The change follows the library release note guidelines
- The change includes or references documentation updates if necessary
- Backport labels are set (if applicable)
Reviewer Checklist
- [x] Reviewer has checked that all the criteria below are met
- Title is accurate
- All changes are related to the pull request's stated goal
- Avoids breaking API changes
- Testing strategy adequately addresses listed risks
- Newly-added code is easy to change
- Release note makes sense to a user of the library
- If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
- Backport labels are set in a manner that is consistent with the release branch maintenance policy
CODEOWNERS have been resolved as:
releasenotes/notes/fix-span-decoration-probe-timing-dc4ce0664fa645b9.yaml @DataDog/apm-python
ddtrace/debugging/_probe/remoteconfig.py @DataDog/debugger-python
Benchmarks
Benchmark execution time: 2024-10-21 12:15:44
Comparing candidate commit 663dea13a79d4d4363c08a1d030d11af96668dfb in PR branch fix/span-decoration-probe-timing with baseline commit 6d8ee31ead0d76adc2d1516f0f2990c6dd71cb26 in branch main.
Found 0 performance improvements and 0 performance regressions. Performance is the same for 365 metrics; 53 metrics are unstable.