Add configuration options to `duration` extension

Open user27182 opened this issue 10 months ago • 4 comments

Purpose

The duration extension is great, but it is not configurable: it currently shows only the 5 slowest durations, and only prints to the console.

This PR adds configuration options:

  1. duration_n_slowest to print any number of durations to console (not just 5)
  2. duration_print_slowest to enable/disable printing to console altogether
  3. duration_write_json to write all durations to a JSON file
  4. duration_print_total to print the total read time to console
  5. duration_limit to emit a warning for long durations above the limit
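As a sketch, a project's `conf.py` might set these options as follows. The option names come from the list above; the values shown here are purely illustrative, not defaults from the PR:

```python
# conf.py -- hypothetical usage of the options proposed in this PR
extensions = ['sphinx.ext.duration']

duration_n_slowest = 10        # show the 10 slowest documents instead of 5
duration_print_slowest = True  # print the slowest-durations table to console
duration_write_json = True     # also write all durations to a JSON file
duration_print_total = True    # print the total read time to console
duration_limit = 5.0           # warn about documents slower than 5 seconds
```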

~No tests are yet included.~ I am a first-time contributor and not familiar with the dev tools and conventions here, so I may need some extra help.

user27182 avatar Apr 06 '25 00:04 user27182

This is ready, but I don't think I have the tests configured properly. When I run each test individually on my machine, they all pass; running the whole suite causes some to fail. Is there some kind of interaction between tests?

user27182 avatar Apr 06 '25 02:04 user27182

Please add yourself to AUTHORS too. I think it's better to add each option as a new top-level option rather than as a dictionary.

AA-Turner avatar Apr 09 '25 19:04 AA-Turner

I'm going to modify the API a bit to better match pytest. pytest uses `--durations=0` to show all durations, whereas this extension currently uses `n_slowest=-1` for the same purpose, so I will ~rename n_slowest to durations and~ change -1 to 0. I'll also rework the console formatting to match pytest.
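The pytest-style convention described above (0 means "show all", any positive n means "show the n slowest") could be sketched roughly like this; `slowest` is a hypothetical helper, not code from the PR:

```python
def slowest(durations: list[float], n: int) -> list[float]:
    """Return durations sorted slowest-first, pytest-style.

    n == 0 means "show all"; otherwise return only the n slowest.
    """
    ordered = sorted(durations, reverse=True)
    return ordered if n == 0 else ordered[:n]
```

For example, `slowest([3.0, 1.0, 2.0], 2)` keeps the two slowest entries, while `slowest([3.0, 1.0, 2.0], 0)` keeps all three.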

user27182 avatar Apr 10 '25 20:04 user27182

I've also added a new option to limit the duration to some value. Should be ready now.

CI is green except there are seemingly unrelated autodoc Python 3.14 errors...

user27182 avatar Apr 13 '25 07:04 user27182

If there's any chance of this being approved, please let me know and I'll work to make CI green again.

user27182 avatar Sep 29 '25 12:09 user27182

@user27182 please be mindful of test failures in GitHub's continuous integration, even when they're as a result of an update-merge -- this one seemed actionable, for example: https://github.com/sphinx-doc/sphinx/actions/runs/18664715659/job/53213261683

The issue there seems to have been #14109, and I've opened a pull request to resolve that.

jayaddison avatar Nov 27 '25 22:11 jayaddison

Thanks for the fix, and apologies for any problems caused by this PR.

While this failure indeed appears actionable in hindsight, there were also many other aspects of the CI causing intermittent failures unrelated to this PR (e.g. see https://github.com/sphinx-doc/sphinx/pull/13469#issuecomment-2781179060 and https://github.com/sphinx-doc/sphinx/pull/13469#issuecomment-2799841710), on top of the issues Sphinx is currently having with a separate flaky test (e.g. this other recent merge with a master commit, where the failure is not caused by this PR: https://github.com/sphinx-doc/sphinx/actions/runs/19314188434/job/55241608094?pr=13469). This led me to (incorrectly) assume the failures were caused by some other Sphinx CI issue. I am also a new contributor unfamiliar with CI maintenance here, and I have no ability to retry CI steps when there is a failure, which limits my ability to assess flakiness.

Even the most recent commit had a flaky failure caused by this PR (see https://github.com/sphinx-doc/sphinx/actions/runs/19723500767/job/56510331350?pr=13469), but it was quickly ignored/retried and merged anyway. So it seems there were multiple missed opportunities to address this.

Again, thank you for the fix, I will be sure to look at CI failures more closely for future contributions.

user27182 avatar Nov 27 '25 23:11 user27182

Thanks for understanding and responding @user27182 - we're trying to resolve one or two other flaky tests at the moment (including that unrelated one from the mainline branch), and yes, it does become much more difficult to catch genuine test failures when even a small number of tests are sufficiently unreliable. If you're interested, overall progress on test reliability in the codebase is tracked at #12191.

jayaddison avatar Nov 27 '25 23:11 jayaddison