Refs #33862 -- Added workflow to run the ASV benchmarks for labeled PR.
I have added a workflow that runs the benchmarks in smithdc1/django-asv against a pull request when it is labeled with the `benchmark` label. If performance has not changed significantly, a pull request status message "Benchmarking Result - BENCHMARKS NOT CHANGED SIGNIFICANTLY" is added; if performance has decreased, the status message "Benchmarking Result - SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY. PERFORMANCE DECREASED" is added.
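For context, a label-triggered workflow of this kind can be sketched roughly as follows. This is only an illustrative assumption of the shape, not the actual workflow file in the PR (the job name, setup steps, and checkout depth are guesses); the `asv` commands are the ones quoted later in this thread:

```yaml
name: Benchmark

on:
  pull_request:
    types: [labeled]

jobs:
  benchmark:
    # Only run when the label that was just added is `benchmark`.
    if: github.event.label.name == 'benchmark'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # asv compares HEAD^ against HEAD, so history is needed
      - run: pip install asv
      - run: asv machine --machine ubuntu-latest --yes
      - run: asv continuous --interleave-processes -a processes=2 --split --show-stderr 'HEAD^' 'HEAD'
```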
Hello @deepakdinesh1123! Thank you for your contribution!
As it's your first contribution, be sure to check out the patch review checklist.
If you're fixing a ticket from Trac, make sure to set the "Has patch" flag and include a link to this PR in the ticket!
If you have any design or process questions, then you can ask in the Django forum.
Welcome aboard!
... trigger the workflow in the smithdc1/django-asv repo ...
Since that required the creation of an ACCESS_TOKEN and the use of a third-party action, after discussion it was decided to add the workflow to django/django itself.
Add the "benchmark" label to run...
Should I edit the Wiki CI page on the website itself and submit the changes?
Let's wait. The label doesn't exist yet, and it'd be confusing, but you can draft up any notes and post them here, or ..., if you like.
... that we'd trigger the workflow in the smithdc1/django-asv repo ...
Yes -- that's how I had it in my head to start with too. However, we ran into access issues and needed to pass tokens around, which I'm not sure about. There is this blog post which shows one approach, but that seems rather complex.
... committing the results ...
So we can't commit the results with this approach, but I think that's OK. I think the question we're trying to answer with this patch is "in back-to-back runs, is a PR +ve or -ve?". Storing the result of that run is not important; it'll be picked up in the daily run on the django-asv repo.
Hey @deepakdinesh1123 -- I added the label, and it seems to work, so... +1
I'm not sure about that :thinking: In the logs, I only see:
No information stored about machine 'fv-az173-80'. I know about nothing.
Ah, yes... actually looking at the run.
(I was only looking at the fact that it triggered.)
I tried to set up the machine details in ASV using several different methods, but all of them displayed the message
No information stored about machine 'fv-az173-80'. I know about nothing.
Not sure how to avoid this though.
@deepakdinesh1123
Not sure how to avoid this though.
Given that stdout is going to out.txt, this must be on stderr, yes? Can you add a 2> error.txt and check that after the run here:
asv continuous --interleave-processes -a processes=2 --split --show-stderr 'HEAD^' 'HEAD'
... and then we can discard that if we're happy.
Beyond that, can we echo enough of the output to verify that the benchmarks ran correctly if we look at the logs, maybe with something like the total time as a sanity check?
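A minimal shell sketch of the redirection being suggested here. A stand-in function is used in place of the real asv invocation (so the sketch is self-contained); the point is only where each stream ends up:

```shell
#!/bin/sh
# Stand-in for the asv run: results go to stdout, the machine warning to stderr.
run_benchmarks() {
    echo "BENCHMARKS NOT CHANGED SIGNIFICANTLY"
    echo "No information stored about machine 'fv-az173-80'. I know about nothing." >&2
}

# stdout still goes to out.txt; stderr is captured separately in error.txt
run_benchmarks > out.txt 2> error.txt

# inspect the captured stderr after the run
cat error.txt
```

With the real command, the same `2> error.txt` would sit on the `asv continuous` invocation, before any pipe to `sed`, so the pipe only carries stdout.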
Given that stdout is going to out.txt, this must be on stderr, yes?
The output of asv continuous --interleave-processes -a processes=2 --split --show-stderr 'HEAD^' 'HEAD' is being piped to sed and then sent to out.txt.
No information stored about machine 'fv-az173-80'. I know about nothing.
is being displayed due to the command
asv machine --machine ubuntu-latest --yes
It is required before running asv benchmarks on a machine, as asv asks the user to enter details in interactive mode if this hasn't been done. Should I redirect the output of this command to some other file so it is not printed on the screen?
the total time as a sanity check
ASV does not have any method to measure the total time taken to run the benchmarks. Shall I use some other method (the time command) to record the time and then add it to the GitHub step summary?
asv machine --machine ubuntu-latest --yes
It is required before running asv benchmarks on a machine, as it asks the user to enter details in interactive mode if not done. Should I redirect the output of this command to some other file so this is not printed on the screen?
Yes, please. Let's just capture that.
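That capture could be a one-line redirect on the `asv machine` step. Sketched below with a stand-in function (so it runs anywhere); `machine-info.txt` is just an assumed file name, and the real command is `asv machine --machine ubuntu-latest --yes`:

```shell
#!/bin/sh
# Stand-in for `asv machine --machine ubuntu-latest --yes`, which (in this
# thread's logs) emits a notice when no machine info is stored yet.
asv_machine() {
    echo "No information stored about machine 'fv-az173-80'. I know about nothing." >&2
}

# Send both stdout and stderr to a file so nothing reaches the console/logs.
asv_machine > machine-info.txt 2>&1
```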
ASV does not have any methods to measure the total time taken to run the benchmarks...
Hmmm. Not sure: I'd need to play, but it would be nice to have something like the target time, and the actual time that led to the faster/slower decision (if that makes sense). (Anything really: we get the status check, but it'd be nice to be able to look to the logs for more detail if possible...) What do you think?
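For the total-time idea, one hedged sketch: time the run in the workflow step itself and append it to the job summary via the `GITHUB_STEP_SUMMARY` file that GitHub Actions provides. The stand-in benchmark function and the summary wording are assumptions:

```shell
#!/bin/sh
# Stand-in for the asv invocation so the sketch is self-contained.
run_benchmarks() { sleep 1; }

start=$(date +%s)
run_benchmarks
end=$(date +%s)

# In a real workflow the runner sets GITHUB_STEP_SUMMARY; default to a
# local file here so the sketch runs outside GitHub Actions too.
: "${GITHUB_STEP_SUMMARY:=step-summary.md}"
echo "Benchmark wall time: $((end - start))s" >> "$GITHUB_STEP_SUMMARY"
```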
@deepakdinesh1123 Also -- can you look at why the benchmark workflow isn't triggered for https://github.com/django/django/pull/15866/commits/1f7b44b586df8992d775462822a6f6e286bb33bf?
OK... If I remove the label and re-add it again, the workflow is triggered.
Ideally it would run for pushes to the PR branch if the label is already applied too. I'm sure that's feasible with the right YAML. Can you have a look? (Update: since the benchmarks take a while, we might not want them auto-run on push...)
Thanks
might not want them auto-run on push...
@carltongibson I have added the condition to run the benchmarks when pushes are made to a PR with the given label. To avoid auto-running the benchmarks, shall I modify the workflow so that it runs when a comment with the content "benchmark" is made instead?
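For reference, a sketch of the push-while-labeled trigger being discussed. The `synchronize` event type fires on pushes to the PR branch, and the `contains()` check gates the job on the label being present regardless of which event fired; whether to keep `synchronize` at all is exactly the open question above (fragment only, not the full workflow):

```yaml
on:
  pull_request:
    types: [labeled, synchronize]

jobs:
  benchmark:
    # Run when the benchmark label is on the PR, whether the trigger was
    # adding the label or a push to the branch.
    if: contains(github.event.pull_request.labels.*.name, 'benchmark')
    runs-on: ubuntu-latest
```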
I added a note to the CI Wiki page: https://code.djangoproject.com/wiki/CI#GitHubactions