Wall-time/all tasks profiler
One limitation of sampling CPU/thread profiles, as is currently done in Julia, is that they primarily capture samples from CPU-intensive tasks.
If many tasks are performing IO or contending for concurrency primitives like semaphores, these tasks won’t appear in the profile, as they aren't scheduled on OS threads sampled by the profiler.
A wall-time profiler, like the one implemented in this PR, samples tasks regardless of OS thread scheduling. This enables profiling of IO-heavy tasks and detection of areas of heavy contention in the system.
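For illustration, a minimal sketch of the usage shown in the workloads further down -- assuming the new macro, like `Profile.@profile`, wraps an arbitrary expression:

```julia
using Profile

# A task blocked in `sleep` barely runs on an OS thread, so a CPU-sampling
# profiler would largely miss it; the wall-time profiler samples it anyway.
Profile.@profile_walltime begin
    t = Threads.@spawn sleep(1.0)
    wait(t)
end
Profile.print()
```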
Co-developed with @nickrobinson251.
An additional note: this kind of wall-time/all-tasks profiler is also implemented for Go (where it's known as a goroutine profiler), so there is some precedent for this in other languages as well: https://github.com/felixge/fgprof.
@nickrobinson251 I can't assign you as reviewer... Feel free to assign yourself or post review comments otherwise.
I think this is related to https://github.com/JuliaLang/julia/pull/55103. Could the metrics here be useful in that too?
> I think this is related to https://github.com/JuliaLang/julia/pull/55103. Could the metrics here be useful in that too?
For diagnosing excessive scheduling time? I can't immediately see how this PR would be useful for that.
> For diagnosing excessive scheduling time
https://github.com/JuliaLang/julia/pull/55103 seems like a much more direct approach for doing so, at least.
Can you share some examples e.g. simple workloads you've tried this out on and what the profiles look like?
@vtjnash: IIRC your comments stem from the fact that a GC may get interleaved with `jl_get_all_tasks_arraylist` and invalidate the contents of `live_tasks` -- both by freeing unreachable tasks and by freeing the underlying buffers from `mtarraylist_t` that have been scheduled for freeing.
Task freeing happens in `sweep_stack_pools`, and the freeing of `mtarraylist_t` buffers happens when finalizers run.
Perhaps we could:
- Hold the mutex during the whole of `jl_profile_task` -- to ensure a task we are about to profile doesn't get GC'd while we're collecting its backtrace, since `sweep_stack_pools` will be blocked because we're holding the lock.
- Hold the mutex while we free `mtarraylist_t` buffers, for the same reasons.
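For intuition only, here is a rough Julia analogue of that locking scheme; the actual change would live in the C runtime, and all names below are illustrative, not the runtime's API:

```julia
# Illustrative only: the profiler holds the lock for the entire traversal,
# so the "sweeper" (standing in for the GC freeing tasks/buffers) cannot
# mutate the list mid-iteration.
const tasks_lock = ReentrantLock()
const live_tasks_list = Any[]

function profile_all_tasks()
    lock(tasks_lock) do
        for t in live_tasks_list
            # ... collect a backtrace for t; t can't be freed while we hold the lock ...
        end
    end
end

function sweep_dead_tasks!(dead)
    lock(tasks_lock) do   # the "GC sweep" takes the same lock
        filter!(t -> !(t in dead), live_tasks_list)
    end
end
```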
Any thoughts?
> Can you share some examples e.g. simple workloads you've tried this out on and what the profiles look like?
Yes, here they are:
**Workload 1: `a_bunch_of_tasks_waiting_on_channel.jl`**
```julia
using Base.Threads
using Profile
using PProf

ch = Channel(1)

const N_SPAWNED_TASKS = (1 << 10)
const WAIT_TIME_NS = 10_000_000

function spawn_a_bunch_of_tasks_waiting_on_channel()
    for i in 1:N_SPAWNED_TASKS
        Threads.@spawn begin
            take!(ch)
        end
    end
end

function busywait()
    t0 = time_ns()
    while true
        if time_ns() - t0 > WAIT_TIME_NS
            break
        end
    end
end

function main()
    spawn_a_bunch_of_tasks_waiting_on_channel()
    for i in 1:N_SPAWNED_TASKS
        put!(ch, i)
        busywait()
    end
end

Profile.@profile_walltime main()
```
**Expected results**
Given that we have a lot more tasks waiting on a channel than tasks actually computing (spinning), we expect to see a large number of samples in channel-related functions. That seems to be the case here.
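The workloads load PProf; presumably the collected samples were then exported and viewed with something like the following (standard PProf.jl usage, not part of this PR):

```julia
using PProf

# Export the samples collected by the profiler and open the
# interactive pprof web UI in the browser.
pprof()
```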
**Workload 2: `a_lot_of_short_lived_tasks.jl`**
```julia
using Base.Threads
using Profile
using PProf

const N_SPAWNED_TASKS = (1 << 16)
const WAIT_TIME_NS = 100_000

function spawn_a_bunch_of_short_lived_tasks()
    for i in 1:N_SPAWNED_TASKS
        Threads.@spawn begin
            # Do nothing
        end
    end
end

function busywait()
    t0 = time_ns()
    while true
        if time_ns() - t0 > WAIT_TIME_NS
            break
        end
    end
end

function main()
    spawn_a_bunch_of_short_lived_tasks()
    for i in 1:N_SPAWNED_TASKS
        busywait()
    end
end

Profile.@profile_walltime main()
```
**Expected results**

Given that we have a lot of short-lived tasks, we expect to see a large number of samples corresponding to "failed to sample task & capture backtrace". That seems to be the case here.
A few more workloads suggested by @NHDaly.
**Workload 3: `compute_heavy.jl`**
```julia
using Base.Threads
using Profile
using PProf

ch = Channel(1)

const MAX_ITERS = (1 << 22)
const N_TASKS = (1 << 12)

function spawn_a_task_waiting_on_channel()
    Threads.@spawn begin
        take!(ch)
    end
end

function sum_of_sqrt()
    sum_of_sqrt = 0.0
    for i in 1:MAX_ITERS
        sum_of_sqrt += sqrt(i)
    end
    return sum_of_sqrt
end

function spawn_a_bunch_of_compute_heavy_tasks()
    Threads.@sync begin
        for i in 1:N_TASKS
            Threads.@spawn begin
                sum_of_sqrt()
            end
        end
    end
end

function main()
    spawn_a_task_waiting_on_channel()
    spawn_a_bunch_of_compute_heavy_tasks()
end

Profile.@profile_walltime main()
```
**Expected results**

We have a lot more compute-heavy tasks than sleeping tasks. We expect to see a lot of samples in `sum_of_sqrt`. That seems to be the case here.
Cool, thanks! 🎉
🤔 Is it expected that the currently-scheduled tasks seem to have their stacks starting at a different frame than the waiting tasks?
It looks like the executing tasks' stacks start right with the function in the Task (`##1()`), while the waiting tasks' stacks start with `start_task` | `jl_apply` | then the Task function?
I can't decide whether I think this is helpful or not. On the one hand, it's maybe nice to visually divide the scheduled vs. sleeping tasks, but on the other hand I think it would make more sense to integrate the stacks together, since they share most of their content.
My instinct is that this is not desirable, and we should figure out why they're different, and correct that.
> Is it expected that the currently-scheduled tasks seem to have their stacks starting at a different frame than the waiting tasks?
Good question, I don't know. Will investigate this.
Wrote this short demo on how to use the profiler: https://github.com/d-netto/Wall-time-Profiler-Demo.
Feedback is welcome.
Perhaps we should add this to some documentation page in Julia itself.
Can you add this to the Profile docs and the NEWS file?
> Can you add this to the Profile docs and the NEWS file?
Sounds good, will do. Thanks for the reviews.
This looks really interesting, I'm excited to use it!
To confirm, there's no way to use this with SIGUSR1/SIGINFO signalling, right? I've been trying to make sense of the code but am unsure -- am I correct that it's the hardcoded 0/false on this line [1] that forces it to always be the old version?
Are there plans to make this configurable, e.g. via environment variables like for heap snapshots, or via function references like `peek_report`?
[1] https://github.com/JuliaLang/julia/blob/fcf7ec081509c62967122e0949640e63a3d07571/src/signals-unix.c#L818
> there's no way to use this with SIGUSR1/SIGINFO signalling, right?
Your understanding is correct.
> Are there plans to make this configurable, e.g. via environment variables like for heap snapshots, or via function references like `peek_report`?
I think that could be a good addition! Perhaps open a feature-request issue for it? We might even want both: e.g. I suppose an ENV variable could control whether to take a CPU profile or a wall-time profile, but if you want both it might be best to use `peek_report`, so we might want to add an easy, official way to do that, and document it. Right now I think you'd have to use the non-public function `Profile.start_timer(true)`. Worth an issue to discuss, I think :)
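For the record, a hedged sketch of what that might look like today, based on the suggestion above; `start_timer`/`stop_timer` are non-public internals, so details may differ across Julia versions:

```julia
using Profile

# Sketch: make the SIGUSR1/SIGINFO handler's report function take a
# wall-time profile instead of printing the default CPU profile.
Profile.peek_report[] = function ()
    Profile.clear()
    Profile.start_timer(true)   # non-public; `true` requests the wall-time/all-tasks profiler
    sleep(1.0)                  # sample for one second
    Profile.stop_timer()        # non-public
    Profile.print()
end
```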
You might also be interested in https://github.com/JuliaLang/julia/pull/56043 (which, depending on your use case, might be even better than a wall-time profile, since it'll print the backtrace for all tasks, not just a sample).