Test results show same size for all stack frames
Taken from https://github.com/flamegraph-rs/flamegraph/issues/76

Steps to reproduce
- Clone https://github.com/jasonwilliams/boa
- Start the Docker container with the image provided (I do this via VS Code)
- Install perf: `sudo apt-get install linux-perf`
- Run `cargo flamegraph --dev --bin boa_cli`
- Open up the `flamegraph.svg`
I also get an error when I try other options. Removing `--dev` or adding `--freq`, I get:
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.006 MB perf.data ]
thread 'main' panicked at 'assertion failed: self.event_filter.is_some()', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/inferno-0.9.4/src/collapse/perf.rs:165:9
stack backtrace:
   0: backtrace::backtrace::libunwind::trace
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1: backtrace::backtrace::trace_unsynchronized
             at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2: std::sys_common::backtrace::_print_fmt
             at src/libstd/sys_common/backtrace.rs:84
   3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
             at src/libstd/sys_common/backtrace.rs:61
   4: core::fmt::write
             at src/libcore/fmt/mod.rs:1025
   5: std::io::Write::write_fmt
             at src/libstd/io/mod.rs:1426
   6: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:65
   7: std::sys_common::backtrace::print
             at src/libstd/sys_common/backtrace.rs:50
   8: std::panicking::default_hook::{{closure}}
             at src/libstd/panicking.rs:193
   9: std::panicking::default_hook
             at src/libstd/panicking.rs:210
  10: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:471
  11: std::panicking::begin_panic
  12: <T as inferno::collapse::Collapse>::collapse
  13: flamegraph::generate_flamegraph_for_workload
  14: cargo_flamegraph::main
  15: std::rt::lang_start::{{closure}}
  16: std::rt::lang_start_internal::{{closure}}
             at src/libstd/rt.rs:52
  17: std::panicking::try::do_call
             at src/libstd/panicking.rs:292
  18: __rust_maybe_catch_panic
             at src/libpanic_unwind/lib.rs:78
  19: std::panicking::try
             at src/libstd/panicking.rs:270
  20: std::panic::catch_unwind
             at src/libstd/panic.rs:394
  21: std::rt::lang_start_internal
             at src/libstd/rt.rs:51
  22: main
  23: __libc_start_main
  24: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Hmm, interesting. Could you post the output of perf script somewhere if it's not too large?
https://gist.github.com/jasonwilliams/4c028c5844370f2acc90e10df285d287
Hmm, this looks like you just have a single sample, in which case all the stack frames should be the same size?
Do you know why there’s just a single sample, though? When I run the same command on my Mac I get multiple samples and it looks better.
That I don't know -- seems like it is probably an issue with how cargo flamegraph collects samples.
Still not able to get past:
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.006 MB perf.data ]
thread 'main' panicked at 'assertion failed: self.event_filter.is_some()', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/inferno-0.9.5/src/collapse/perf.rs:165:9
So, the crash there in inferno is unfortunate, but the underlying cause is still the same: `perf record` is not producing any samples, so inferno has nothing to operate on. We can fix the assertion failure (we should probably just return `Ok(())` instead), but that won't actually solve your problem :)
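To sketch what I mean (this is only an illustration, not the actual inferno code; the only name taken from the real source is `event_filter`, from the panic message):

```rust
use std::io::{self, Write};

// Hypothetical sketch only, not the real inferno source. The idea is to
// replace the `assert!(self.event_filter.is_some())` with a graceful early
// return (ideally with a helpful message) when perf.data contained no
// samples at all, instead of panicking.
fn finish_collapse<W: Write>(event_filter: Option<&str>, writer: &mut W) -> io::Result<()> {
    if event_filter.is_none() {
        eprintln!("no samples found in the perf output; nothing to collapse");
        return Ok(());
    }
    // ... write the collapsed stacks here ...
    writer.flush()
}
```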
That’s OK. I think fixing the error may help anyone who comes across this in the future too, maybe with a helpful message?
As for `perf record` not producing any samples, I’ll take a look; I think this is related to running in Docker. I opened a new issue for this error, and I’m guessing the answer to that is the same.
Do you know why perf wouldn’t produce any samples?
It actually looks like there was a deeper issue going on. Take a look at #168 if you're curious! Thanks for reporting it.
As for why you are only getting a single sample (which is lost due to one of the bugs fixed by #168), it's probably because the program doesn't run for long enough. perf is a sampling profiler, so it needs the program to run for a while for it to collect enough samples to produce a meaningful profile.
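To illustrate, a toy workload like the one below (nothing to do with boa, just an example) keeps the CPU busy for a few seconds, which gives `perf` time to collect a reasonable number of samples:

```rust
// Toy workload, not part of boa: it just burns CPU for a noticeable amount
// of time so a sampling profiler like perf can collect many samples.
fn main() {
    let mut acc: u64 = 0;
    for i in 0..500_000_000u64 {
        acc = acc.wrapping_mul(6364136223846793005).wrapping_add(i);
    }
    // Print the result so the loop is not optimized away.
    println!("{}", acc);
}
```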
Thank you for investigating!
I’ll follow https://gendignoux.com/blog/2019/11/09/profiling-rust-docker-perf.html, as running the same command directly on my Mac works much better, so I think there’s some Docker limitation at play as well. That page mentions some security permissions limiting how perf works.
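For example, one quick check along those lines (my own snippet, not from the blog post) is to read the kernel’s `perf_event_paranoid` value from inside the container; high values restrict unprivileged `perf`, and Docker’s default seccomp profile can also block the `perf_event_open` syscall entirely:

```rust
use std::fs;

// Read the kernel's perf_event_paranoid setting. Higher values restrict
// what an unprivileged perf can record; Docker's default seccomp profile
// can additionally block the perf_event_open syscall altogether.
fn main() {
    match fs::read_to_string("/proc/sys/kernel/perf_event_paranoid") {
        Ok(value) => println!("perf_event_paranoid = {}", value.trim()),
        Err(err) => println!("could not read perf_event_paranoid: {}", err),
    }
}
```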