
TRACY_DUMP_ON_EXIT (Feature request)

Open gww-parity opened this issue 3 years ago • 3 comments

TRACY_NO_EXIT does not seem to scale to scenarios where an application is run multiple times (possibly even on multiple cores in parallel) in a short-lived fashion.

What about a "TRACY_DUMP_ON_EXIT" option which, instead of keeping the app open and waiting for data collection, would dump the collected data to disk (possibly with some fast compression)?

Ideally, filenames would be suffixed with PID + timestamp to keep them unique across multiple invocations. Of course, that would require some tool for combining results; as mentioned in manual v0.7.4, chapter 7, Tracy is a single-process profiler, so thread IDs would have to be remapped to avoid collisions (or allocated so that multiple runs could share timelines such as the available CPUs; a "tid" allocator algorithm).

gww-parity avatar Dec 29 '20 13:12 gww-parity

Dumping data collected in buffers should be relatively easy, but it's not enough. The server performs a number of queries to retrieve data from a running client (e.g. source location contents, string data, etc.), and this has to be accounted for. Doing this properly would require duplicating server functionality on the client (i.e. figure out what to save, keep track of what's already been handled, etc.), which surely will be error-prone, as everything would need to be properly synchronized between the client and server implementations.

wolfpld avatar Dec 29 '20 14:12 wolfpld

Absolutely, reimplementing a big part of the server in the client does not seem like the right way to go.

Scalability-wise, I would like to capture traces from many invocations of tools called by a (cargo) build system.

Therefore, the challenge I have with the connect-to-client design is that multiple clients may run into port collisions.

Maybe it would be possible to run Tracy in a "collector" mode? It would act like a server listening for incoming connections from clients: accepting one, collecting its data, dumping it to disk, and then accepting another, again and again (with the level of parallelism controlled by a flag/parameter). That way, very little would need to change on the client side, and a lot of code could be reused from the Tracy GUI tool, as it already has the data collection logic and the save-to-file logic; it is mostly about putting those two in a loop (eventually spawning parallel threads). Even without parallelism, just handling connections one after another in a loop would help my particular use case (I can always force cargo down to parallelism level 1).

Would such a design sound more feasible?

gww-parity avatar Dec 30 '20 10:12 gww-parity

This request is quite similar to #72.

> Therefore, the challenge I have with the connect-to-client design is that multiple clients may run into port collisions.

There should be no collisions, as Tracy will try listening on a number of ports if the default one is already occupied:

https://github.com/wolfpld/tracy/blob/3d37c686c/client/TracyProfiler.cpp#L1333-L1341

> Maybe it would be possible to run Tracy in a "collector" mode? It would act like a server listening for incoming connections from clients: accepting one, collecting its data, dumping it to disk, and then accepting another, again and again (with the level of parallelism controlled by a flag/parameter). That way, very little would need to change on the client side, and a lot of code could be reused from the Tracy GUI tool, as it already has the data collection logic and the save-to-file logic; it is mostly about putting those two in a loop (eventually spawning parallel threads).

You should already be able to achieve that in the form of a shell script, which would spawn a number of capture utilities (UI-less), each listening on a different port and saving data to a separate output file.
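A minimal sketch of that script, shown as a dry run that only prints the commands it would execute. The `-p` (port) and `-o` (output file) options are real flags of Tracy's UI-less capture utility; `BASE_PORT`, `COUNT`, and the PID+port file-naming scheme are placeholders. To actually launch the captures in parallel, remove `echo`, append `&` to the command, and add a final `wait`:

```shell
#!/bin/sh
# Spawn one UI-less `capture` instance per expected client, each on its
# own port and writing its own trace file (dry run: commands are printed,
# not executed).
BASE_PORT=${BASE_PORT:-8086}   # first port to listen on (placeholder)
COUNT=${COUNT:-4}              # number of capture instances (placeholder)

i=0
while [ "$i" -lt "$COUNT" ]; do
    port=$((BASE_PORT + i))
    # One output file per instance, made unique by script PID + port.
    echo capture -p "$port" -o "trace-$$-$port.tracy"
    i=$((i + 1))
done
```

On the client side, each tool invocation would then need to be pointed at the matching port; depending on the Tracy version, this may be possible via an environment variable such as TRACY_PORT (check the manual for your version).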

wolfpld avatar Dec 30 '20 16:12 wolfpld