Kaiyu Shi

Results: 45 comments by Kaiyu Shi

Rather than manually setting a threshold, I think we could sort the memory usage of all the processes and print the top-K processes (probably with an `others` entry summing up...
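The top-K-plus-`others` idea could be sketched roughly like this (the helper name, process names, and numbers are made up for illustration; a real implementation would read usage from the profiler's process list):

```python
def top_k_with_others(usages, k=3):
    """Sort (name, usage) pairs by usage, keep the top-k entries,
    and fold everything else into a single 'others' entry."""
    ranked = sorted(usages, key=lambda item: item[1], reverse=True)
    top, rest = ranked[:k], ranked[k:]
    if rest:
        top.append(("others", sum(usage for _, usage in rest)))
    return top

# Example: five processes, keep the top 3, sum the rest into 'others'
usages = [("python", 512), ("chrome", 2048), ("vim", 64),
          ("node", 1024), ("bash", 32)]
for name, mb in top_k_with_others(usages, k=3):
    print(f"{name}: {mb} MB")
```

This avoids picking a magic threshold: the output length is bounded by `k + 1` regardless of how many processes exist, and nothing is silently dropped.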

Can you try copy-pasting these lines into a Python source file and then running it? ----- before edit Do you execute this code via `python xxx.py`, the `python` terminal, or `ipython`...

This error message is weird. Could you please post: - the memlab version - the pytorch version - the computation device (GPU type / CPU) By the way, could you try adding a...

Hi Stas, thanks for your detailed feature description. I would like to propose an example to make sure I get the point. ### statement unrolling: Suppose I have such a...

Can you give an example of the failing case? I was wondering whether the problem also exists in the decorator-style line profiler.

Thanks for reporting. I'll investigate the integration with PyTorch Lightning this weekend. In principle, though, the only thing that needs to be done is to add the forward function into...

It looks like our current implementation cannot profile the detailed memory usage inside an `nn.Module`. However, you can work around this by simply defining a *dummy* container Module like: ```python class...
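A rough sketch of that workaround (the module name and layer sizes here are hypothetical; in practice you would decorate `forward` with the line profiler's `profile` so its body is reported line by line):

```python
import torch
import torch.nn as nn

class DummyContainer(nn.Module):
    """Dummy container wrapping the operations we want profiled.

    Moving the tensor operations into a forward() of our own means the
    line profiler can attribute memory usage to individual lines,
    instead of seeing one opaque call into a library nn.Module.
    """
    def __init__(self, dim=16):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    # In practice: decorate this method with pytorch_memlab's @profile
    def forward(self, x):
        h = self.linear(x)   # each line gets its own memory statistics
        h = torch.relu(h)
        return h.sum()

out = DummyContainer()(torch.randn(4, 16))
```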

A common workflow is to profile top-down. Usually two or three `profile` passes should give you overall memory-consumption statistics.

This code runs on Python 3 only. Could you please upload your error message? As for `num_layers`, you may print the model directly to get an insight into it. There's...

Well, the GRU version supports only one layer. This is because *CUDNN*'s GRU only exposes the hidden states of the last layer, but this kind of contrasting needs all the...
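For illustration of the limitation (shapes only, not the project's actual model): in PyTorch's `nn.GRU`, `output` holds the per-timestep hidden states of the *last* layer only, while `h_n` holds just the final-step state of each layer, so the step-wise states of lower layers are never exposed:

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=16, num_layers=2)
x = torch.randn(5, 3, 8)   # (seq_len, batch, input_size)

output, h_n = gru(x)
# output: (seq_len, batch, hidden) -> last layer, every timestep
# h_n:    (num_layers, batch, hidden) -> every layer, last timestep only
print(output.shape)        # torch.Size([5, 3, 16])
print(h_n.shape)           # torch.Size([2, 3, 16])
```

Any method that needs per-timestep states from *all* layers therefore has nothing to read from for the lower layers, which is why `num_layers=1` is the only supported setting.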