Junan007
I have the same problem, but when I run online_demo/main.py to test, the result is correct. So can the warning be ignored? I'm doubtful about that.
I tried another implementation of the shift module from [here](https://github.com/open-mmlab/mmaction2/blob/master/mmaction/models/backbones/resnet_tsm.py#L40-L121), but it has the same warning too.
> @Junan007 I reproduced the online work, but it only shows 1.7 vid/s, and I'm not the only one seeing this slow speed. Can you accelerate it using tvm?

Yes, I tested it on the CPU (2.8G Quad-Core...).
> @Junan007 I tested it on an NVIDIA TX2. I followed the official steps, but it shows 1.7 vid/s. Do you know the reason?

Sorry, I don't have an NVIDIA TX2. Do you compile...
You can use `tvm.runtime.enabled('cuda')` to check it.
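A minimal way to run that check (a sketch; the helper name is mine, and it also guards against TVM not being importable at all):

```python
def cuda_enabled():
    """Return True/False if the local TVM build reports CUDA support,
    or None if TVM itself is not installed in this environment."""
    try:
        import tvm
    except ImportError:
        return None
    # True only if the TVM runtime was compiled with the CUDA backend
    return tvm.runtime.enabled("cuda")

print(cuda_enabled())
```

If this prints `False` on a machine with a GPU, the TVM build itself lacks CUDA support, regardless of what the driver reports.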
Yes, maybe it can be ignored. I tested on a Jetson Nano with tvm and got only 0.7 vid/s. Did you solve your problem?
I'm sure it is using GPU resources. It can reach 17.5 vid/s when using llvm only; I don't know why it's so slow when using cuda.
After fixing the tophub error, it can reach 27.2 vid/s with cuda. I think I've solved my problem.
tophub is part of tvm; it is downloaded automatically when a module is compiled and saved to ~/.tvm/tophub, but the download failed in my environment.
Yes, the training for the online version is the same as for the offline one, but testing is different: the online version needs to cache the last frame's features to perform the shift.
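That caching step can be sketched roughly like this, assuming a `(C, H, W)` feature map and the usual 1/8 channel shift; the function name and shapes are illustrative, not the repo's actual code:

```python
import numpy as np

def online_shift(x, cache, shift_div=8):
    """One step of online temporal shift (illustrative sketch).

    x:     feature map of the current frame, shape (C, H, W)
    cache: the first C // shift_div channels saved from the previous frame
    Returns (shifted features, new cache for the next frame).
    """
    fold = x.shape[0] // shift_div
    out = x.copy()
    # Replace the first `fold` channels with the previous frame's
    # cached channels: this is the shift along the time axis.
    out[:fold] = cache
    # Save the current frame's first channels for the next step.
    new_cache = x[:fold].copy()
    return out, new_cache

# Usage: process a stream frame by frame, carrying the cache forward.
cache = np.zeros((2, 4, 4), dtype="float32")  # 16 // 8 = 2 channels
for frame in (np.random.rand(16, 4, 4).astype("float32") for _ in range(3)):
    features, cache = online_shift(frame, cache)
```

This avoids buffering whole clips at test time: each frame only needs the small slice of channels kept from the step before.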