edwardpwtsoi
Thank you for your reply, @iiSeymour. I can also decode it when I am not using Docker, and I had set --oom-kill-disable while running the Docker container, but...
`_, pred = output.topk(maxk, 1, True, True)` at line 393 of main.py: is there a mistake in this part? The second return value of topk is the indices. If I go on to run `python main.py --config="cfgs/FeatherNetB-32.yaml" --resume ./our_pretrained_models/FeatherNetB_bs32/_47_best.pth.tar --val True --val-save True` I get this error: Traceback (most recent call last): File "main.py", line...
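For reference, a minimal standalone sketch (with dummy tensors, not the real output from main.py) showing that the second value returned by `topk` is indeed the indices:

```python
import torch

# Dummy logits, only to illustrate the return values of topk;
# `output` here is not the real model output from main.py.
output = torch.randn(4, 10)                      # (batch_size, num_classes)
maxk = 5
values, pred = output.topk(maxk, 1, True, True)  # k=maxk, dim=1, largest=True, sorted=True
print(values.shape, pred.shape)                  # both torch.Size([4, 5])
print(pred.dtype)                                # torch.int64, i.e. class indices
```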
Hi @ghjeong12, I was trying to do a similar thing to what you mentioned above. I tried to add

```python
tf.profiler.experimental.start('logdir')
# inference part
tf.profiler.experimental.stop()
```

but I got

```console
2020-07-21 07:44:50.141421: I tensorflow/core/profiler/internal/gpu/device_tracer.cc:223]...
```
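For context, this is roughly how I wrap the inference part, as a minimal sketch assuming TF >= 2.2; `model` and `x` below are placeholders standing in for my real model and input:

```python
import tensorflow as tf

# Placeholder model and input, standing in for the real inference code.
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
x = tf.random.normal([32, 64])

tf.profiler.experimental.start('logdir')
for step in range(10):
    # Optionally mark each step so it shows up in the trace viewer.
    with tf.profiler.experimental.Trace('inference', step_num=step, _r=1):
        _ = model(x, training=False)
tf.profiler.experimental.stop()
```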
> Hi, I don't see anything wrong with how you invoke the profiler. It should work. Are you sure that your inference code is using GPU? Do you see CPU...
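One quick way I know to double-check that (a sketch on my side, not something from the thread) is to list the visible GPUs and turn on device-placement logging:

```python
import tensorflow as tf

# Sanity check that ops are actually placed on the GPU rather than the CPU.
print(tf.config.list_physical_devices('GPU'))  # should list at least one GPU
tf.debugging.set_log_device_placement(True)    # log the device of every op

a = tf.random.normal([1024, 1024])
b = tf.linalg.matmul(a, a)                     # the log should show .../device:GPU:0
```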
> Where is your GPU located? Is it on the same machine that you launch the tensorboard?
> […]

Yes, they are on the same machine. It is an instance...
Hi Qiuminxu, thanks for your reply. I checked the log again while running the profiling. Here is the complete [log](https://drive.google.com/file/d/1PHCopOFaLf2lmM_Gy7gK65hmRvZam8Jl/view?usp=sharing). As for the output that seems related to the profiler,...
Hi Qiumin, thank you so much for your help. After I added the docker run option --privileged=true, the error is gone, but unfortunately I still couldn't see anything on the profile...