Brian
I cannot improve performance using TensorRT optimization. Can someone tell me why? After optimizing the frozen graph, I get a bigger model.
@bezero I used TensorRT to optimize the frozen graph, but I did not get better inference speed. I am currently working on person detection.
@luandaoduy96 Here is a reference link where we can discuss this. If you have any ideas, please share them with us. https://www.facebook.com/groups/479604129450372/
@yil8 Do you train your models on a small dataset or a big one? Which validation set do you use for the competition?
@yil8 Thank you.
@yil8 I submitted my models using CodaLab. However, I did not see them on the leaderboard after 2 weeks. I also emailed Jirvin, but got no response. Have...
> It took about 72 hours with 8 Nvidia Tesla V100 GPUs for a single run.

@chenxin061 Which input size do you use?
@chenxin061 Could you suggest which NAS model achieves the best accuracy on ImageNet, with a good trade-off between accuracy and speed?
Thank you!