Tianlin Liu
> Tesla V100S-PCIE-32GB, pytorch 1.8, ns equations (v=1e-3, T=50, N=1000)
>
> 500 0.001 100 0.5
> torch.Size([1000, 64, 64, 40]) torch.Size([200, 64, 64, 40])
> preprocessing finished, time used: 188.54924726858735
> 6558537
> 0...
@vwxyzjn Hi Costa, thanks for sharing the awesome reproduction results! I am trying to reproduce OAI's results myself, and your PyTorch re-implementation is a lifesaver. > On its...
Hi! @rtaori @YannDubs I am wondering whether data is parallelized when using `evaluate_from_model`. That is, with 8 GPUs, I want to generate 8 responses at the same time, each...