m1k3
> > using ic to print something is literally much slower (it seems to print character by character)
>
> this sounds like an issue outside icecream, perhaps; behind the scenes...
Same bug in the latest version (1.70.2).
Have you solved this issue? The Colab notebook doesn't work either, because Swish is not exported to ONNX.
Found it; the solution is `model.set_swish(memory_efficient=False)`. See https://github.com/lukemelas/EfficientNet-PyTorch/issues/91
The PR is failing because your CI is using an older version of CMake. Without a newer version, CUDA and LAPACK are not correctly found and linked in conda environments. @quic-akhobare please...
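As a sketch of the fix, the CI environment could pull a recent CMake from conda-forge before configuring (the version floor here is illustrative, not the exact minimum the build needs):

```shell
# Install a recent CMake so CUDA and LAPACK are discovered correctly
conda install -y -c conda-forge "cmake>=3.18"
cmake --version
```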
> Can't wait to test the new version with TF 2. Any tentative timeline?

The updates in this PR are really for build purposes, i.e. they should work with TF2 too.
Yes, absolutely. The programming model of node-opencl is just OpenCL, so it supports the latest OpenCL specs, which WebCL couldn't. Adapting your library should be rather straightforward.
Yes, I use CUDA 10.2 on Ubuntu 18.04.
After lots of testing, I think this might be a limitation of PyTorch inference. Therefore, it might be better to disable multiple GPUs in `device_ids`, or use it to specify...
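A minimal sketch of pinning inference to a single GPU via `device_ids` (the wrapped module and tensor shapes are illustrative; the example falls back to plain CPU execution when CUDA is unavailable):

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)  # illustrative module standing in for the real network

if torch.cuda.is_available():
    # device_ids=[0] restricts DataParallel to a single GPU, avoiding the
    # multi-GPU inference behavior discussed above
    model = nn.DataParallel(model.cuda(), device_ids=[0])

x = torch.randn(8, 16)
if torch.cuda.is_available():
    x = x.cuda()

out = model(x)
print(out.shape)  # torch.Size([8, 4])
```

Restricting `device_ids` to one entry sidesteps the replicate/scatter/gather overhead of `DataParallel`, which often dominates latency for inference-sized batches.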