Eric Jang
For the Mini-ImageNet ResNet-based encoder, I believe the authors use mean pooling instead of max pooling, as is done in this implementation. I am currently unable to reproduce the Mini-ImageNet results...
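To make the distinction concrete, here is a minimal sketch of the two pooling choices at the end of a ResNet-style encoder (shapes and layer choices are my assumptions for illustration, not the repo's actual code):

```
import torch
import torch.nn as nn

# Sketch only: feature-map shape is assumed, not taken from the repo.
feature_maps = torch.randn(32, 640, 5, 5)  # (batch, channels, H, W)

# Mean pooling: average over the spatial dimensions.
mean_pooled = nn.AdaptiveAvgPool2d(1)(feature_maps).flatten(1)
# Max pooling: take the spatial maximum instead.
max_pooled = nn.AdaptiveMaxPool2d(1)(feature_maps).flatten(1)

print(mean_pooled.shape, max_pooled.shape)  # both: torch.Size([32, 640])
```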
FFMpegVideoWriter's default settings do not seem to guarantee that encoding is "-tune ultrafast", so it's quite possible that the user is sending frames to the named output pipe faster than...
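For reference, a rough sketch of piping raw frames into ffmpeg with explicit speed-oriented encoder flags; this is an illustration only, not FFMpegVideoWriter's actual invocation, and the resolution/framerate values are assumptions:

```
import subprocess
import numpy as np

width, height, fps = 640, 480, 30
cmd = [
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}", "-r", str(fps),
    "-i", "-",                      # read raw frames from stdin
    "-c:v", "libx264",
    "-preset", "ultrafast",         # trade compression for encode speed
    "-tune", "zerolatency",
    "out.mp4",
]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
for _ in range(fps * 2):            # two seconds of dummy frames
    frame = np.random.randint(0, 255, (height, width, 3), dtype=np.uint8)
    proc.stdin.write(frame.tobytes())
proc.stdin.close()
proc.wait()
```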
Hello, Great work with this suite of packages! I came across this yesterday and have been playing around with the notebooks to learn more about rigid body dynamics. It appears...
The current normalizer transform uses cupy when operating on GPU tensors, and cupy assumes device=0 unless otherwise specified. This results in a bug where trying to run ffcv pipelines with Normalize...
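A minimal sketch of the problem and the device-context workaround, assuming the transform hands the tensor to cupy; this is not ffcv's actual Normalize code, just an illustration:

```
import cupy as cp
import torch

# Sketch only: cupy kernels run on device 0 by default, so operating on a
# tensor that lives on cuda:1 needs an explicit device context.
x = torch.randn(8, 3, 32, 32, device="cuda:1")

with cp.cuda.Device(x.device.index):   # select the tensor's own device
    x_cp = cp.asarray(x)               # wraps the tensor via __cuda_array_interface__
    normalized = (x_cp - x_cp.mean()) / x_cp.std()
```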
I have two .beton datasets with the same schema. How can I concatenate the files, or load two different .beton files into one Loader instance?
Hello, Do you plan on having support for serving HTML pages + other static content? Or is that something to be implemented in a separate Middleware Module?
Hi, thanks for putting this awesome repo together. I'm new to the Node/Babel/ES6/Mocha/Webpack ecosystem, so please bear with my novice question: I added a file `wiggle/test/client.js` with the following contents:...
This is really cool! It looks like you were able to get good results by sampling `gen_z: np.random.uniform(-1., 1., size=(GENERATOR_BATCH,GENERATOR_SEED)).astype(np.float32)` on each train step without using the reparameterization trick, which...
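For comparison, a quick sketch of direct latent sampling versus the reparameterization trick; the constants and the mu/log_var placeholders are assumptions for illustration, not the repo's code:

```
import numpy as np

GENERATOR_BATCH, GENERATOR_SEED = 64, 100  # values assumed for illustration

# Direct sampling, as in the quoted snippet: z is drawn fresh each step and
# no gradient flows through the sampling itself.
gen_z = np.random.uniform(-1., 1., size=(GENERATOR_BATCH, GENERATOR_SEED)).astype(np.float32)

# Reparameterization trick (sketch): write a Gaussian sample as a deterministic
# function of mu/log_var and independent noise eps, so gradients can flow back
# into mu and log_var.
mu = np.zeros((GENERATOR_BATCH, GENERATOR_SEED), dtype=np.float32)       # would be network outputs
log_var = np.zeros((GENERATOR_BATCH, GENERATOR_SEED), dtype=np.float32)  # would be network outputs
eps = np.random.randn(GENERATOR_BATCH, GENERATOR_SEED).astype(np.float32)
z_reparam = mu + np.exp(0.5 * log_var) * eps
```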
## 🐛 Bug The baseline code in [this colab](https://colab.research.google.com/github/davidcpage/cifar10-fast/blob/master/bag_of_tricks.ipynb#scrollTo=7WeTS0FF4iQE) provides a PyTorch example running on a V100 GPU with cuDNN to train a CIFAR10 classifier in 75 seconds. ``` epoch...
## 🐛 Bug Calling `torch.cuda.memory_stats` on a gfx900 GPU (Frontier Vega), or any of the methods in https://pytorch.org/docs/stable/cuda.html#memory-management, returns all zeros. ## To Reproduce Steps to reproduce the behavior: 1. Run...
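A minimal repro sketch along these lines (assumes a ROCm build of PyTorch with the gfx900 device visible as cuda:0):

```
import torch

# After a real allocation, the memory counters should be non-zero;
# per the report above, they come back as 0 on gfx900.
x = torch.randn(1024, 1024, device="cuda")     # ~4 MiB allocation
print(torch.cuda.memory_allocated())           # expected: > 0
stats = torch.cuda.memory_stats()
print(stats["allocated_bytes.all.current"])    # expected: > 0
```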