Yes, you're right. I'll create a release notes file. In general, there should not be any incompatible changes, just bug fixes.
The recommended interface is `wds.DataPipeline`. It's easier to use and easier to extend. The `.compose(...)` interface is slightly different and there is no `source_` anymore. Conversion should be...
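In case it helps, here is a minimal sketch of the `wds.DataPipeline` style; the shard pattern, field names, and sizes are placeholders, not taken from the issue:

```python
import webdataset as wds

urls = "shards/train-{000000..000099}.tar"   # placeholder shard pattern

dataset = wds.DataPipeline(
    wds.SimpleShardList(urls),      # yields one entry per shard URL
    wds.shuffle(100),               # shuffle shard order
    wds.split_by_worker,            # each DataLoader worker gets a subset of shards
    wds.tarfile_to_samples(),       # expand each shard into individual samples
    wds.shuffle(1000),              # shuffle samples in an in-memory buffer
    wds.decode("torchrgb"),         # decode images to torch tensors
    wds.to_tuple("jpg", "cls"),     # pick the fields used for training
    wds.batched(64),
)

for images, labels in dataset:
    break
```

Each stage is just a callable that transforms an iterator, so custom stages can be dropped into the list anywhere.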
The current PyTorch DataLoader is complex and makes exact epochs in the distributed setting tricky. For large datasets, this is actually not much of an issue, since "epochs" aren't that...
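One common way of dealing with this is to resample shards and declare a nominal epoch length instead of trying to make exact epochs work; a sketch, assuming the v2 `DataPipeline`/`WebLoader` interface (shard pattern and counts are made up):

```python
import webdataset as wds

urls = "shards/train-{000000..000099}.tar"   # placeholder shard pattern

dataset = wds.DataPipeline(
    wds.ResampledShards(urls),       # sample shards with replacement: no fixed epoch
    wds.tarfile_to_samples(),
    wds.shuffle(1000),
    wds.decode("torchrgb"),
    wds.to_tuple("jpg", "cls"),
    wds.batched(64),
)

# Declare a nominal "epoch" of 1000 batches; every rank sees the same number
# of batches, which sidesteps the exact-epoch bookkeeping in distributed training.
loader = wds.WebLoader(dataset, batch_size=None, num_workers=4).with_epoch(1000)

for epoch in range(10):
    for images, labels in loader:
        pass  # training step goes here
```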
OK, a few things. First, the `BrokenPipeError` is just being ignored; it doesn't cause the process to exit: `Exception ignored in: [1] BrokenPipeError: [Errno 32] Broken pipe`. I'm not sure...
I'm glad it works and is fast. I have to add the `__len__` method to `Repeatedly`. FWIW, torch_xla should not call `len(loader)`; that's really a bug that needs to get...
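To illustrate the intent (this is only a sketch, not the actual `Repeatedly` code; `nbatches` is a hypothetical nominal length):

```python
class Repeatedly:
    """Iterate over a source forever or for a fixed number of passes.

    Sketch only: it just shows where a __len__ would go so that
    len(loader) works for callers that insist on calling it.
    """

    def __init__(self, source, nepochs=None, nbatches=None):
        self.source = source
        self.nepochs = nepochs
        self.nbatches = nbatches   # nominal length to report, if any

    def __iter__(self):
        epoch = 0
        while self.nepochs is None or epoch < self.nepochs:
            yield from self.source
            epoch += 1

    def __len__(self):
        if self.nbatches is None:
            raise TypeError("no nominal length set for Repeatedly")
        return self.nbatches
```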
Yes, the "pipe:..." creates a subprocess that is connected to Python via a UNIX pipe. The PyTorch workers read from such pipes. When the worker stops reading from the pipe...
Thanks for the report. The documentation is generated from runnable notebooks, so the code was working correctly at some point; however, the documentation notebooks are not run as testcases right...
Yes, thanks for catching this. I also just noticed this and fixed it. TODO: add test case
That's a good suggestion.
Thanks. I've changed it to `os.remove(self.tempname)`; I think that should fix it. I'll try to add a testcase.
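For context, this is the usual write-to-temp-then-clean-up pattern; a rough sketch with illustrative names, not the actual code:

```python
import os

class SafeWriter:
    """Write to a temporary file and rename it into place on success;
    on failure, remove the temporary file so no partial output is left behind."""

    def __init__(self, fname):
        self.fname = fname
        self.tempname = fname + ".tmp"
        self.stream = open(self.tempname, "wb")

    def close(self, complete=True):
        self.stream.close()
        if complete:
            os.rename(self.tempname, self.fname)
        else:
            os.remove(self.tempname)   # clean up the temp file instead of leaving it around
```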