Albert Zeyer
Currently you pin to an old typeguard version. This is a problem for me, because I want to use it together with some other code which requires a newer typeguard...
Examples of post-processing: - Raw audio is stored in the HDFDataset; do feature extraction on-the-fly (not in the network, but as part of the dataset). - Ogg is...
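To illustrate the on-the-fly feature extraction idea, here is a minimal sketch using a plain PyTorch `Dataset` wrapper (the class `FeatureExtractionDataset` and its parameters are hypothetical, not RETURNN's actual API):

```python
import torch
from torch.utils.data import Dataset


class FeatureExtractionDataset(Dataset):
    """Hypothetical wrapper: raw audio comes from an inner dataset,
    and spectral features are computed on-the-fly per item,
    so only the raw waveforms need to be stored on disk."""

    def __init__(self, raw_dataset, n_fft=400, hop_length=160):
        self.raw_dataset = raw_dataset
        self.n_fft = n_fft
        self.hop_length = hop_length

    def __len__(self):
        return len(self.raw_dataset)

    def __getitem__(self, idx):
        audio = self.raw_dataset[idx]  # shape: [num_samples]
        # compute a log-magnitude spectrogram as an example feature
        spec = torch.stft(
            audio,
            n_fft=self.n_fft,
            hop_length=self.hop_length,
            window=torch.hann_window(self.n_fft),
            return_complex=True,
        )
        return spec.abs().log1p().transpose(0, 1)  # shape: [time, freq]
```

The same pattern works for any other transform (e.g. decoding Ogg to PCM) as long as it is cheap enough to run per item in the data loading workers.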
After quite a while of training (597 subepochs) with PyTorch backend, I got: ``` RETURNN starting up, version 1.20231119.003753+git.c230d140, date/time 2023-11-22-02-51-52 (UTC+0000), pid 2470397, cwd /work/asr4/zeyer/setups-data/combined/2021-05-31/work/i6_core/returnn/training/ReturnnTrainingJob.tNymT5UR0k6i/work, Python /work/tools/users/zeyer/py-envs/py3.11-torch2.1/bin/python3.11 ... PyTorch:...
ESPnet basically does it like this: - Sort the whole dataset. (The dataset could maybe be directly stored in a sorted way. This would speed up the random access later.)...
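The scheme described above (sort everything by length, cut the sorted order into batches, then shuffle only the batch order) can be sketched in a few lines; `make_batches` and its parameters are illustrative names, not ESPnet's actual API:

```python
import random


def make_batches(seq_lengths, max_batch_size):
    """Sort all sequence indices by length, group consecutive runs
    into batches of similar length, then shuffle the batch order
    (the sequences within a batch stay length-sorted)."""
    order = sorted(range(len(seq_lengths)), key=lambda i: seq_lengths[i])
    batches = [
        order[i:i + max_batch_size]
        for i in range(0, len(order), max_batch_size)
    ]
    random.shuffle(batches)  # randomness across epochs, but low padding per batch
    return batches
```

Sorting keeps padding within a batch small, while shuffling the batch order preserves some randomness between epochs.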
For single-GPU training, without PyTorch DataLoader multiprocessing or MultiProcDataset, the memory usage of the dataset is maybe not too much of a problem. However, it is not uncommon...
``` RETURNN starting up, version 1.20240117.113304+git.54097989, date/time 2024-01-17-23-15-11 (UTC+0000), pid 1130069, cwd /work/asr4/zeyer/setups-data/combined/2021-05-31/work/i6_core/returnn/training/ReturnnTrainingJob.wmezXtjsvAck/work, Python /work/tools/users/zeyer/py-envs/py3.11-torch2.1/bin/python3.11 RETURNN command line options: ['/u/zeyer/setups/combined/2021-05-31/work/i6_core/returnn/training/ReturnnTrainingJob.wmezXtjsvAck/output/returnn.config'] Hostname: cn-284 Installed native_signal_handler.so. PyTorch: 2.1.0+cu121 (7bcf7da3a268b435777fe87c7794c382f444e86d) ( in...
For debugging, for dumping, but also as an alternative to `torch.compile`-support for the direct PyTorch backend (#1491), it could be useful to have another backend which outputs PyTorch code, instead...
For the first few steps, it could run without tracing/scripting, but then it could enable it and from then on use the Torch graph directly (very similar to TF computation...
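The idea of running the first steps eagerly and then switching to a fixed graph can be sketched with `torch.jit.trace` (the toy `Model`, the step counts, and the switch-over logic are illustrative assumptions, not the proposed implementation):

```python
import torch


class Model(torch.nn.Module):
    """Toy model standing in for the real network."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = Model()
compiled = None
for step in range(10):
    x = torch.randn(2, 8)
    if step < 3:
        y = model(x)  # first steps: plain eager mode, easy to debug
    else:
        if compiled is None:
            # fix the graph once, using a representative input
            compiled = torch.jit.trace(model, x)
        y = compiled(x)  # from now on, reuse the traced graph
```

This only works cleanly when the traced input shapes and control flow stay representative for all later steps, which is the same constraint a TF-style computation graph has.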
Here I want to collect some things to be done to speed up eager-mode execution. Most of it did not really matter in graph-mode execution when those extra things are...
I'm not really sure whether that is possible because we have our own `Tensor` class which wraps around the `torch.Tensor`, and similarly all the PyTorch functions are wrapped inside RF....
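To make the difficulty concrete, here is a toy stand-in for such a wrapper (deliberately simplified; RETURNN's actual `Tensor` class is more involved): every wrapped function unwraps the raw `torch.Tensor`, calls into PyTorch, and re-wraps the result, so a tracer sees all of this bookkeeping code rather than plain `torch.Tensor` ops.

```python
import torch


class Tensor:
    """Toy wrapper around torch.Tensor plus metadata (here: dim tags),
    illustrating why tracing sees more than just torch ops."""

    def __init__(self, raw: torch.Tensor, dims: tuple):
        self.raw = raw
        self.dims = dims


def relu(x: Tensor) -> Tensor:
    # unwrap -> call torch -> re-wrap, as every wrapped RF-style function would
    return Tensor(torch.relu(x.raw), x.dims)
```

Any tracing or compilation machinery would have to either look through this wrapper layer or operate on the unwrapped `torch.Tensor` values only.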