Albert Zeyer

Results 1032 comments of Albert Zeyer

But that's what I do already? In `_on_scroll`, it calculates the current user velocity (`user_scroll_speed`), and then it calculates the target velocity based on the acceleration scheme (`target_scroll_speed`), and thus...

As you told me, you previously used a RETURNN version from 2024-07, where it was working fine.

What is the dataset config? What is the training config (distributed setting)?

I just pushed a simple fix for this. Can you check whether it works now?

Strangely, I now get this very frequently (always at the RWTH ITC). Nothing really changed in my setup. ``` ...ERROR: Unexpected bus error encountered in worker. This might be caused...

Note, searching for this error gives many results. E.g.: * https://github.com/ultralytics/yolov3/issues/283 * https://discuss.pytorch.org/t/training-crashes-due-to-insufficient-shared-memory-shm-nn-dataparallel/26396 * https://github.com/nianticlabs/simplerecon/issues/3 * https://discuss.pytorch.org/t/error-unexpected-bus-error-encountered-in-worker-this-might-be-caused-by-insufficient-shared-memory-shm/38719 Many solutions are about increasing the SHM size in Docker. But that does...
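Since most of the linked threads point at an undersized shared-memory mount as the cause of the "Unexpected bus error encountered in worker", a quick sanity check is to look at how large `/dev/shm` actually is on the node. A minimal sketch, assuming a Linux host (PyTorch DataLoader workers pass tensors through shared memory there):

```python
import shutil

# Check the size of the shared-memory mount (assumption: Linux host).
# PyTorch DataLoader workers exchange tensors via /dev/shm, so if this
# is small relative to the batches being loaded, workers can die with
# "Unexpected bus error encountered in worker".
usage = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {usage.total / 2**30:.2f} GiB, "
      f"free: {usage.free / 2**30:.2f} GiB")
```

In Docker, the usual remedy from those threads is to enlarge the mount, e.g. `docker run --shm-size=8g ...`; outside Docker, reducing `num_workers` (or remounting `/dev/shm` larger) are the common workarounds.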

Let me comment here for public visibility: > MixingDataset should have the option to consider the sequence length while mixing I see your argument here, but I wonder whether this...

Btw, I have this idea in mind: The user controls the mixing ratio via partition_epoch in the sub datasets. E.g., to demonstrate this on an example: I have two sub...
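To illustrate the idea with some hedged arithmetic (the numbers and names here are made up for demonstration, not actual RETURNN config keys): `partition_epoch` controls what fraction of each sub-dataset is visited per epoch, so it implicitly sets the mixing ratio.

```python
# Hypothetical example: two sub-datasets of different sizes.
# partition_epoch determines how many sequences of each are seen
# per (sub-)epoch, which determines the effective mixing ratio.
num_seqs = {"a": 10_000, "b": 50_000}
partition_epoch = {"a": 1, "b": 5}

per_epoch = {name: num_seqs[name] // partition_epoch[name] for name in num_seqs}
# Both sub-datasets now contribute 10k sequences per epoch -> 1:1 mix,
# even though dataset "b" is 5x larger overall.
print(per_epoch)
```

So the user would tune the ratio purely through `partition_epoch` in the sub-dataset configs, without a separate mixing-ratio option.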

This sounds like a bug in the dataset preparation pipeline? And there are also many more cases where it could go wrong, e.g. having `///` in it, or having `..`...

What is the status here? Was this just a bug in the dataset preparation pipeline, as I assumed? If so, then just fix the dataset preparation pipeline, and this here is...