Albert Zeyer

Results: 300 issues by Albert Zeyer

https://github.com/rwth-i6/sisyphus/blob/ab073d37426b25aef65b89794065e9be3a834ee8/sisyphus/job_path.py#L219 `_sis_hash` considers `hash_overwrite`, while the others do not. I'm not sure whether this is intentional. If it is fine as is, a small comment explaining that would be nice.
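A purely illustrative sketch of the kind of asymmetry meant here, assuming a simplified `Path`-like object with an optional `hash_overwrite` (this is not the actual sisyphus code):

```python
# Hypothetical, simplified illustration; the real sisyphus Path is more involved.
class Path:
    def __init__(self, path, hash_overwrite=None):
        self.path = path
        self.hash_overwrite = hash_overwrite

    def _sis_hash(self):
        # Takes hash_overwrite into account when it is set ...
        base = self.hash_overwrite if self.hash_overwrite is not None else self.path
        return hash(base)

    def __eq__(self, other):
        # ... while other helpers only look at the plain path.
        return isinstance(other, Path) and self.path == other.path
```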

What I measure (and what is most relevant for me) is the startup time of the sis manager, up to the prompt. My current benchmark is [`i6_experiments.users.zeyer.experiments.chris_hybrid_2021.test_run`](https://github.com/rwth-i6/i6_experiments/tree/main/users/zeyer/experiments), which creates a graph of 526 jobs...
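A minimal sketch of how that startup time could be broken down, assuming the graph construction is callable from Python; `create_graph()` is a hypothetical stand-in for whatever imports the config and builds the 526-job graph:

```python
# Hedged sketch: profile where manager startup time goes, using cProfile.
# `create_graph` is a placeholder, not an actual sisyphus/i6_experiments entry point.
import cProfile
import pstats


def create_graph():
    """Placeholder: import the config and build the job graph here."""
    ...


cProfile.run("create_graph()", "startup.prof")
stats = pstats.Stats("startup.prof")
stats.sort_stats("cumulative").print_stats(20)  # 20 most expensive calls by cumulative time
```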

```
reloading plugin SublimeModelines.sublime_modelines
Traceback (most recent call last):
  File "/Applications/Sublime Text.app/Contents/MacOS/sublime_plugin.py", line 109, in reload_plugin
    m = importlib.import_module(modulename)
  File "./python3.3/importlib/__init__.py", line 90, in import_module
  File "", line 1584, in ...
```

Hey! Thank you for this implementation! I just wanted to say that we use your implementation and have made some changes, extensions, and fixes to it. Unfortunately, I don't really...

**Software Versions**

* Python: 3.11
* OS: Debian 12.5 on Raspberry Pi
* Kivy: 2.3
* Kivy installation method: pip

**Describe the bug**

After a scroll event, we got this...

E.g. in code like this:

```python
my_dict["hello"]
```

I would like it to also add `my_dict["hello"]` to the printed variables, just like `my_obj.hello` already works.
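A rough sketch of how such expressions could be collected, assuming the source line is parsed with the standard `ast` module (not necessarily how it is or should be implemented):

```python
# Hedged sketch: collect attribute accesses (my_obj.hello) and subscripts
# (my_dict["hello"]) from a source line, so they could be added to the
# printed variables. Requires Python 3.9+ for ast.unparse.
import ast


def collect_candidate_exprs(source_line: str) -> list:
    """Return attribute and subscript expressions found in the given line."""
    exprs = []
    tree = ast.parse(source_line, mode="exec")
    for node in ast.walk(tree):
        if isinstance(node, (ast.Attribute, ast.Subscript)):
            exprs.append(ast.unparse(node))
    return exprs


print(collect_candidate_exprs('my_dict["hello"] + my_obj.hello'))
# e.g. ["my_dict['hello']", 'my_obj.hello']
```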

For variational noise (weight noise), weight dropout, or similar things, it would be very helpful to have gradient checkpointing so that we can avoid storing the weights twice...
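This is not the proposal from the issue itself, just a minimal PyTorch sketch of the idea using `torch.utils.checkpoint`: the noise-perturbed weight is created inside a checkpointed function, so it is recomputed during backward instead of being kept alive next to the original weight (`NoisyLinear` and `noise_std` are illustrative names):

```python
# Hedged sketch: apply variational (weight) noise inside a checkpointed function,
# so the perturbed weight is not stored for backward but recomputed instead.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint


class NoisyLinear(nn.Module):
    """Illustrative linear layer with weight noise applied on the fly."""

    def __init__(self, in_features: int, out_features: int, noise_std: float = 0.01):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.noise_std = noise_std

    def _forward_with_noise(self, x: torch.Tensor) -> torch.Tensor:
        # The noisy weight only exists inside this function; checkpointing
        # recomputes it in backward (RNG state is preserved by default,
        # so the same noise is sampled again).
        noisy_weight = self.linear.weight + self.noise_std * torch.randn_like(self.linear.weight)
        return F.linear(x, noisy_weight, self.linear.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and self.noise_std > 0:
            return checkpoint(self._forward_with_noise, x, use_reentrant=False)
        return self.linear(x)


# Usage: the backward pass recomputes the noisy weight instead of storing it.
layer = NoisyLinear(8, 4)
layer(torch.randn(2, 8)).sum().backward()
```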

PyTorch

Distributed training, single node, 4 GPUs.

```
...
ep 33 train, step 10, ctc_4 2.259, ctc_8 1.880, ctc 1.840, num_seqs 10, max_size:time 237360, max_size:out-spatial 52, mem_usage:cuda:1 6.6GB, 0.600 sec/step
ep...
```