maximilianigl

Results: 13 comments of maximilianigl

It doesn't seem to happen when I run with fewer parallel runs (30 instead of 40 or 50, on a server that supports up to 56 threads).

Hi, from what I understand of your note, the behaviour you described was intentional, and your change will be the reason for the lower performance and slower convergence, because now you...

So you not only retain the graph, but you also don't call `optimizer.zero_grad()` at every update step? If you don't want to use the current code, it's tricky and I'm...

Unfortunately not, sorry, I haven't worked with the codebase in a while. If I find something I'll let you know, but not sure when I'll have time to look into...

Those jobs are quite compute intensive, as they run many environments in parallel. I don't remember how long it took to train, but I'd guess on the order of about...

Same problem. For me it also leads to some strange behavior when trying to load a workspace while the cursor is still focused on a file. In that case, hitting...

Just wondering if there are any updates on this? I'm running into the same issue on a SLURM cluster.

For others running into this problem, I found this workaround inside SLURM; no idea if it works on other clusters as well: ``` #SBATCH --signal=B:SIGTERM@60 # Send SIGTERM 60...

For my project I've hacked together an "encoder"/"decoder" step that encodes non-tensor data into ints and stores the values in a dictionary. That allowed me to use buffers and just...