Ada Böhm

67 comments by Ada Böhm

That should not be the case, as the server is the only one that writes into the file.

Because the file is corrupted at the beginning, my current hypothesis is that when a worker is killed, the event is propagated to the server and the stream file is closed,...

Thank you for your log. It seems ok. However, I have found a problem in the streaming code today that may potentially cause this bug. I will try to fix it...

I spent some time on further analysis, and it seems that what I found is not actually a problem. I am postponing the work on this for now, as I want...

FYI, I have finally finished the previous work in HQ, and I am going to work on better streaming in HQ, which should also fix this issue.

I do not think this is a good idea. HQ commands should be the ground truth for task status. Stdout/stderr is something produced by the task, and we should not...

There are still errors that cannot be solved like this, e.g. a task fails because its dependency failed, or a task fails because the worker cannot create its stderr file due to permissions. So...
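To illustrate why stdout/stderr cannot serve as the ground truth here: when a task fails, its dependents never run at all, so they produce no output files to inspect. The following is a generic sketch of this failure propagation, not HQ's actual code; the task names and states are hypothetical.

```python
# Hypothetical sketch of failure propagation in a task dependency graph.
# A canceled dependent never executes, so it writes no stdout/stderr.
deps = {"b": ["a"], "c": ["b"]}  # task -> list of its dependencies
state = {"a": "failed", "b": "waiting", "c": "waiting"}

def propagate_failure(task: str) -> None:
    """Cancel every waiting task that (transitively) depends on `task`."""
    for t, ds in deps.items():
        if task in ds and state[t] == "waiting":
            state[t] = "canceled"   # never runs, produces no output files
            propagate_failure(t)

propagate_failure("a")
print(state)  # -> {'a': 'failed', 'b': 'canceled', 'c': 'canceled'}
```

Since "b" and "c" are canceled without ever starting, only the scheduler's own state (not any task output) can report why they did not succeed.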

Thinking about it more: if we promise output solely for this particular error, it should be ok.

Btw: how many workers were connected in your case? We offer forgetting tasks, but not workers. These data are not big, but they may still grow. @Kobzol Maybe...

HQ is optimized for *millions* of tasks while having relatively *few* jobs. A job (in HQ terminology) is a kind of user "namespace" for tasks. So it will have a smaller memory...
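A minimal sketch of this layout (illustrative only, not HQ's internal data structures; the class and field names are assumptions): per-job overhead is paid rarely because there are few jobs, while per-task state, which exists in the millions, must stay small.

```python
# Hypothetical sketch: a job as a "namespace" grouping many tasks.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: int
    state: str = "waiting"  # per-task state should stay tiny

@dataclass
class Job:
    id: int
    name: str
    tasks: dict = field(default_factory=dict)  # task id -> Task

# Few jobs...
jobs = {1: Job(1, "preprocess"), 2: Job(2, "train")}

# ...each holding (potentially) millions of tasks; a handful here:
for tid in range(5):
    jobs[1].tasks[tid] = Task(tid)

print(len(jobs), len(jobs[1].tasks))  # -> 2 5
```

With this shape, per-job metadata (name, submission info, etc.) scales with the small job count, not with the task count.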