Marcel Rieger

Results 48 comments of Marcel Rieger

That would indeed be helpful. Being the plotting expert, would you be able to look into this, @mafrahm?

I think we have to look into this one again in detail. The only things we need to cache are the {cf,columnar}_dev venvs, but I think the updated CI does...

Oh, I might have confused things. The variable definition already allows setting `discrete_x`; it's just not picked up by the plotting yet, so the issue is not fully resolved.

Hey @cverstege, thanks for reporting. To be honest, the `OSError: [Errno 98] Address already in use` confuses me a bit. Could you check where exactly this is...
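For reference, errno 98 (`EADDRINUSE` on Linux) is not law-specific; it can be reproduced generically with the standard library whenever two sockets bind the same host/port, which is a minimal sketch of where such an error usually originates:

```python
import errno
import socket

# first socket: bind to an OS-chosen free port on localhost
s1 = socket.socket()
s1.bind(("127.0.0.1", 0))
port = s1.getsockname()[1]

# second socket: binding the same port fails with EADDRINUSE
# (errno 98 on Linux; other platforms use a different number)
s2 = socket.socket()
try:
    s2.bind(("127.0.0.1", port))
except OSError as e:
    assert e.errno == errno.EADDRINUSE
finally:
    s1.close()
    s2.close()
```

If the traceback points at a law-internal socket (e.g. a scheduler or forwarding port), this pattern would suggest a stale process still holding the port.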

Would it make sense in your use case to enable `cache_task_completion` only in your local env, and have it disabled in remote jobs? E.g. via

```ini
[luigi_worker]
cache_task_completion: $ENV_IS_LOCAL
```
...
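For completeness, a minimal sketch of how the toggle above could be driven, assuming a hypothetical `ENV_IS_LOCAL` variable that your local setup script exports and remote job wrappers leave unset (or set to "False"):

```shell
# hypothetical local setup script: mark this environment as local,
# so cache_task_completion resolves to True here
export ENV_IS_LOCAL="True"

# remote job submission scripts would instead do nothing, or:
# export ENV_IS_LOCAL="False"
```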

@cverstege I finally had time to look deeper into this and I think I understand the problem now. However, it seems to have nothing to do with the task completion...

Ok, that's at least a bit reassuring. It's somewhat hard to debug this since it never re-appeared on my end. Would you be able to debug this further in case...

Ok, thanks for the additional checks :+1: Then I'm closing this one for now, but feel free to re-open if the debugging trace leads back to law.

Hi @HerrHorizontal , odd indeed. Are you sure your workflow didn't pick up a submission json file that was generated with the previous submission mode? Btw, for the time being,...

You can set this value globally in the config, or you can put this into your htcondor workflow:

```python
def htcondor_create_job_manager(self, **kwargs):
    job_manager = super().htcondor_create_job_manager(**kwargs)
    job_manager.job_grouping_submit = True
    job_manager.chunk_size_submit = 0
```
...