Niklas Siemer
This would include `FileHDFio` and the `DataContainer` as well as the `HasGroups` concept? `ProjectHDFio` would stay in base? I am open to such a change in the infrastructure.
If I understood correctly, the idea is to ease sharing of pyiron data structures, i.e. to create non-pyiron-managed HDF files which can be reloaded by others. I would imagine a...
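A minimal sketch of what that sharing could look like, assuming the `DataContainer`/`FileHDFio` APIs and import paths of a recent `pyiron_base`; the file name and group name here are made up:

```python
# Hedged sketch: share a DataContainer via a standalone HDF file that is
# not managed by a pyiron project. Import paths are assumptions based on
# recent pyiron_base versions; "shared_data.h5" and "my_data" are made up.
from pyiron_base import DataContainer
from pyiron_base.storage.hdfio import FileHDFio

dc = DataContainer({"element": "Al", "n_atoms": 4})

# Write into a bare HDF5 file, with no project database involved.
hdf_out = FileHDFio(file_name="shared_data.h5")
dc.to_hdf(hdf_out, group_name="my_data")

# A different user can reload the container from just the file.
restored = DataContainer()
restored.from_hdf(FileHDFio(file_name="shared_data.h5"), group_name="my_data")
```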
That behavior is intentional. The maximum runtime for a specific queue is configured by the cluster administrator via SLURM. For all our queues this is set to (actually already long)...
A fixed runtime is toxic for the scheduler... I performed a lot of AIMD calculations during my PhD, and the maximum `run_time` I had was something on the order of...
I just had a look at the queue adapter: `check_queue_parameters` returns the given run time unless it exceeds the maximum run time of the queue (from the settings file).
What I would change is to call `logger.warn` instead of, as happens most of the time now, silently changing the configuration. And if we set `run_time=None`, the largest possible value is always used. I...
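A rough sketch of the proposed behavior, not the actual queue-adapter implementation; the function name, the `run_time_max` parameter, and the units are assumptions:

```python
import logging

logger = logging.getLogger(__name__)


def check_run_time(run_time, run_time_max):
    """Hypothetical helper illustrating the proposal above; not the real
    check_queue_parameters. run_time_max comes from the queue settings file.

    - run_time=None falls back to the largest value the queue allows.
    - Exceeding the maximum still clips, but with a warning instead of
      silently changing the configuration.
    """
    if run_time is None:
        return run_time_max
    if run_time > run_time_max:
        logger.warning(
            "run_time=%s exceeds the queue maximum %s; clipping to the maximum.",
            run_time,
            run_time_max,
        )
        return run_time_max
    return run_time
```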
> Don't change time limits so lightly! It will cause a lot of trouble for people who rely on the current default of 4 days!
>
> In my experience, it...
Wouldn't it be reasonable to move this solution upstream to `FileHDFio`?
I would like this solution. Then we could define a new base class `ExternalExecutableJob` and define all the methods related to reading/writing input files etc. there, as sketched below.
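A minimal sketch of what such a base class could look like; the class name comes from the comment above, the method names `write_input`/`collect_output` are borrowed from the usual pyiron job interface, and everything else is hypothetical:

```python
from abc import ABC, abstractmethod


class ExternalExecutableJob(ABC):
    """Hypothetical base class for jobs wrapping an external executable.

    Collects everything related to reading/writing input and output files,
    so concrete job classes only implement the file formats of their code.
    """

    def __init__(self, working_directory):
        self.working_directory = working_directory
        self.executable = None  # path or name of the external binary

    @abstractmethod
    def write_input(self):
        """Write all input files of the external code to working_directory."""

    @abstractmethod
    def collect_output(self):
        """Parse the output files and store the results (e.g. to HDF)."""
```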
I reached the proposed status in the linked PR (via `pr.list_publications('dict', category=1)`). However, there are quite a few remaining issues/TODOs with the current implementation:

* the publication dictionary lives on our...