Michel Lang
It would be possible to simply not delete the job files (and let `sweepRegistry()` handle this), or to introduce an additional option to turn this on or off. I tend...
Thanks for the detailed report. I'll include a configuration setting to deal with timeouts in the next release.
It looks like the scheduler is simply overburdened in your setups. As @HenrikBengtsson already suggested, you should make use of chunking if you have many jobs (by setting the `nbrOfWorkers`...
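A minimal sketch of the chunking suggested above, using batchtools' `chunk()` helper. It assumes a registry `reg` with jobs already defined via `batchMap()`; the chunk size of 50 is illustrative and should be tuned to your jobs' runtimes.

```r
library(batchtools)

# Assumes a registry `reg` with many small jobs already defined.
ids <- findNotSubmitted(reg = reg)

# Add a "chunk" column: here, 50 jobs run sequentially per scheduler job.
ids$chunk <- chunk(ids$job.id, chunk.size = 50)

# submitJobs() now submits one scheduler job per chunk instead of one per
# job, which greatly reduces the load on the scheduler and file system.
submitJobs(ids, reg = reg)
```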
> @mllg, not sure if you suggested that in your comment, and not sure if it makes a difference for all schedulers, but do you think the overall load on...
The files batchtools writes are usually very small. The bottleneck you are experiencing is probably your network filesystem having trouble with the number of files. A different serialization would not...
I still don't get what the problem is here. Are we talking about millions of global variables, or a few very large ones?
OK, then "bundling" them into a single file will not help. The bottleneck is either CPU for compression or IO for writing to the file system. If it is IO...
> Because that's a low-hanging fruit that I can imagine @mllg could implement as an option in batchtools without much work. Exactly. In my experience gzip compression (`compress = "gz"`)...
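A quick way to check whether compression is the bottleneck for your objects is to time base R's `saveRDS()` (which batchtools uses for serialization) with and without gzip. The object below is illustrative; substitute your own data.

```r
# Stand-in for a large, compressible object.
x <- rep(1:100, 1e4)

f_gz   <- tempfile(fileext = ".rds")
f_none <- tempfile(fileext = ".rds")

# Compare elapsed time with and without compression.
t_gz   <- system.time(saveRDS(x, f_gz,   compress = "gzip"))["elapsed"]
t_none <- system.time(saveRDS(x, f_none, compress = FALSE))["elapsed"]

# Compressed files are smaller but cost CPU time; uncompressed files shift
# the cost to IO. Which trade-off wins depends on your file system.
c(gzip = file.size(f_gz), none = file.size(f_none))
```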
Oh wait, gzip is the default.
To be more constructive, and to help with your problem: Regardless of the compression, communication is expensive. If possible, avoid globals and avoid passing large objects to functions for parallelization...
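One way to follow this advice is to pass large objects explicitly via `more.args` instead of letting them be captured as globals. A minimal sketch in a temporary registry (object names and sizes are illustrative):

```r
library(batchtools)

# file.dir = NA creates a throwaway registry in a temp directory; by
# default it runs jobs interactively in the current session.
reg <- makeRegistry(file.dir = NA)

big <- matrix(runif(1e4), nrow = 100)  # stands in for a large shared object

# The function receives the object as an explicit argument, not a global.
f <- function(i, data) sum(data[, i])

# more.args stores `big` once alongside the job definitions instead of
# serializing it implicitly with the environment.
batchMap(f, i = 1:5, more.args = list(data = big), reg = reg)
submitJobs(reg = reg)
waitForJobs(reg = reg)
res <- reduceResultsList(reg = reg)
```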