Julian Samaroo
Changing my mind about this after getting multi-GPU with DaggerGPU to work properly: we should *always* wrap inputs in a `Chunk`, potentially with the option to opt-out, because we don't...
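As a point of reference, here is a minimal sketch of what wrapping an input in a `Chunk` looks like by hand today, assuming a recent Dagger where `Dagger.tochunk` and `Dagger.@spawn` are available (the always-wrap behavior described above is the proposal, not current behavior):

```julia
using Dagger

# Wrap the input up front so the scheduler tracks its location as a Chunk
# instead of serializing the raw array into every task that uses it.
data = Dagger.tochunk(rand(1000))

# Tasks taking the Chunk as an argument get it resolved to the underlying
# data on whichever worker/processor actually executes them.
t = Dagger.@spawn sum(data)
fetch(t)
```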
Some proposed degrees of freedom that could benefit from user-defined implementations:
- Stager - chooses whether to stage input objects into Thunks, or to execute them with a special implementation...
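Purely hypothetical sketch of the shape such an extension point could take (none of these names exist in Dagger; it's only to illustrate what "user-defined implementations" might mean):

```julia
using Dagger

# Hypothetical hook (not part of Dagger's API): a Stager decides how a
# given input object is handed to the scheduler.
abstract type Stager end

# Default behavior: stage everything into a Chunk as usual.
struct DefaultStager <: Stager end
stage(::DefaultStager, x) = Dagger.tochunk(x)

# A custom stager could bypass staging for cheap-to-recompute objects.
struct PassthroughStager <: Stager end
stage(::PassthroughStager, x) = x
```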
@ViralBShah it's been a few years, but is there some example code that illustrates this, as well as a rough draft of what you'd like the API to look like?
I agree that the array interface could stand to be separated from the scheduler in this package, since it's clearly a decent chunk of code that the scheduler is generally...
Expanding on this, it would be great if the scheduler could dynamically add new workers via Distributed whenever it believes that having extra workers would help decrease total runtime of...
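Rough sketch of the mechanism this would build on (nothing like this is implemented in the scheduler; the function name and heuristic are made up, and it only shows the underlying Distributed calls):

```julia
using Distributed, Dagger

# Hypothetical check the scheduler might run when it decides that more
# parallelism would reduce total runtime.
function maybe_grow_cluster(pending_tasks::Int)
    if pending_tasks > 4 * nworkers()
        new = addprocs(2)                 # launch two extra local workers
        @everywhere new using Dagger      # make sure code is loaded on them
    end
    return nworkers()
end
```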
I think @vchuravy was pointing out that because Distributed was originally designed for HPC clusters where startup is all at once, not all cluster managers will handle this well, and...
> Oh, and in case the above was a polite request for a contribution I'd be happy to help

Not necessarily, I'm happy to do it as well (and the...
Generally I use `@everywhere using Package1, Package2, ...`, which works fine. Distributed's code-loading story isn't great right now, but it's what we've got.
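For example (the package names here are just stand-ins; substitute whatever your workers need):

```julia
using Distributed
addprocs(4)

# Load the packages on every worker so remote calls can find their code.
@everywhere using Dagger, LinearAlgebra
```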
I've been using `sum([length(Dagger.get_processors(OSProc(id))) for id in workers()])` to calculate total processors. You should filter the results of `get_processors` to only `Dagger.ThreadProc` instances to get the number of threads. `OSProc(id)`...
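Putting that together, counting only CPU threads across all workers would look roughly like this (assuming a Dagger version where `get_processors(::OSProc)` returns that worker's processors):

```julia
using Distributed, Dagger

# Count only ThreadProc processors (i.e. CPU threads) on each worker.
nthreads_total = sum(workers()) do id
    procs = Dagger.get_processors(Dagger.OSProc(id))
    count(p -> p isa Dagger.ThreadProc, procs)
end
```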
`Threads.@threads` doesn't appear to use multithreading when run on a thread other than 1 (which is what `ThreadProc` does):

```
10> Threads.@spawn begin
        @show length(unique(count_threads()))
    end
Task (runnable) @0x00007fe09e8a8be0

length(unique(count_threads()))...
```
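(`count_threads` above is a test helper; a hypothetical definition, assuming it records which thread IDs a `Threads.@threads` loop actually runs on, might be:)

```julia
# Hypothetical helper: run a @threads loop and record the thread ID each
# iteration executed on, so we can see how many distinct threads were used.
function count_threads()
    ids = zeros(Int, 100)
    Threads.@threads for i in 1:100
        ids[i] = Threads.threadid()
    end
    return ids
end
```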