wlruys

Results 22 comments of wlruys

After looking into this further, it seems mostly driven by a system/driver lock on ioctl preventing mallocs and frees in the same process from executing in parallel even if they...

_re: the side discussion above_ - yep, using a smaller stream of pools (so memory is reused before being cleared) or setting async mempool helps avoid cudaFree / cudaAlloc being...
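To make the pooling point concrete, here is a minimal pure-Python sketch of why reuse helps: a tiny free-list pool serves repeated allocations of the same size without ever returning to the backend allocator, which is where the driver-level ioctl lock would be taken. This is only an illustration of the idea; the names are hypothetical and this is not the CUDA or CuPy API (in CuPy the analogous switch is enabling the async memory pool).

```python
# Illustrative sketch only: a free-list pool that reuses buffers so that
# repeated alloc/free cycles stop hitting the "real" allocator (where a
# driver-level lock would serialize concurrent mallocs/frees).
class BufferPool:
    def __init__(self):
        self._free = {}          # size -> list of reusable buffers
        self.backend_allocs = 0  # counts trips to the backend allocator

    def alloc(self, size):
        bucket = self._free.get(size)
        if bucket:
            return bucket.pop()  # reuse path: no backend call, no lock
        self.backend_allocs += 1  # this is where the ioctl would happen
        return bytearray(size)

    def free(self, buf):
        # return the buffer to the pool instead of releasing it
        self._free.setdefault(len(buf), []).append(buf)

pool = BufferPool()
for _ in range(100):
    buf = pool.alloc(1 << 20)
    pool.free(buf)
print(pool.backend_allocs)  # 1: all later allocs are served from the pool
```

The same-size reuse pattern is why a pool that clears memory too eagerly loses the benefit: every cleared buffer turns the next alloc back into a backend (locked) call.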

Duplicate of https://github.com/ut-parla/Parla.py/issues/67 ?

It is mentioned indirectly in the first tutorial: "Notice that we do not directly create Parla tasks in the global scope. Instead, we define a main function and create our...
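A hedged sketch of the pattern that tutorial passage describes: task creation is kept out of module/global scope and wrapped in a `main` function behind an `if __name__ == "__main__"` guard. Parla-specific decorators are omitted here; plain closures stand in for tasks, so this is an illustration of the structure rather than Parla's actual API.

```python
# Sketch of the "define a main function" pattern: work is created inside
# main(), not at module import time. make_task is a hypothetical stand-in
# for spawning a real task.
def make_task(x):
    def task():
        return x * 2
    return task

def main():
    # Task creation happens here, in function scope, not global scope.
    return [make_task(i)() for i in range(4)]

if __name__ == "__main__":
    print(main())  # [0, 2, 4, 6]
```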

At the moment there is no easy way to fix this. To fix the capture semantics we would have to copy the specific variables needed from the globals array. This...
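The capture problem being described can be shown with plain Python closures: a task body that reads an outer variable sees whatever value that variable holds when the task *runs*, not when it was created. Copying the specific value at creation time (here via a default argument) is the workaround; this is a generic Python sketch, not Parla's task machinery.

```python
# Late binding: all three closures share the loop variable i, so they all
# observe its final value when called after the loop finishes.
tasks = []
for i in range(3):
    tasks.append(lambda: i)          # captures the variable, not the value
late = [t() for t in tasks]
print(late)                          # [2, 2, 2]

# Fix: snapshot the value into the closure at definition time.
tasks = []
for i in range(3):
    tasks.append(lambda i=i: i)      # default arg copies the current value
early = [t() for t in tasks]
print(early)                         # [0, 1, 2]
```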

Never mind, this is still broken.


I agree, this seems to have been added and not caught in https://github.com/ut-parla/Parla.py/pull/112. Thanks for raising this! I'll remove the check and run through the tests to make sure everything...

Seems like there are a few mutex conflicts in this case, since it hasn't been tested in a while. We're looking into whether we can resolve them.

I absolutely second this. I've made `__str__` and `__repr__` changes locally so many times while debugging or profiling something. The Parla default doesn't say anything useful.
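For context, this is the kind of local change being described: overriding `__repr__` so a printed object surfaces useful state instead of the default `<... object at 0x...>` form. `TaskStub` and its fields are hypothetical stand-ins, not Parla's actual classes.

```python
# A __repr__ that shows identifying state makes debugger and log output
# readable; aliasing __str__ keeps print() consistent with repr().
class TaskStub:
    def __init__(self, name, state):
        self.name = name
        self.state = state

    def __repr__(self):
        return f"TaskStub(name={self.name!r}, state={self.state!r})"

    __str__ = __repr__  # same readable form when printed or logged

t = TaskStub("gemm[0]", "spawned")
print(repr(t))  # TaskStub(name='gemm[0]', state='spawned')
```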

My main concern is that I think @sestephens73 had a bunch of reasons for moving resource allocation to mapping. I want to make sure we can resolve those before moving it back.