Thomas Unterthiner

33 comments by Thomas Unterthiner

Overall that sounds good. But how are you going to store the JSON? I didn't think HDF5 was meant for (or well suited to) storing large strings. OTOH, I don't like the idea...
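For what it's worth, HDF5 can store a JSON document as a variable-length string dataset. A minimal sketch with h5py (the dataset name `meta_json` and the contents are illustrative, not from the discussion above):

```python
import json
import os
import tempfile

import h5py  # assumption: h5py is installed

meta = {"model": "example", "layers": [64, 32]}

path = os.path.join(tempfile.mkdtemp(), "store.h5")
with h5py.File(path, "w") as f:
    # h5py stores a Python str as a variable-length UTF-8 string dataset
    f.create_dataset("meta_json", data=json.dumps(meta))

with h5py.File(path, "r") as f:
    # in h5py 3.x this reads back as bytes; json.loads accepts bytes directly
    loaded = json.loads(f["meta_json"][()])

print(loaded == meta)  # True
```

Whether this is a good idea for *large* strings is exactly the open question in the comment above; the sketch only shows that it is mechanically possible.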

CUDA contexts are transferable between threads, so this is weird, unless the thread runs in a different process (which AFAIK isn't the case for Python threads). However, I'm a bit...
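The parenthetical is easy to confirm: Python `threading` threads live in the same OS process as their parent, so a context bound to the process is in principle reachable from all of them. A quick check:

```python
import os
import threading

pids = []

def record_pid():
    # runs in a worker thread; records the OS process id it sees
    pids.append(os.getpid())

t = threading.Thread(target=record_pid)
t.start()
t.join()

# Python threads share the process of the main thread
print(os.getpid() == pids[0])  # True
```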

Yeah, that looks nice!

I don't like the abused indexing notation; it's a bit too unintuitive for someone who doesn't know the codebase well. I'd rather do something like ``` h = _h.get_stream_handler(streamid=1)...

Coming back to this: I like option 3 best. The problem with option 4 is that it gets too wordy too quickly, especially considering that you'll often want to...

How do I do that? The problem vanishes if we add ``` cuda_memory_pool.stop_holding() ``` at the end of the file. Weirdly, just calling `cuda_memory_pool.free_held()` is not enough. So I'm assuming...
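The difference between the two calls can be illustrated without a GPU. A minimal sketch (all class and method names here are illustrative, not pycuda's internals): a holding pool caches freed blocks for reuse, so `free_held()` only empties the current cache, while `stop_holding()` also stops caching, which matters if frees still happen afterwards, e.g. during interpreter teardown before the context is destroyed:

```python
class FakeContext:
    """Stand-in for a CUDA context; only tracks liveness."""
    def __init__(self):
        self.alive = True

    def destroy(self):
        self.alive = False


class HoldingPool:
    """Toy memory pool that caches freed blocks while 'holding'."""
    def __init__(self, ctx):
        self.ctx = ctx
        self.held = []
        self.holding = True

    def allocate(self, n):
        assert self.ctx.alive, "allocation after context destruction"
        return bytearray(n)

    def free(self, block):
        # while holding, freed blocks are cached for reuse, not released
        if self.holding:
            self.held.append(block)

    def free_held(self):
        # releases the cache, but future frees are cached again
        self.held.clear()

    def stop_holding(self):
        # releases the cache AND disables caching from now on
        self.holding = False
        self.free_held()


ctx = FakeContext()
pool = HoldingPool(ctx)

b = pool.allocate(16)
pool.free(b)

pool.stop_holding()  # must run before the context goes away
ctx.destroy()

print(len(pool.held))  # 0: nothing left to release against a dead context
```

Under this model, `free_held()` alone leaves the pool in holding mode, so any block freed *after* the call is cached again and only released when the pool itself is torn down, possibly after the context, which matches the symptom described above.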

This is the output with CUDA_TRACE=True:

```
$ python3.4 test.py
cuInit
cuDeviceGetCount
cuDeviceGet
cuCtxCreate
cuCtxGetDevice
cuMemAlloc
cuCtxGetDevice
cuDeviceGetAttribute
cuDeviceGetAttribute
cuDeviceComputeCapability
cuDeviceGetAttribute
cuDeviceGetAttribute
cuDeviceComputeCapability
cuDeviceComputeCapability
cuDeviceGetAttribute
cuMemcpyHtoD
cuCtxPopCurrent
terminate called after...
```

For completeness' sake, `test.py` looks like this:

```
$ cat test.py
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpu
from pycuda.tools import DeviceMemoryPool

cuda_memory_pool = DeviceMemoryPool()
X = ...
```