Tianqi Chen

637 comments by Tianqi Chen

Could be possible, as what MXNet needs is quite minimal. Currently, it relies on https://github.com/dmlc/dmlc-core/tree/master/tracker to start a tracker (master process), then starts the slaves, setting environment variables of the master (including IP, ID...
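
A rough sketch of that launch pattern in Python (the exact environment-variable names and `worker.py` are assumptions for illustration; the real logic lives in dmlc-core's tracker scripts): the tracker publishes its coordinates to each worker via environment variables, and workers connect back on startup.

```python
import os
import subprocess
import sys

# Hypothetical sketch: the "tracker" (master) launches workers and hands each
# one the master's coordinates through environment variables, mirroring the
# pattern used by dmlc-core's tracker scripts. Variable names are illustrative.
def launch_workers(master_ip: str, master_port: int, num_workers: int):
    procs = []
    for rank in range(num_workers):
        env = dict(os.environ)
        env.update({
            "TRACKER_URI": master_ip,           # where the tracker listens
            "TRACKER_PORT": str(master_port),
            "WORKER_ID": str(rank),             # this worker's identity
            "NUM_WORKER": str(num_workers),
        })
        # worker.py (hypothetical) reads the env vars and dials the tracker.
        procs.append(subprocess.Popen([sys.executable, "worker.py"], env=env))
    for p in procs:
        p.wait()

if __name__ == "__main__":
    launch_workers("127.0.0.1", 9091, num_workers=2)
```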

MXNet has its own internal memory pool that retains memory for future arrays, because CUDA allocation is slow. So the memory goes back to the pool but is not freed...
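
A minimal sketch of that kind of pool (a pure-Python stand-in; MXNet's real pool is in C++ and is keyed per device as well): released buffers go into a free list bucketed by size and get reused, so the underlying allocator is hit only once per size.

```python
from collections import defaultdict

class MemoryPool:
    """Toy stand-in for a GPU memory pool: allocation is assumed expensive,
    so released buffers are cached per size-bucket and reused."""

    def __init__(self):
        self._free = defaultdict(list)  # size -> list of cached buffers
        self._slow_allocs = 0

    def _slow_alloc(self, size):
        # Stand-in for cudaMalloc, which is slow and may synchronize the device.
        self._slow_allocs += 1
        return bytearray(size)

    def alloc(self, size):
        bucket = self._free[size]
        return bucket.pop() if bucket else self._slow_alloc(size)

    def release(self, buf):
        # "Freeing" returns the buffer to the pool; it is NOT handed back
        # to the underlying allocator, so process memory usage stays high.
        self._free[len(buf)].append(buf)

pool = MemoryPool()
a = pool.alloc(1024)
pool.release(a)
b = pool.alloc(1024)           # reuses the cached buffer
assert pool._slow_allocs == 1  # only one expensive allocation happened
```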

There are two factors in executor memory consumption.
- The executor itself tries to retain and share memory between nodes without runtime re-allocation (a static-planning sketch follows below).
- The memory sharing within an executor...
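
To illustrate the first point, here is a hypothetical sketch of static memory planning (not MXNet's actual planner): given the last reader of each node's output, the planner recycles a buffer as soon as its value is dead, so all buffers are fixed before execution and nothing is re-allocated at run time.

```python
# Hypothetical sketch of static memory planning over a linear graph:
# each node needs one output buffer, and a buffer is recycled once the
# last reader of its value has executed.
def plan_memory(num_nodes, last_use):
    """last_use[i] = index of the last node that reads node i's output."""
    free = []          # buffer ids available for reuse
    assignment = {}    # node -> buffer id
    dead = set()       # nodes whose buffers were already recycled
    next_id = 0
    for node in range(num_nodes):
        for prev in range(node):
            if prev not in dead and last_use[prev] < node:
                free.append(assignment[prev])  # value is dead: recycle
                dead.add(prev)
        assignment[node] = free.pop() if free else next_id
        if assignment[node] == next_id:
            next_id += 1
    return assignment, next_id

# A chain a -> b -> c -> d: each output is read only by the next node,
# so two buffers suffice (ping-pong), regardless of chain length.
plan, n_buffers = plan_memory(4, last_use=[1, 2, 3, 3])
print(plan, n_buffers)  # {0: 0, 1: 1, 2: 0, 3: 1} 2
```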

@immars mshadow-ps is a library that implements async copying and communication for GPU threads.
- So you can view it as a GPU-thread-based PS library (a toy push/pull sketch follows below).
- The distributed...
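
As a rough illustration of the push/pull interface such a PS library exposes (a hypothetical Python stand-in; mshadow-ps itself is a C++ template library and the class below is not its API): `push` returns immediately so the caller can overlap communication with computing the next layer, while `pull` waits only for the pending update of that key.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class ToyParamServer:
    """Hypothetical key-value parameter server: push(key, grad) is async so
    communication overlaps with computation; pull(key) blocks only until
    the pending update for that key has been applied."""

    def __init__(self, weights, lr=0.1):
        self._weights = dict(weights)
        self._lr = lr
        self._lock = threading.Lock()
        self._io = ThreadPoolExecutor(max_workers=1)  # stand-in comm thread
        self._pending = {}

    def _apply(self, key, grad):
        with self._lock:
            self._weights[key] -= self._lr * grad

    def push(self, key, grad):
        # Returns immediately; the "copy + update" runs on the comm thread.
        self._pending[key] = self._io.submit(self._apply, key, grad)

    def pull(self, key):
        fut = self._pending.pop(key, None)
        if fut is not None:
            fut.result()  # wait for this key's update only
        with self._lock:
            return self._weights[key]

ps = ToyParamServer({"w0": 1.0})
ps.push("w0", grad=0.5)  # async: compute the next layer's gradient meanwhile
print(ps.pull("w0"))     # 0.95 once the update has landed
```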

I agree that adding coroutine support would solve it more generically; that is why it is listed in C2 as an option. However, there is a tradeoff here,...

## More elaboration on C3. Given that wasm itself is a stack machine, its stack must be represented somewhere, and depending on the implementation it might be separate from the native...
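
To make the point concrete, here is a toy stack-machine interpreter (not wasm itself, just the shape of the problem): the VM's operand stack is an explicit heap object, entirely separate from the host's native call stack, so suspending and resuming the VM is a question of how that state maps onto native state.

```python
# Toy stack machine: the VM's operand stack lives on the heap as a plain
# list, separate from the host's native call stack. Suspending the VM is
# just saving (pc, stack) -- something a real wasm VM may or may not be
# able to do cheaply, depending on how it represents this stack natively.
def run(code, state=None):
    pc, stack = state if state else (0, [])
    while pc < len(code):
        op, *args = code[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "yield":
            return ("suspended", (pc + 1, stack))  # VM state is just data
        pc += 1
    return ("done", stack)

code = [("push", 2), ("push", 3), ("yield",), ("add",)]
status, state = run(code)          # suspends at the yield
status, stack = run(code, state)   # resumes from the saved (pc, stack)
print(status, stack)               # done [5]
```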

@lachlansneff Yes, your understanding is correct (blocking to wasm, not blocking to the VM). It is certainly an implementation issue. At the same time, it is also part of the interface specification problem....

I certainly share some of the opinions (e.g. asyncio is great overall when concurrency outweighs other things :) On the other hand, it would be even better if we can...
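
For reference, a minimal example of the kind of case where asyncio shines, namely overlapping many I/O-bound waits on a single thread (the `fetch` coroutine below is a stand-in for a network round trip):

```python
import asyncio
import time

async def fetch(i):
    await asyncio.sleep(0.1)  # stand-in for a network round trip
    return i

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(*(fetch(i) for i in range(10)))
    # The ten 0.1s waits overlap, so the total is ~0.1s rather than ~1s.
    print(results, f"{time.perf_counter() - start:.2f}s")

asyncio.run(main())
```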

It boils down to the category of problem we are trying to solve. As we know, there is no silver bullet that solves all problems. For example, if...

It would be great to get more input from the wasm VM communities. Related background: we want to bring deep learning to native wasm runtimes (wasmer/wasmtime), just like what we did [here for browsers](https://tvm.apache.org/2020/05/14/compiling-machine-learning-to-webassembly-and-webgpu),...