Hongyi Wang

Results: 8 comments of Hongyi Wang

@ayushman-shopin please refer to #70 to see if that solves your problem.

Hi @yanring, all the AWS stuff I mentioned in the guide is there to help with getting the private IP addresses of the instances (machines in a distributed cluster). Things work even...
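
For reference, a minimal sketch of how those private IPs could be fetched with `boto3` (a generic illustration, not the guide's actual tooling; the region name and filter values here are placeholders):

```python
import boto3

# Generic sketch: list the private IPs of running EC2 instances.
# The region name is just an example.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
private_ips = [
    inst["PrivateIpAddress"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]
print("\n".join(private_ips))
```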

Hi @thtb, in PyTorch you won't need to write a `backward` function for your model; the [autograd](https://pytorch.org/docs/stable/autograd.html#module-torch.autograd) library will handle it for you. But if you're referring to the `backward` function...
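
To illustrate the point, a minimal generic PyTorch snippet (not code from this repo): autograd records the forward pass, so a single `loss.backward()` call fills in all gradients with no hand-written `backward`:

```python
import torch
import torch.nn as nn

# Autograd traces the forward computation, so no custom backward is needed.
model = nn.Linear(10, 1)
x = torch.randn(4, 10)
target = torch.randn(4, 1)

loss = nn.functional.mse_loss(model(x), target)
loss.backward()  # gradients computed automatically by autograd

print(model.weight.grad.shape)  # torch.Size([1, 10])
```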

Hey Kaiyu, @Stonesjtu. Thanks for pointing this out. There was an issue I ran into when developing this prototype: `np.float32` can't be fully converted to `MPI.FLOAT` and vice versa. It...
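
For context, the standard mpi4py pattern that pairs a `np.float32` buffer with an explicit `MPI.FLOAT` datatype looks like this (a generic sketch of the usual mpi4py buffer API, not the prototype's code):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# np.float32 is the NumPy dtype that corresponds to MPI.FLOAT.
buf = np.arange(8, dtype=np.float32)
if rank == 0:
    comm.Send([buf, MPI.FLOAT], dest=1, tag=0)
elif rank == 1:
    recv = np.empty(8, dtype=np.float32)
    comm.Recv([recv, MPI.FLOAT], source=0, tag=0)
```

Run with, e.g., `mpirun -n 2 python demo.py`.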

Sorry for the confusion, @Stonesjtu. The issue I mentioned was related to [this line](https://github.com/hwang595/ps_pytorch/blob/master/src/distributed_worker.py#L269), which I wrote for an old version where there wasn't any gradient compression strategy and each...

Actually, I think what's interesting is to add a `--half-precision` argument. To be more specific, when `half-precision` is enabled, all computation on the PyTorch side would be converted to [`HalfTensor`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.half) and...
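
A rough sketch of what such a flag might look like (the flag name and wiring are hypothetical, not an existing interface in the repo):

```python
import argparse

import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument("--half-precision", action="store_true",
                    help="cast model and inputs to torch.HalfTensor")
args = parser.parse_args()

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 1).to(device)
x = torch.randn(4, 10, device=device)

if args.half_precision:
    # .half() casts parameters/inputs to 16-bit floats (torch.HalfTensor);
    # half-precision matmuls are best supported on GPU.
    model = model.half()
    x = x.half()

out = model(x)
print(out.dtype)  # torch.float16 when --half-precision is set
```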

@GeKeShi sorry for this late response. i) For the first issue you reported: the error usually occurs when the sizes of your local receiving buffers (on the PS or worker nodes) are...
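
As a generic illustration of that failure mode (not the repo's actual code): with mpi4py's buffer interface, the receiving buffer must be at least as large as the incoming message, otherwise MPI raises a truncation error:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    payload = np.zeros(1024, dtype=np.float32)
    comm.Send([payload, MPI.FLOAT], dest=1, tag=0)
elif rank == 1:
    # The receive buffer must be >= the sender's message size;
    # allocating, say, np.empty(512, ...) here would trigger a
    # message-truncation error from MPI.
    recv_buf = np.empty(1024, dtype=np.float32)
    comm.Recv([recv_buf, MPI.FLOAT], source=0, tag=0)
```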

Hi @innerop, you're right with respect to the GPU support of this repo. The current version does not have that functionality. One key reason is that it seems to...