terngrad
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
If I want to run TernGrad on the CIFAR-10 dataset on a distributed system (for example, 2 worker machines and 1 parameter server, i.e., 3 machines), can you please describe the steps to...
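As a rough illustration of what a 2-worker/1-PS setup involves, the sketch below builds a TF 1.x-style cluster description and the per-machine flags it implies. The hostnames and ports are placeholders, and the flag names (`--job_name`, `--task_id`, `--ps_hosts`, `--worker_hosts`) are assumptions modeled on the Inception-style distributed scripts the repo is based on, not confirmed from TernGrad itself:

```python
# Hypothetical cluster layout: 1 parameter server, 2 workers (3 machines).
# Hostnames/ports are placeholders, not taken from the repo.
cluster = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
}

def job_flags(job_name, task_index):
    """Build the command-line flags one machine would pass (illustrative only)."""
    return [
        "--job_name=%s" % job_name,
        "--task_id=%d" % task_index,
        "--ps_hosts=%s" % ",".join(cluster["ps"]),
        "--worker_hosts=%s" % ",".join(cluster["worker"]),
    ]

# Each of the 3 machines runs the training script with its own role:
print(job_flags("ps", 0))
print(job_flags("worker", 0))
print(job_flags("worker", 1))
```

Each machine runs the same script; only `--job_name` and `--task_id` differ, while the `ps_hosts`/`worker_hosts` lists are identical everywhere so every process sees the same cluster.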
In this code: https://github.com/wenwei202/terngrad/blob/ec4f75e9a3a1e1c4b2e6494d830fbdfdd2e03ddc/terngrad/inception/bingrad_common.py#L100-L107 is it intentional that you do not `continue` after line 101?
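The concern behind this question is the usual fall-through pattern: if a loop handles a special case but does not `continue`, the same item also goes through the general-case code below. The toy function here is purely illustrative (it is not the repo's code) and just shows why the `continue` matters:

```python
def scale_grads(grads):
    """Illustrative only: double each gradient, passing `None` entries through."""
    out = []
    for g in grads:
        if g is None:
            out.append(None)
            # Without this `continue`, execution would fall through and the
            # general case below would run on the `None` entry as well.
            continue
        out.append(g * 2)
    return out

print(scale_grads([1, None, 3]))  # -> [2, None, 6]
```

Whether the missing `continue` in `bingrad_common.py` is a bug depends on whether the code after the special-case branch is safe to execute for that case too.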
Can we install the official TensorFlow binary release and run your code on top of it? Or does that mean you changed the TensorFlow source code? I find that your algorithm...
When I ran `cifar10_alexnet` on the CIFAR-10 dataset with TernGrad and `FLAGS.floating_grad_epoch=0` (other arguments left at their defaults), I achieved `top_1=86.04` after 300,000 steps. Then I wanted to see the original accuracy...
My PC is a Mac; can TernGrad run with CPU only?
"Currently, distributed-node mode only supports 32bit gradients. It will take a while to hack the highly-encapsulated SyncReplicasOptimizer to integrate TernGrad. Keep updating" -----mentioned in the doc hi,wenwei, thanks for the...