arunpatala

Results: 16 comments by arunpatala

Ubuntu 18.04, AWS EC2 environment:

    serverless install -u https://github.com/ryfeus/lambda-packs/tree/master/tensorflow/source -n tensorflow

I have faced a similar problem. I think this is because of the new features supporting multiple inputs and targets. I have written a wrapper class that converts a PyTorch dataset to...
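A minimal sketch of the kind of wrapper meant here (the class name and shape are hypothetical; the original comment is truncated, and a real version would subclass torch.utils.data.Dataset). It adapts a dataset yielding single (input, target) pairs into the lists-of-inputs/lists-of-targets form that multi-input, multi-target APIs expect:

```python
# Hypothetical sketch: plain Python so it runs without torch installed.
# A real wrapper would delegate __getitem__/__len__ to a PyTorch dataset
# in exactly the same way.

class MultiIODatasetWrapper:
    def __init__(self, dataset):
        self.dataset = dataset  # any object with __getitem__ and __len__

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        x, y = self.dataset[idx]
        # Wrap the single input/target in lists so downstream code can
        # iterate over multiple inputs and targets uniformly.
        return [x], [y]

pairs = [(0.1, 0), (0.2, 1)]           # stand-in for a PyTorch dataset
wrapped = MultiIODatasetWrapper(pairs)
print(len(wrapped))                     # 2
print(wrapped[0])                       # ([0.1], [0])
```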

I set the arg optimizer_parameters=model.fc.parameters() in trainer.compile, but I guess it is being passed straight through to the optimizer initialization: line 133, in set_optimizer: TypeError: __init__() got an unexpected keyword argument 'parameters'

Added kwargs.pop('parameters', None) after line 128 and it works as expected.

    def set_optimizer(self, optimizer, **kwargs):
        if type(optimizer) is type or isinstance(optimizer, str):
            if 'parameters' in kwargs:
                parameters = kwargs['parameters']
                kwargs.pop('parameters', None)
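A self-contained sketch of the fix described above (names are hypothetical; FakeOptimizer stands in for a real optimizer such as torch.optim.SGD): pop 'parameters' out of kwargs before the remaining kwargs are forwarded to the optimizer's __init__, which does not accept that keyword.

```python
# Hypothetical sketch of the 'parameters' kwarg fix.

def set_optimizer(optimizer_cls, **kwargs):
    # Extract 'parameters' so it is passed positionally, not as a keyword
    # the optimizer's __init__ would reject.
    parameters = kwargs.pop('parameters', None)
    return optimizer_cls(parameters, **kwargs)

class FakeOptimizer:
    # Stand-in for a real optimizer; like one, it only accepts params
    # positionally plus known keywords such as lr.
    def __init__(self, params, lr=0.01):
        self.params = list(params)
        self.lr = lr

opt = set_optimizer(FakeOptimizer, parameters=[1, 2, 3], lr=0.1)
print(opt.lr, opt.params)  # 0.1 [1, 2, 3]
```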

I had defaulted the tensor type to CUDA. Changing that seems to have fixed the issue.

I think train and val are using separate memory for the input variables. Maybe they should be merged into a single loop.
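A rough illustration of the suggestion (pure Python, all names hypothetical): running the train and val phases through one loop lets both copy their batches into a single preallocated input buffer instead of each phase holding its own.

```python
# Hypothetical sketch: one loop over both phases, reusing one input buffer.

def run_epoch(batches_by_phase, buffer):
    processed = {}
    for phase in ('train', 'val'):
        count = 0
        for batch in batches_by_phase[phase]:
            buffer[:len(batch)] = batch   # copy into the shared buffer in place
            count += len(batch)
        processed[phase] = count
    return processed

shared = [0.0] * 4                        # single reusable input buffer
data = {'train': [[1.0, 2.0], [3.0]], 'val': [[4.0, 5.0]]}
print(run_epoch(data, shared))            # {'train': 3, 'val': 2}
```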

"Completed 76020 asserts in 180 tests with 0 failures and 0 errors" — I have tried it on two machines and both had the error. I was able to test the model...

Have you tried running Torch on MNIST first? https://github.com/torch/demos/blob/master/train-a-digit-classifier/train-on-mnist.lua