
Add variable names to nodes

Open Gurkenglas opened this issue 3 years ago • 5 comments

This is neat, and by solving an NP-hard problem in my head I can tell which node corresponds to a given tensor in my code. Could you add the name of each tensor to its node in the graph? Perhaps using https://pypi.org/project/varname/, or by modifying Tensor.__init__ to extract variable names from the assignments in the traceback and then saving them in grad_fn.

Gurkenglas avatar May 16 '21 16:05 Gurkenglas

Hi, I am not familiar with the varname package. But it doesn't seem to be doing what you want here? Or at least I can't manage to get it to return the name from other scopes that are not direct parents. Would you have a code sample showing how this would work?

albanD avatar May 17 '21 13:05 albanD

Completely untested:

import torch
import varname

def nameTensors():  # All tensors created from here on will carry a name around in their grad_fn.
  oldinit = torch.Tensor.__init__
  def newinit(self, *args, **kwargs):
    oldinit(self, *args, **kwargs)  # call the saved original __init__, not a method on self
    if self.grad_fn is not None:  # grad_fn, not _grad_fn; it is None for leaf tensors
      self.grad_fn.name = varname.varname(ignore=torch)
  torch.Tensor.__init__ = newinit

Gurkenglas avatar May 17 '21 14:05 Gurkenglas

The problem is that most of the operations happen in c++ and the python Tensor.__init__ is not actually called :/

albanD avatar May 17 '21 15:05 albanD

Hmm. How about something like this, then? Even more untested, if that's even possible.

import torch
import varname

def nameTensors(module):  # Wraps module (presumably torch) so every function names each returned, as-yet-unnamed tensor.
  def wrap(func):
    def wrapped(*args, **kwargs):
      result = func(*args, **kwargs)
      if isinstance(result, torch.Tensor):
        if result.grad_fn is not None and not hasattr(result.grad_fn, 'name'):
          result.grad_fn.name = varname.varname(ignore=torch)
      return result
    return wrapped
  for name, func in list(module.__dict__.items()):  # items(), not the Python 2 iteritems(); copy before mutating
    if callable(func):
      module.__dict__[name] = wrap(func)
  return module

Usage: nameTensors(torch)
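For what it's worth, the name-capture trick the sketch above leans on (and which varname implements more robustly) can be demonstrated in pure Python, without torch: wrap a factory so each returned object records the variable it is assigned to, read from the caller's source line via inspect and ast. The names `Node`, `capture_name`, and `make_node` here are hypothetical, for illustration only, and name capture silently falls back to None when the caller's source is unavailable.

```python
import ast
import inspect

class Node:
    """Stand-in for a tensor-like object that can carry a name."""
    def __init__(self, value):
        self.value = value
        self.name = None  # filled in by the wrapper below, if possible

def capture_name(func):
    """Wrap func so its result records the assignment target at the call site."""
    def wrapped(*args, **kwargs):
        result = func(*args, **kwargs)
        frame = inspect.currentframe().f_back  # the caller's frame
        ctx = inspect.getframeinfo(frame).code_context  # source line(s) of the call, if available
        if ctx:
            try:
                stmt = ast.parse(ctx[0].strip()).body[0]
                # Only handle the simple `name = make_node(...)` shape.
                if isinstance(stmt, ast.Assign) and isinstance(stmt.targets[0], ast.Name):
                    result.name = stmt.targets[0].id
            except SyntaxError:
                pass  # partial or unparseable line: leave name as None
        return result
    return wrapped

make_node = capture_name(Node)

x = make_node(3)
print(x.value, x.name)
```

varname does essentially this but handles tuple unpacking, nesting, and frames to ignore; the `ignore=torch` argument in the sketches above is meant to skip torch-internal frames so the reported name comes from user code.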

Gurkenglas avatar May 18 '21 12:05 Gurkenglas

In support of this: adding variable names to nodes would also help annotate exactly which tensors in the code are being saved for backward.

zou3519 avatar Jun 08 '21 13:06 zou3519