
🚀[FEA]: Torch Jit for SympyToTorch

bridgesign opened this issue on Jun 06, 2024 · 0 comments

Is this a new feature, an improvement, or a change to existing functionality?

Improvement

How would you describe the priority of this feature request?

Medium

Please provide a clear description of the problem you would like to solve.

I was trying to get some sympy equations to compile with torch jit and ran into issues in the file associated with the torch printer.

While going through how sympy creates lambda functions, I was able to hack up the following solution that allows torch jit.

import linecache
from typing import Callable, Dict, List, Optional

import sympy
import torch

# torch_lambdify is assumed to come from modulus-sym's torch printer;
# the author mentions making minor local changes to it.
from modulus.sym.utils.sympy.torch_printer import torch_lambdify


def sympy_torch_script(
    expr: sympy.Expr,
    keys: List[str],
    extra_funcs: Optional[Dict] = None,
) -> Callable:
    # Lambdify the sympy expression into a torch-callable function.
    torch_expr = torch_lambdify(expr, keys, extra_funcs=extra_funcs)
    # Reassign __module__ so the lambdified function appears to come from torch.
    torch_expr.__module__ = "torch"
    # Generate a thin, type-annotated wrapper that TorchScript can compile.
    filename = '<wrapped>-%s' % torch_expr.__code__.co_filename
    funclocals = {}
    namespace = {"Dict": Dict, "torch": torch, "func": torch_expr}
    funcname = "_wrapped"
    code = 'def %s(vars: Dict[str, torch.Tensor]) -> torch.Tensor:\n' % funcname
    code += '    return func('
    for key in keys:
        code += 'vars["%s"],' % key
    code += ')\n'
    c = compile(code, filename, 'exec')
    exec(c, namespace, funclocals)
    # Cache the generated source in linecache so inspect/torch.jit.script can
    # retrieve it later (the same trick sympy's lambdify uses).
    linecache.cache[filename] = (
        len(code),
        None,
        code.splitlines(keepends=True),
        filename,
    )
    func = funclocals[funcname]
    func.__module__ = "torch"
    return func
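
For illustration, here is a rough usage sketch; the expression, key names, and tensor shapes are made up, and it assumes the sympy_torch_script helper above is in scope:

import sympy
import torch

# Hypothetical example: build a wrapper for x**2 + sin(y), script it, and
# evaluate it on a dictionary of tensors.
x, y = sympy.symbols("x y")
wrapped = sympy_torch_script(x**2 + sympy.sin(y), ["x", "y"])
scripted = torch.jit.script(wrapped)
out = scripted({"x": torch.rand(4), "y": torch.rand(4)})  # tensor of shape (4,)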

The sympy_torch_script helper above builds a function that takes a dictionary of tensors, picks out the relevant arguments, and then runs the lambdified function. I had to make some minor changes to torch_lambdify, but it works on both CPU and CUDA. There are still some issues when the output is a constant, as JIT outputs it as an int or float rather than a tensor.
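
One possible way to sidestep the constant-output issue, sketched here as a hypothetical helper (not part of the post above or modulus-sym), is to special-case expressions with no free symbols and return a tensor filled with the constant, shaped like the first input, so the annotated torch.Tensor return type always holds:

import sympy
import torch
from typing import Callable, Dict, List

def constant_torch_script(expr: sympy.Expr, keys: List[str]) -> Callable:
    # Hypothetical helper: for a constant expression, skip lambdify and emit a
    # wrapper that returns a tensor constant instead of a bare int/float.
    value = float(expr)
    key = keys[0]  # assumes at least one input key is provided

    def _wrapped(vars: Dict[str, torch.Tensor]) -> torch.Tensor:
        return torch.full_like(vars[key], value)

    return _wrapped

# e.g. torch.jit.script(constant_torch_script(sympy.Integer(2), ["x"]))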

Seems like a good idea?

Describe any alternatives you have considered

No response

Additional context

No response

