Xin Ma

Results 21 comments of Xin Ma

> torch.sum has 3 overrides, and I can't recognize which one is used. Maybe another override with no dim argument is used. > > We will test the auto-convert-op feature...

> Sorry I can't reproduce the bug. If you are in a hurry, try to replace envs[env name]\lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py and run. Do you need me to provide the complete code so...

> Thank you, and I need a dependency list with versions. It fails before the speedup procedure due to problems in dependencies. These are the packages from `pip list`. ```...

> The `model` parameter in `torch.jit.trace(model, im, strict=False)` should have one forward input. Thus, I added a new `dummy_forward` function. You need to add this function in `mmcls.models.classifiers.base.py`: ``` @auto_fp16(apply_to=('img',...
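The comment above describes the key trick: `torch.jit.trace` traces whatever `model.forward` is, so a model whose real `forward` takes several arguments gets a single-input `dummy_forward` rebound in its place before tracing. A minimal, torch-free sketch of that adapter idea (the class and return values here are hypothetical stand-ins; the real mmcls version is decorated with `@auto_fp16`):

```python
class Classifier:
    """Toy stand-in for an mmcls-style classifier whose normal
    forward takes several arguments (img, return_loss, ...)."""

    def forward(self, img, return_loss=True, **kwargs):
        # Multi-argument forward used for ordinary training/eval.
        return ("loss", img) if return_loss else ("logits", img)

    def dummy_forward(self, img):
        # Single-input entry point for tracing. Call the class-level
        # forward explicitly so this still works after the instance
        # attribute `forward` has been rebound below.
        return Classifier.forward(self, img, return_loss=False)


model = Classifier()
# Rebind so torch.jit.trace(model, im, strict=False) would see a
# one-argument forward.
model.forward = model.dummy_forward
assert model.forward(3) == ("logits", 3)
```

With a real model the same rebinding is done just before `torch.jit.trace`, and the original `forward` can be restored afterwards.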

> With the speedup-auto-op feature the op aten::sum can be treated correctly. But we have no replacement for the custom module 'RSoftmax', so it still can't be pruned. You can open...

> You can try it by downloading [the file](https://github.com/Louis-J/nni/blob/dev-speedup-auto-op/nni/compression/pytorch/speedup/jit_translate.py) and moving it to `[your_env_name]\lib\site-packages\nni\compression\pytorch\speedup\jit_translate.py`. Hi, I found a bug: my torch version is 1.10.1, but I hit an error:...

> > there is another problem dealing with rsoftmax, which can be avoided by replacing the '[your_env_name]\lib\site-packages\nni\common\graph_utils.py:374-380' from: > > ```python > if predecessor_node in nodes: > if predecessor_node not...

> ```python > customized_replace_func=customized_replace_func > ``` Thanks, I tried it. Although the model speedup succeeds, the forward pass is wrong. ``` [2022-07-20 18:42:29] replace module (name: head.fc, op_type: Linear)...
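The `customized_replace_func` being passed above is how the speedup pass is told to replace module types (like the custom 'RSoftmax') that it has no built-in rule for: a mapping from a module type to a function that builds its pruned replacement. A toy, torch-free sketch of that dispatch idea (the names and the dict-based structure here are illustrative, not NNI's exact API):

```python
class RSoftmax:
    """Stand-in for the custom module the speedup pass can't handle."""
    def __init__(self, radix):
        self.radix = radix

class PrunedRSoftmax(RSoftmax):
    """Stand-in for the rebuilt, pruned version of the module."""
    pass

def replace_rsoftmax(module, masks=None):
    # Build a replacement that keeps the module's configuration;
    # a real replace function would also shrink weights per `masks`.
    return PrunedRSoftmax(module.radix)

# Dispatch table keyed by module type name, mimicking how a
# customized replace function lets the pass handle unknown modules.
customized_replace_func = {"RSoftmax": replace_rsoftmax}

def replace_module(module, masks=None):
    builder = customized_replace_func.get(type(module).__name__)
    if builder is None:
        raise RuntimeError(f"no replace rule for {type(module).__name__}")
    return builder(module, masks)

new_mod = replace_module(RSoftmax(radix=2))
print(type(new_mod).__name__, new_mod.radix)  # PrunedRSoftmax 2
```

As the comment notes, a replacement can make the speedup run to completion yet still compute the wrong forward if the replace function does not reproduce the original module's semantics on the pruned shapes.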

> I can't reproduce it. I got a 'resnest50_8xb16_cifar10_sparsity_0.2.pth'. I tested several times and always got the same problem. The following link is my test code. It may help...

> tested in torch 1.10, 1.11 and 1.12 and all ended successfully. I tested it under torch 1.11, but I suffered from the same problem. And could you successfully test...