leoluopy
@wlguan Hi, how much does the inference time decrease after your pruning? Any statistics?
> I finally solved this problem by downgrading pytorch version to 0.4.0 and using ubuntu version 16.04.

Thanks for sharing the method. I created a new torch env using...
I guess the design of `* 2 - 0.5` and `(* 2) ** 2` is to get a larger backward optimization scope for xy and wh, is that right? @derronqi
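A minimal sketch of the YOLOv5-style decode these terms come from, for reference; the grid offsets and anchors below are stand-in assumptions, not the repo's actual values.

```python
import torch

# Stand-in tensors for the decode being discussed (all values are assumptions).
sig_xy = torch.rand(4, 2)           # sigmoid(tx, ty), in (0, 1)
sig_wh = torch.rand(4, 2)           # sigmoid(tw, th), in (0, 1)
grid = torch.zeros(4, 2)            # per-cell grid offsets (assumed zero here)
anchor_wh = torch.ones(4, 2)        # anchor sizes (assumed unit anchors)

xy = (sig_xy * 2.0 - 0.5) + grid        # center offset range (-0.5, 1.5): can cross cell borders
wh = (sig_wh * 2.0) ** 2 * anchor_wh    # size range (0, 4) * anchor: bounded, no exp() blow-up
```

The wider ranges keep the sigmoid away from its saturated ends, so the gradients for xy and wh stay usable over a larger span, which seems to be the intent being asked about.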
The MegaFace raw images do not seem to be publicly available. The following code can add the masks, but it may not be exactly the same as FaceXZoo's implementation. ``` import math...
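Since the snippet above is truncated, here is a minimal sketch of the general idea of drawing a synthetic mask over a face crop. The 5-point landmark layout, coordinates, and helper name are assumptions for illustration; FaceXZoo's actual mask renderer works differently.

```python
import cv2
import numpy as np

def add_simple_mask(img, landmarks, color=(200, 200, 200)):
    """Cover the lower face with a filled polygon based on 5-point landmarks (hypothetical helper)."""
    left_eye, right_eye, nose, mouth_l, mouth_r = [np.array(p, dtype=np.float32) for p in landmarks]
    h, w = img.shape[:2]
    chin_y = min(h - 1, int(2 * mouth_l[1] - nose[1]))   # rough chin estimate
    polygon = np.array([
        [left_eye[0], nose[1]],     # left cheek, at nose height
        [right_eye[0], nose[1]],    # right cheek, at nose height
        [right_eye[0], chin_y],     # right jaw
        [left_eye[0], chin_y],      # left jaw
    ], dtype=np.int32)
    out = img.copy()
    cv2.fillPoly(out, [polygon], color)
    return out

# Usage with hypothetical landmark coordinates for a 112x112 aligned crop.
img = np.zeros((112, 112, 3), dtype=np.uint8)
landmarks = [(38, 51), (73, 51), (56, 71), (41, 92), (70, 92)]
masked = add_simple_mask(img, landmarks)
```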
> Just run 'add_mask_all.py' with the provided 'facescrub2template_name.txt' and 'facescrub_face_info.txt'.

Thanks for the reply. I have tried the method you mentioned; it seems that the original MegaFace face...
Hi, I am here again. A tip for anyone looking into this question: the code below can compare the eval difference between the original weights and the folded weights: ``` import os import torch...
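Since the script above is truncated, here is a minimal sketch of the comparison idea: load the original and folded checkpoints, run the same input through both, and look at the output gap. `build_model` and both checkpoint paths are hypothetical placeholders, and the sketch assumes both checkpoints fit the same architecture.

```python
import torch
import torch.nn as nn

def build_model():
    # Placeholder; replace with the repo's actual model constructor.
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )

def load_eval(ckpt_path):
    model = build_model()
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state, strict=False)
    return model.eval()

original = load_eval("original_weights.pth")   # hypothetical path
folded = load_eval("folded_weights.pth")       # hypothetical path

# Same random input through both models; a tiny gap suggests folding preserved the function.
x = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    gap = (original(x) - folded(x)).abs().max().item()
print("max abs output difference:", gap)
```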
If with_compactor=True, it is the unpruned model. If with_compactor=False, the weights are folded, i.e. pruned.
Yes, same accuracy, and after folding there are lots of channels with zero weights, which can be removed with no harm. See the paper for details.
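A minimal sketch (my own, not from the repo) of counting the output channels whose kernels are entirely zero after folding; those are the channels that can be pruned away without changing the outputs.

```python
import torch
import torch.nn as nn

def count_zero_out_channels(model, eps=1e-8):
    """Return, per Conv2d layer, how many output channels have an all-zero kernel."""
    report = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            # Per-output-channel L1 norm of the kernel.
            norms = module.weight.detach().abs().sum(dim=(1, 2, 3))
            report[name] = int((norms < eps).sum())
    return report

# Usage with a stand-in model; replace with the folded checkpoint's model.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
print(count_zero_out_channels(model))
```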
Marking this; same problem here. Have you found a solution? @Dhaizei
This is the error; has anyone run into it?

transformers/training_args.py", line 1712, in __setattr__
    raise FrozenInstanceError(f"cannot assign to field {name}")
dataclasses.FrozenInstanceError: cannot assign to field generation_max_length
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0
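A minimal sketch of one common workaround, assuming the error comes from assigning generation_max_length to an already-constructed TrainingArguments object (newer transformers versions make the arguments effectively immutable after construction). The output directory and values below are hypothetical.

```python
from transformers import Seq2SeqTrainingArguments

# Pass generation settings through the constructor instead of setting them afterwards,
# e.g. avoid `args.generation_max_length = 128`, which triggers FrozenInstanceError.
args = Seq2SeqTrainingArguments(
    output_dir="./out",            # hypothetical output path
    predict_with_generate=True,
    generation_max_length=128,
)
```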