APRCP-HRNet
BnCfg in PrunePose_HrNet
Hi, I see a bncfg param in purnpose_hrnet.py, but I can't find where it is defined or provided. Please explain!
For APRCP HRNet you can get our pretrained model at: https://drive.google.com/file/d/1-EXl9dSatzmUSGpWGuBFlcPPM9T8Gcfr/view?usp=drivesdk
For a pruned model, there are two main files:
pruneXXX.txt // used to build the model
XXXXXXXX.pth // weights of the model
pruneXXX.txt is the bncfg.
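To make the file roles concrete, here is a minimal sketch, not the repo's exact API: the paths, the whitespace-separated format of pruneXXX.txt, and the commented-out constructor call are all assumptions.

```python
# Hypothetical sketch: parse the bncfg from pruneXXX.txt and build the pruned
# model, then load the matching weights. Paths and parsing format are assumed.
import torch

BNCFG_PATH = "pruneXXX.txt"      # per-layer channel counts kept after pruning
WEIGHTS_PATH = "XXXXXXXX.pth"    # weights of the pruned model

def load_bncfg(path):
    """Read one integer per pruned BN layer (assumed whitespace-separated)."""
    with open(path) as f:
        return [int(tok) for tok in f.read().split()]

bncfg = load_bncfg(BNCFG_PATH)
# Repo-specific call, shown only as a comment because the exact signature
# may differ:
# model = PosePurnHighResolutionNet(cfg, bncfg=bncfg)
# model.load_state_dict(torch.load(WEIGHTS_PATH, map_location="cpu"))
```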
Thanks @vvhj. I tested the w32_extreme and w32_best models on an RTX 2080 Ti; the inference time of both models (~30 ms) is the same as the original model. There is no change!
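Independent of the repo, a common pitfall when benchmarking on GPU is measuring without warm-up and synchronization; below is a minimal timing sketch (the 256x192 input size is an assumption based on typical HRNet pose configs).

```python
# Minimal sketch, not tied to the repo: timing GPU inference needs warm-up
# iterations and torch.cuda.synchronize(), otherwise asynchronous kernel
# launches can make pruned and unpruned models look equally fast.
import time
import torch

def benchmark(model, input_size=(1, 3, 256, 192), iters=100, device="cuda"):
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    with torch.no_grad():
        for _ in range(10):                 # warm-up
            model(x)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000.0  # ms per forward pass
```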
On CPU it works. The effect on GPU is not obvious because of parallel computing, but I am trying to speed up GPU inference by pruning without skipping the fusion layers (another reason may be that the RTX 2080 Ti is too powerful).
Some new work will be released after CVPR 2021, and we will respond to you as soon as possible.
I understand, thank you very much.
Why is it self.bnindex += 2 in class PosePurnHighResolutionNet? Why isn't it += 1?
We prune 2 BN layers in one basic module, so self.bnindex += 2. If you only prune 1 BN layer, use self.bnindex += 1. You can add a breakpoint on the "modules" to see the network structure.
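As an illustration only (this is not the repo's code, and the skip connection of the real basic block is omitted for brevity): a block with two conv+BN pairs consumes two entries of bncfg, which is why the running index advances by 2.

```python
# Illustrative sketch: a basic block contains two conv+BN pairs, so building it
# uses bncfg[bnindex] and bncfg[bnindex + 1], and the builder then does
# bnindex += 2. The residual/skip connection is left out to keep this short.
import torch.nn as nn

class PrunedBasicBlock(nn.Module):
    def __init__(self, in_ch, bncfg, bnindex):
        super().__init__()
        mid_ch = bncfg[bnindex]        # channels kept after pruning the first BN
        out_ch = bncfg[bnindex + 1]    # channels kept after pruning the second BN
        self.conv1 = nn.Conv2d(in_ch, mid_ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_ch)
        self.conv2 = nn.Conv2d(mid_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        return self.relu(self.bn2(self.conv2(out)))

# builder loop sketch: two BN layers per block, hence bnindex += 2
# bnindex = 0
# for _ in range(num_blocks):
#     blocks.append(PrunedBasicBlock(in_ch, bncfg, bnindex))
#     bnindex += 2
```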
Thanks @vvhj. I saw that at the end you compute the idx by adding the mean of (AP + acc) and the index of the percent. Why add the index of the percent? I don't quite understand.
BTW, have you ever considered taking not only the percent, but also the absolute sum of all the weights in each channel? It could be more reasonable.
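For what it's worth, here is a rough sketch of the two selection criteria being discussed as I read them (not the repo's exact code): (a) keeping channels whose BN scaling factor is above a global percentile, versus (b) ranking channels by the absolute sum (L1 norm) of the convolution weights feeding them.

```python
# Sketch of two channel-selection criteria; both are assumptions about the
# discussion above, not code copied from the repository.
import torch

def keep_mask_by_percent(bn_weight, percent=0.5):
    """(a) Threshold BN scaling factors (gamma) at the `percent` quantile."""
    gamma = bn_weight.abs()
    thresh = torch.quantile(gamma, percent)
    return gamma > thresh                      # True = keep this channel

def keep_mask_by_l1(conv_weight, num_keep):
    """(b) Keep the `num_keep` output channels with the largest L1 weight norm."""
    l1 = conv_weight.abs().sum(dim=(1, 2, 3))  # one value per output channel
    keep = torch.zeros_like(l1, dtype=torch.bool)
    keep[l1.topk(num_keep).indices] = True
    return keep
```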
I modified it with fuse-layer pruning. This reduces FLOPs by about 1/3, and the inference time becomes about 4/5 of the original. The parameters can reach 1/4 of the source model size; however, it is still not real-time.
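In case it helps reproduce such numbers, here is a small sketch for reporting complexity, assuming the third-party thop package and that original_model / pruned_model are already-constructed placeholders (not names from the repo).

```python
# Hypothetical helper (not part of the repo): report model complexity with the
# third-party `thop` package. thop returns multiply-accumulate counts (MACs),
# which is what most pruning papers report as "FLOPs".
import torch
from thop import profile  # pip install thop

def report(model, input_size=(1, 3, 256, 192)):
    x = torch.randn(*input_size)
    macs, params = profile(model, inputs=(x,), verbose=False)
    print(f"MACs (G): {macs / 1e9:.2f}   Params (M): {params / 1e6:.2f}")

# report(original_model)   # placeholder names; replace with the real models
# report(pruned_model)
```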
Thanks. BTW, I will try the absolute-sum criterion.