sonack

Results 17 comments of sonack

> In the released code, we did not put the speedup module in the repo. You can refer to our MASTER paper for the speedup module. I expect the end2end...

> In the competition, we used our own internal tool (FastOCR) to implement our algorithm. In the mmocr framework, we did not implement it. If you fully understand the master...

> > > In the competition, we used our own internal tool (FastOCR) to implement our algorithm. In the mmocr framework, we did not implement it. If you fully understand...

Hi @JiaquanYe @delveintodetail, I benchmarked the inference speed of this [large model](https://github.com/JiaquanYe/TableMASTER-mmocr/blob/master/configs/textrecog/master/table_master_ConcatLayer_ResnetExtract_Ranger_0721.py). With max_len=500 and no early termination on EOS: 1. without removing the redundant computation at code lines 42-43, roughly 11 s/img; 2. after removing the redundant computation, roughly 6 s/img; 3. with memory-cached inference added on top, roughly 4.5 s/img. For the same model config, how fast can your internal memory-cached inference implementation get? I profiled the current memory-cached inference implementation, and the main bottleneck is the K/Q/V matrix computation, so it does not appear to be an issue with PyTorch's implementation itself. Everything currently runs in native PyTorch, without TorchScript or other engineering optimizations.
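The memory-cached inference discussed above is the standard key/value-cache trick for autoregressive transformer decoding: instead of re-projecting the whole decoded prefix at every step, cache each token's K and V rows and project only the newest token. A minimal single-head sketch in NumPy (all names, shapes, and the random weights here are illustrative, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                     # model dim (illustrative)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_full(xs):
    """Slow path: recompute K/V for the whole prefix at every decode step."""
    X = np.stack(xs)                      # (t, d) all decoded tokens so far
    q = xs[-1] @ Wq                       # query for the newest token only
    K, V = X @ Wk, X @ Wv                 # O(t * d^2) redone each step
    w = softmax(q @ K.T / np.sqrt(d))
    return w @ V

class CachedAttention:
    """Fast path: cache K/V rows; each step projects only the new token."""
    def __init__(self):
        self.K, self.V = [], []
    def step(self, x):
        self.K.append(x @ Wk)             # O(d^2) per step instead of O(t*d^2)
        self.V.append(x @ Wv)
        q = x @ Wq
        K, V = np.stack(self.K), np.stack(self.V)
        w = softmax(q @ K.T / np.sqrt(d))
        return w @ V

# Both paths give identical attention outputs at every decode step.
xs, cached = [], CachedAttention()
for _ in range(5):
    xs.append(rng.standard_normal(d))
    assert np.allclose(attend_full(xs), cached.step(xs[-1]))
```

Note the cache removes the redundant K/V projections but not the attention matmul itself, which still grows with prefix length — consistent with the observation that the profile remains dominated by the K/Q/V matrix computation.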

> > Hi @JiaquanYe @delveintodetail, I benchmarked the inference speed of this [large model](https://github.com/JiaquanYe/TableMASTER-mmocr/blob/master/configs/textrecog/master/table_master_ConcatLayer_ResnetExtract_Ranger_0721.py): > > with max_len=500 and no early termination on EOS: > > > > 1. without removing the redundant computation at code lines 42-43, roughly 11 s/img; > > 2. after removing it, roughly 6 s/img; > > 3. with memory-cached inference added, roughly 4.5 s/img. > > > > For the same model config, how fast can your internal memory-cached inference get? I profiled the current memory-cached...

> @YanShuang17 > In my experiments, the initial lr is 1e-4, multiplied by 0.8 every 10k iterations. However, in my opinion, training was not sensitive to the lr parameter....
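The schedule quoted above (initial lr 1e-4, multiplied by 0.8 every 10k iterations) is a plain step decay; a minimal sketch, with the function name and defaults chosen here for illustration:

```python
def lr_at(iteration, base_lr=1e-4, decay=0.8, interval=10_000):
    """Step decay: the lr is multiplied by `decay` once per `interval` iterations."""
    return base_lr * decay ** (iteration // interval)

print(lr_at(0))       # 1e-4 for the first 10k iterations
print(lr_at(10_000))  # ~8e-5 after the first decay step
```

In PyTorch the same schedule is expressed as `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.8)`.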
