haoran1062
You need to train on your own data to get Chinese output; the default recognition model is English.
About the same. It separates dense Chinese text a bit better than DB: some instances can be split apart, which reduces cases of adjacent text sticking together.
Follow https://github.com/thefloweringash/iousbhiddriver-descriptor-override/issues/22
> @haoran1062 Can you kindly show one or two examples of what you are testing on? Have you tested the demo on CTW1500, which is also based on long text...
> @haoran1062 Thanks for sharing the results! It seems you are trying to train and test on a new (document-like) dataset, and in that case you need to train your own...
> @haoran1062 What you mentioned could be the problem. Since I haven't tried data of much higher density, I am not sure if it is the reason. I noticed that...
> @haoran1062 Thanks for providing the GT. It should be correct. I got what you mean now, and I guess it is a limitation of the current method. Seems like detection...
> @haoran1062 The problem might be caused by the label assignment during the `BezierAlign` stage.
>
> You can try modifying the assignment strategies. For example:
>
> in `adet/modeling/poolers.py`, ...
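For context, here is a minimal sketch of what such an alternative assignment strategy could look like, modeled on detectron2's `assign_boxes_to_levels` heuristic. The function name `assign_beziers_to_levels` and the default `canonical_box_size`/`canonical_level` values are assumptions for illustration, not AdelaiDet's actual code; `_bezier_long_size` is the helper discussed in the next quote and sketched after it.

```
import torch

def assign_beziers_to_levels(bezier_lists, min_level, max_level,
                             canonical_box_size=224, canonical_level=4):
    # One long-size value per instance, concatenated across all images.
    sizes = torch.cat([_bezier_long_size(b) for b in bezier_lists])
    # Standard FPN heuristic, k = k0 + log2(size / 224), but driven by the
    # long side of the text instance instead of sqrt(area); the small eps
    # guards against log2(0) for degenerate instances.
    levels = torch.floor(
        canonical_level + torch.log2(sizes / canonical_box_size + 1e-8)
    )
    # Clamp to the available pyramid levels, then shift to 0-based indices.
    return torch.clamp(levels, min=min_level, max=max_level).to(torch.int64) - min_level
```

Assigning dense, long text lines by their long side rather than by area tends to push them to coarser pyramid levels, which is one way to reduce the mis-assignment the maintainer describes.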
> @haoran1062 THX! I have modified the `_bezier_long_size` function to make it more reasonable. Please try the following:
>
> ```
> def _bezier_long_size(beziers):
>     beziers = beziers.tensor
>     def ...
> ```
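The quoted snippet is cut off, so here is one plausible completion, assuming "long size" means the longer side of the axis-aligned bounding box of the 16 Bezier control-point coordinates (ABCNet's 8-control-point-per-curve layout, stored in a structure with a `.tensor` attribute). This is a guess at the intent, not the maintainer's actual code.

```
import torch

def _bezier_long_size(beziers):
    # Assumed layout: beziers.tensor is N x 16, with control points stored
    # as (x0, y0, x1, y1, ..., x7, y7) over the top and bottom curves.
    cp = beziers.tensor
    xs, ys = cp[:, 0::2], cp[:, 1::2]
    # Longer side of the control points' axis-aligned bounding box.
    width = xs.max(dim=1).values - xs.min(dim=1).values
    height = ys.max(dim=1).values - ys.min(dim=1).values
    return torch.max(width, height)
```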