
Scanning Single Shot Detector for Math in Document Images

15 ScanSSD issues

![3333](https://user-images.githubusercontent.com/41030145/99047764-a0226a00-25cf-11eb-9f93-5a098d335f8c.png) I have 16 GB of memory and an NVIDIA 1660; I don't know if that is enough? @MaliParag and: --dataset GTDB --dataset_root ~/data/GTDB/ --cuda True --visdom True --batch_size 1 --num_workers 4 --exp_name IOU512_iter1 --model_type...

Merge for the equation generator scripts. It was implemented only for Torricelli, but the structure for the remaining steps, that is, generating the PDF, the page image, and the cropped image...

Hello, I wonder if it is possible to distinguish embedded (inline) formulas from displayed formulas in the detection results? Thanks a lot!

- 760 - 07-May-20 19:39:01 - Finished loading model!
- 760 - 07-May-20 19:39:02 - **Test dataset size is 3162**
- 760 - 07-May-20 19:39:03 - processing 8/3162
- 760...

I'm trying to test my own image, but I don't know how to do it. I tried this command: `python3 test.py --dataset_root ./ --trained_model AMATH512_e1GTDB.pth --visual_threshold 0.25 --cuda True --exp_name`...
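For reference, a plausible complete invocation assembled only from the flags shown in the issues on this page; the paths, threshold, and experiment name are placeholders to adapt, and this is a sketch rather than the repository's documented command:

```shell
# Hypothetical test.py invocation on your own images; flags are those
# that appear in the issues above, all values are placeholders.
python3 test.py \
  --dataset_root ./my_images/ \
  --trained_model AMATH512_e1GTDB.pth \
  --visual_threshold 0.25 \
  --cuda True \
  --exp_name my_test_run
```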

Hi Nacriema, I have downloaded the PDF files from your website and converted them into PNG images. But I found that there are always some images whose size...
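Differences in rendered page size usually come from the rasterization resolution. As a hedged example (not the repo's own conversion script), Poppler's `pdftoppm` renders every page at a fixed DPI, so same-sized PDF pages yield same-sized PNGs:

```shell
# Render each page of paper.pdf to PNG at a fixed 300 DPI so that
# equally sized PDF pages produce equally sized images.
# "paper.pdf" and the "page" output prefix are placeholders.
pdftoppm -png -r 300 paper.pdf page
# produces page-1.png, page-2.png, ... (zero-padded for longer documents)
```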

I'm getting this error when following the steps outlined in the README.md file for testing pre-trained model weights on `testing_data`.

Hello! I've tried to run `detect.py` in a conda virtual env in which I've installed the CPU version of PyTorch. It shows me the following error: ScanSSD-master>python detect.py Traceback (most recent call...

Hello everyone. I'm trying to run **test.py**, but I get the error below: `RuntimeError: Error(s) in loading state_dict for DataParallel: size mismatch for module.loc.0.weight: copying a param with shape...
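A size-mismatch error like this typically means the checkpoint was trained with a different head configuration (e.g. a different number of classes), or that a `DataParallel` `module.` prefix is getting in the way. A minimal, torch-free sketch of the usual workaround, keeping only checkpoint parameters whose shapes match the current model, with plain tuples standing in for tensor shapes (the function name and dict layout here are illustrative, not part of ScanSSD):

```python
def filter_state_dict(checkpoint, model_state):
    """Return only checkpoint entries whose key (minus any 'module.'
    prefix from DataParallel) exists in model_state with the same shape."""
    filtered = {}
    for key, shape in checkpoint.items():
        stripped = key[len("module."):] if key.startswith("module.") else key
        if stripped in model_state and model_state[stripped] == shape:
            filtered[stripped] = shape
    return filtered

# Example: one matching layer, one class-count mismatch (conf head).
ckpt = {"module.loc.0.weight": (16, 512, 3, 3),
        "module.conf.0.weight": (8, 512, 3, 3)}
model = {"loc.0.weight": (16, 512, 3, 3),
         "conf.0.weight": (12, 512, 3, 3)}
print(filter_state_dict(ckpt, model))  # → {'loc.0.weight': (16, 512, 3, 3)}
```

With real tensors you would compare `tensor.shape` instead, then call `model.load_state_dict(filtered, strict=False)` so the unmatched layers keep their freshly initialized weights.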

I have tried to download the data myself and parse it into PNG images, but I keep finding that the amount and size of the data I parsed is...