
[regarding real dataset] Please respond

Open vyaslkv opened this issue 3 years ago • 18 comments

Hello,

I understand that we can't generalize unless we have real images of different types along with their OCR. We can provide that dataset to reach accuracy comparable to Mathpix. I don't have the hardware to train, so I need a little help from you with that. Can you share your email ID for that, if possible?

vyaslkv avatar Jul 16 '20 13:07 vyaslkv

Thanks for your interest in our work; my email is [email protected]. One thing to note is that we only work on public datasets (or datasets that can be released later), so that the public can benefit from our research.

Alternatively, if you want to keep the dataset private, you can also consider cloud computing services such as Amazon EC2, Google GCE, or Microsoft Azure, which provide GPU instances billed by the hour.

da03 avatar Jul 16 '20 16:07 da03

Thanks @da03 for the quick reply. I am sending you an email for further discussion.

vyaslkv avatar Jul 16 '20 17:07 vyaslkv

@da03 what machine configuration is required for 20k training images (RAM, disk, GPU)? How many hours will it take if we train on CPU, and what will the difference be when using a GPU?

vyaslkv avatar Jul 17 '20 10:07 vyaslkv

And how many training examples are required to get a decent result, like the results you have shown on your website?

vyaslkv avatar Jul 17 '20 10:07 vyaslkv

Or can you tell me what configuration I should at least use to train the model on roughly 20k images? I am asking so that I can pick that configuration directly on AWS; otherwise I will end up provisioning either too little or too much and wasting money (because of the hourly charge).

vyaslkv avatar Jul 17 '20 14:07 vyaslkv

Regarding hardware, I think it's almost impossible to train on CPU; it would probably take forever. With a GPU, training would take less than a day even with 100k images. On AWS, any GPU configuration is probably OK since your dataset of 20k images is small.

Regarding dataset size, I think 20k is a bit small; combining it with im2latex-100k might give some reasonable results, but ideally you would need around 100k real images to train. Besides, are your images of roughly the same font size? If not, standard image normalization techniques (such as denoising and resizing to the same font size) might produce better results.
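A minimal sketch of that kind of normalization (not from this repo; assumes Pillow is installed, and the 64 px target height and 200 threshold are purely illustrative choices):

```python
from PIL import Image

def normalize(path, out_path, target_height=64):
    img = Image.open(path).convert("L")               # grayscale removes color noise
    w, h = img.size
    scale = target_height / h                         # rescale so formulas end up at a similar size
    img = img.resize((max(1, int(w * scale)), target_height), Image.LANCZOS)
    img = img.point(lambda p: 255 if p > 200 else p)  # crude background whitening / denoising
    img.save(out_path)
```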

da03 avatar Jul 17 '20 15:07 da03

By the way, if you get a GPU instance, I would recommend using this Dockerfile to save you the trouble of installing Lua Torch: https://github.com/OpenNMT/OpenNMT/blob/master/Dockerfile

da03 avatar Jul 17 '20 15:07 da03

Thanks a lot @da03 for helping me out

vyaslkv avatar Jul 18 '20 11:07 vyaslkv

@da03 one last question: I don't have the LaTeX, I only have the OCR of the images; will that work? (Like this: (5+2sqrt3)/(7+4sqrt3) = a-b sqrt3.) I have 150k such images (and even more). Will that work, or do I need LaTeX only?

vyaslkv avatar Jul 18 '20 11:07 vyaslkv

Cool, that will work if you do proper tokenization: the label should be something like "( 5 + 2 sqrt 3 ) / ( 7 + 4 sqrt 3 ) = a - b sqrt 3" (tokens separated by blanks). The algorithm should work for any output format.
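A rough sketch of what such a tokenizer could look like for this OCR notation (hypothetical, not part of the repo; it treats "sqrt" as one token and every other digit, letter, or symbol as its own token):

```python
import re

def tokenize(label):
    # split into blank-separated tokens; "sqrt" stays a single token
    tokens = re.findall(r"sqrt|\d|[A-Za-z]|[^\sA-Za-z\d]", label)
    return " ".join(tokens)

print(tokenize("(5+2sqrt3)/(7+4sqrt3) = a-b sqrt3"))
# ( 5 + 2 sqrt 3 ) / ( 7 + 4 sqrt 3 ) = a - b sqrt 3
```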

da03 avatar Jul 18 '20 12:07 da03

OK, thanks @da03, you are helping a lot.

vyaslkv avatar Jul 18 '20 12:07 vyaslkv

Hello @da03 ,

I have one quick question: how much disk space will it require for 150k training examples? I took 250 GB of space, but it got full while creating files like demo.train.1.pt (during onmt_preprocess) using the default parameters given in the doc.

vyaslkv avatar Jul 19 '20 18:07 vyaslkv

That's surprising. What are the sizes of those images?

da03 avatar Jul 19 '20 20:07 da03

(187, 720, 3) (2448, 3264, 3) (2209, 1752, 3) (1275, 4160, 3) (3456, 4608, 3) (1821, 4657, 3) (226, 1080, 3) (388, 2458, 3) (3264, 2448, 3) (625, 4100, 3) (379, 2640, 3) (1011, 4110, 3) like this @da03

vyaslkv avatar Jul 20 '20 05:07 vyaslkv

How much disk space do I need @da03? Any rough idea?

vyaslkv avatar Jul 20 '20 05:07 vyaslkv

I am using OpenNMT-py to do this; should I use the main repo, which uses Lua?

vyaslkv avatar Jul 20 '20 05:07 vyaslkv

@vyaslkv Have you made any progress on your data?

I agree that working towards a public model is important.

beevabeeva avatar Feb 08 '21 18:02 beevabeeva

@vyaslkv Sorry for the delay. The images you are using seem to be huge: for example, an image of resolution 3264 x 2448 has roughly 8M pixels, and if we use a dataset containing 10k such images (we need at least thousands of training instances to learn a reasonable model), it would take about 320 GB (8M x 10k x 4 bytes). The dataset used in this repo, im2latex-450k, is much smaller, since the images are much smaller (they are mostly single math formulas), and we've downsampled them in preprocessing to make them even smaller.
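The same back-of-the-envelope estimate, written out (assumes 4 bytes per pixel after preprocessing, as in the comment above; the numbers are illustrative):

```python
pixels_per_image = 3264 * 2448          # ~8M pixels for one large photo
n_images = 10_000
bytes_total = pixels_per_image * n_images * 4
print(f"{bytes_total / 1e9:.0f} GB")    # ~320 GB
```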

I think you need to crop your images to contain ONLY the useful parts, cutting off any padding, and downsample them as much as you can (while humans can still identify the formulas at the reduced resolution).
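A rough sketch of that crop-and-downsample step (not the repo's preprocessing; assumes Pillow, a near-white background, and an illustrative 800 px width cap):

```python
from PIL import Image, ImageChops

def crop_and_shrink(path, out_path, max_width=800):
    img = Image.open(path).convert("L")
    # bounding box of non-white content; crop away the padding around it
    bg = Image.new("L", img.size, 255)
    bbox = ImageChops.difference(img, bg).getbbox()
    if bbox:
        img = img.crop(bbox)
    # downsample so the image stays readable but much smaller
    w, h = img.size
    if w > max_width:
        img = img.resize((max_width, max(1, int(h * max_width / w))), Image.LANCZOS)
    img.save(out_path)
```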

da03 avatar Feb 08 '21 18:02 da03