StructEqTable-Deploy

Is the dataset publicly available?

Open JasonKitty opened this issue 1 year ago • 8 comments

I can only find the DocGenome dataset. Was the table recognition model trained on this dataset?

Thank you!

JasonKitty avatar Aug 23 '24 01:08 JasonKitty

Yes. Our model is trained on the DocGenome dataset. Specifically, we extracted the table data from DocGenome to fine-tune our model.

Thank you for your interest in our work! Let me know if you have any further questions.

PrinceVictor avatar Aug 23 '24 03:08 PrinceVictor

Thank you for your reply! I have two more questions.

  1. The article mentions that table recognition and formula recognition both use the same model architecture as Pix2Struct. Are these models trained separately for each task?

  2. For formula recognition, MinerU uses UniMERNet, which adds a length embedding to the decoder. Is a similar improvement applied in table recognition?

JasonKitty avatar Aug 23 '24 16:08 JasonKitty

Thank you for your questions.

  1. Yes, separate models are trained for table and formula recognition.
  2. Unlike UniMERNet, there is no length embedding added to the decoder.
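For context, the length-embedding idea from UniMERNet can be sketched roughly as follows. This is an illustrative toy in plain Python, not the actual UniMERNet implementation, and details (bucket sizes, where the vector is injected) may differ: the expected output length is mapped to a bucket, and a per-bucket vector is added to the decoder's input embeddings so decoding is conditioned on target length.

```python
import random

# Toy sketch of a length embedding (as in UniMERNet; NOT used here):
# a learned vector per length bucket is added to every decoder input.
D_MODEL, N_BUCKETS, BUCKET_SIZE = 8, 4, 64

random.seed(0)
# One vector per length bucket (random here; learned in practice).
length_table = [[random.gauss(0, 1) for _ in range(D_MODEL)]
                for _ in range(N_BUCKETS)]

def add_length_embedding(decoder_inputs, target_len):
    """decoder_inputs: list of d_model-sized vectors; adds the bucket
    vector for target_len to every position."""
    bucket = min(target_len // BUCKET_SIZE, N_BUCKETS - 1)
    vec = length_table[bucket]
    return [[x + v for x, v in zip(row, vec)] for row in decoder_inputs]

inputs = [[0.0] * D_MODEL for _ in range(5)]
out = add_length_embedding(inputs, target_len=100)  # 100 // 64 -> bucket 1
```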

PrinceVictor avatar Aug 26 '24 03:08 PrinceVictor

Thank you! One more question: the paper mentions the tokenizer from Nougat. Has this been updated? I find that the two tokenizers are not the same.

JasonKitty avatar Aug 26 '24 09:08 JasonKitty

We currently utilize the tokenizer from Pix2Struct, but we have expanded the vocabulary to support the Chinese language better.
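Conceptually, the vocabulary extension works like the toy sketch below. This is illustrative only, not our training code; the token strings are hypothetical, and in practice the model's embedding table must also be resized to match the new vocabulary size (e.g. `resize_token_embeddings` in Hugging Face Transformers).

```python
# Toy sketch: extending a subword vocabulary with extra tokens,
# e.g. common Chinese characters, using fresh consecutive ids.
base_vocab = {"<pad>": 0, "<eos>": 1, "\\begin{tabular}": 2, "&": 3}

extra_tokens = ["表", "格", "中"]  # hypothetical additions for Chinese support

def extend_vocab(vocab, new_tokens):
    """Append unseen tokens, giving each the next free id."""
    for tok in new_tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

vocab = extend_vocab(dict(base_vocab), extra_tokens)
# The embedding matrix would then grow from len(base_vocab) rows
# to len(vocab) rows, with the new rows freshly initialized.
```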

PrinceVictor avatar Aug 30 '24 02:08 PrinceVictor

Table recognition is a token-intensive task, and I think a dedicated tokenizer could streamline the output representation and improve both inference speed and training performance.
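As a rough illustration of what I mean: merging frequent structural strings into single tokens shrinks sequences considerably. This is a toy greedy longest-match tokenizer over a hand-picked vocabulary, not any real tokenizer implementation:

```python
# Toy comparison: character-level fallback vs. a vocabulary with
# dedicated table-structure tokens.
row = r"\begin{tabular}{cc} a & b \\ \end{tabular}"

# Worst case: every character is its own token.
char_tokens = list(row)

# Frequent structural strings become single tokens (hypothetical vocab).
structure_vocab = [r"\begin{tabular}", r"\end{tabular}", "{cc}", r"\\", "&"]

def tokenize(text, vocab):
    """Greedy longest-match over a fixed vocabulary; unmatched
    characters fall back to single-character tokens."""
    tokens, i = [], 0
    pieces = sorted(vocab, key=len, reverse=True)
    while i < len(text):
        for piece in pieces:
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

merged = tokenize(row, structure_vocab)
```

Here the merged sequence is a fraction of the character-level length, which translates directly into fewer decoder steps at inference time.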

JasonKitty avatar Aug 30 '24 06:08 JasonKitty

Thank you for your valuable suggestion. We will continue to improve the model for better performance.

PrinceVictor avatar Sep 02 '24 07:09 PrinceVictor

Questions Regarding the Data Preparation.

  1. The article mentions that the training data consists of 500k articles. May I ask how many table image–LaTeX pairs were used for StructEqTable training?
  2. Table LaTeX source often contains cross-references (e.g., \ref{}, \cite{}, \citep{}) and non-unique expressions (e.g., \textbf{} vs. \bf, or different formatting commands that produce similar visual effects). Does such noise negatively affect the model's learning?
  3. Was any data cleaning performed on the table LaTeX?
  4. How were the table images annotated with the corresponding LaTeX text?
  5. What are the shortcomings of this model?
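For point 3, the kind of cleaning I have in mind could be as simple as the sketch below: stripping cross-reference commands and normalizing one aliased formatting form. This is purely illustrative (a few regexes, not a real LaTeX parser, and certainly not your pipeline):

```python
import re

def clean_table_latex(src: str) -> str:
    """Hypothetical cleaning pass for table LaTeX:
    drop cross-references, normalize {\bf ...} to \textbf{...}."""
    # Remove citation/reference commands together with their arguments.
    src = re.sub(r"\\(?:ref|cite|citep|citet|label)\{[^}]*\}", "", src)
    # Map the simple braced old-style form {\bf x} to \textbf{x}.
    src = re.sub(r"\{\\bf\s+([^{}]*)\}", r"\\textbf{\1}", src)
    # Collapse whitespace left behind by the removals.
    return re.sub(r"[ \t]+", " ", src).strip()

cleaned = clean_table_latex(r"value \cite{smith2020} and {\bf bold}")
```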

Looking forward to your reply. Thank you!

JasonKitty avatar Sep 04 '24 03:09 JasonKitty