
Loading unprocessed corpus documents with CTM and Optimizer

Open · lfmatosm opened this issue 3 years ago · 4 comments

  • OCTIS version: 1.10.0
  • Python version: 3.7.6
  • Operating System: Ubuntu 20.04 LTS

Description

I asked this in #29, but decided to open a new issue because this is a more specific scenario. So, here it is:

Hi @silviatti. So, if I understand correctly, there's currently no way to load the unprocessed corpus documents into OCTIS' CTM while using its optimizer, in a manner similar to what's done in the standalone CTM README?

Originally posted by @lffloyd in https://github.com/MIND-Lab/OCTIS/issues/29#issuecomment-974875277

What I Did

I took a look at the docs.

lfmatosm commented on Nov 29 '21

Thanks for opening a specific issue, because I had lost track of the question. Yes, I confirm that there's currently no way to load the unpreprocessed corpus. As mentioned before, this would require rethinking how we pre-process the corpus and the format of the pre-processed corpus. This is currently a .tsv file with no header. The first two columns are mandatory: they contain the pre-processed text and the partition of the dataset to which the document belongs. Then there may be an additional column representing the label associated with the document.
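
For reference, loading a custom dataset in the current format looks roughly like this (a minimal sketch; the folder path is a placeholder):

```python
from octis.dataset.dataset import Dataset

# The folder must contain a headerless corpus.tsv (column 1: pre-processed
# text, column 2: partition, optional column 3: label) plus a vocabulary.txt.
dataset = Dataset()
dataset.load_custom_dataset_from_folder("path/to/dataset/folder")  # placeholder path
```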

A possibility could be to add an additional column containing the unpreprocessed text. This column could be mandatory (although it's not needed if one doesn't use CTM) or optional. If it's optional, this can create some confusion (how do we recognize which column contains the unpreprocessed text, the labels, etc.?), unless we add a header to the .tsv file, as in the sketch below.
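
A minimal illustration of that hypothetical headered format, written with pandas (the column names are placeholders, not an implemented OCTIS format):

```python
import pandas as pd

# Hypothetical extended corpus.tsv with a header row; the header lets us
# identify the optional "label" and "unpreprocessed_text" columns by name.
corpus = pd.DataFrame({
    "text": ["cat sit mat", "dog chase ball"],          # pre-processed text (mandatory)
    "partition": ["train", "test"],                     # dataset split (mandatory)
    "label": ["pets", "pets"],                          # optional label
    "unpreprocessed_text": ["The cat sat on the mat.",  # proposed optional column
                            "A dog chased the ball."],
})
corpus.to_csv("corpus.tsv", sep="\t", index=False)      # header row disambiguates columns
```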

Happy to discuss if you want. Unfortunately, the time I can dedicate to this project has been reduced lately, so I may be slow to respond. However, I think OCTIS can be useful for the community and I'm trying to keep it alive :)

silviatti commented on Dec 09 '21

Following up on this because I stumbled on the same issue (I think) and want to double-check that I understand correctly.

I need to do hyperparameter optimization + model comparison for multiple CTMs, and I want to pass unpreprocessed text to the transformer part of the pipeline, while passing processed text to the neural topic model.

It seems like this is not supported here at the moment, so I have to stick to manually trying different HP combinations and computing metrics through https://github.com/MilaNLProc/contextualized-topic-models, correct?
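
For concreteness, this is roughly the pattern from the standalone CTM README that I mean (the document lists and the embedding model name are placeholders):

```python
from contextualized_topic_models.models.ctm import CombinedTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

# unpreprocessed_docs / preprocessed_docs: hypothetical parallel lists of strings
qt = TopicModelDataPreparation("paraphrase-distilroberta-base-v2")
training_dataset = qt.fit(
    text_for_contextual=unpreprocessed_docs,  # raw text -> transformer embeddings
    text_for_bow=preprocessed_docs,           # cleaned text -> bag-of-words input
)
ctm = CombinedTM(bow_size=len(qt.vocab), contextual_size=768, n_components=50)
ctm.fit(training_dataset)
```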

Amazing work, by the way 🙏

rbroc commented on Sep 09 '22

Thanks Roberta! :)

Yes, that is correct.

My suggestion is to first try hyperparameter configurations that "usually" work well; a rough sketch of such a manual loop follows the list. You can find some reference values in these papers:

  • https://aclanthology.org/2021.acl-short.96/
  • https://aclanthology.org/2021.ranlp-1.157/
  • https://arxiv.org/pdf/2202.07631.pdf
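
For example, a manual loop over a few configurations with OCTIS could look like this (a minimal sketch; the grid values are placeholders, loosely inspired by the papers above):

```python
from octis.dataset.dataset import Dataset
from octis.evaluation_metrics.coherence_metrics import Coherence
from octis.models.CTM import CTM

dataset = Dataset()
dataset.fetch_dataset("20NewsGroup")  # any OCTIS dataset works here

# NPMI coherence computed against the corpus used for training
npmi = Coherence(texts=dataset.get_corpus(), measure="c_npmi")

# Placeholder grid; substitute values from the papers above.
for num_topics in (25, 50):
    for lr in (2e-3, 2e-2):
        model = CTM(num_topics=num_topics, lr=lr, num_epochs=50)
        output = model.train_model(dataset)
        print(num_topics, lr, npmi.score(output))
```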

Moreover, make sure you select an appropriate pre-trained model for generating the contextualized representations of the documents. In this paper we noticed that this choice has an impact on the results. Pre-processing is also quite important: it seems CTM works better with smaller vocabularies.
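
In OCTIS's CTM wrapper, the embedding model can be swapped via the `bert_model` parameter, if I recall correctly (the checkpoint name below is just an example):

```python
from octis.models.CTM import CTM

# Any SentenceTransformers checkpoint name should work here; pick one suited
# to your language and domain. This particular name is only an example.
model = CTM(num_topics=50, bert_model="paraphrase-distilroberta-base-v2")
```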

Hope it helps :)

Silvia

silviatti commented on Sep 09 '22

Thanks for the super quick reply and the pointers! :)

rbroc commented on Sep 13 '22