
the 6 steps of BERTopic

TalaN1993 opened this issue 1 year ago • 6 comments

Have you searched existing issues? 🔎

  • [X] I have searched and found no existing issues

Describe the bug

Hello,

I have a question. According to the documentation, I understand that BERTopic consists of six steps, with the representation fine-tuning step being optional. I have read many articles in a specific field that used BERTopic, but my question is why they don't all include the remaining five steps. For example, some articles only include embedding, dimensionality reduction, clustering, and the weighting scheme (c-TF-IDF). I'd like to know whether each step can be omitted, or whether using all five remaining steps is strictly required.

Reproduction

from bertopic import BERTopic

BERTopic Version

0.16.3

TalaN1993 avatar Nov 04 '24 23:11 TalaN1993

For example, some articles only include embedding, dimensionality reduction, clustering, and the weighting scheme (c-TF-IDF).

These are actually five steps:

  • Embedding
  • Dim reduction
  • Clustering
  • Tokenization
  • c-TF-IDF

Although tokenization isn't mentioned, it is definitely used.

Typically, you would see those five steps plus the optional representation step. If you want to remove a step, the only one you could potentially remove is the dimensionality reduction step. All others are needed.
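For reference, here is a minimal sketch of how the six steps map onto explicit components in BERTopic, given your own list of documents `docs`. The specific choices (`all-MiniLM-L6-v2`, the UMAP/HDBSCAN parameters) are just common defaults, not requirements, and the empty `BaseDimensionalityReduction` class is what you would pass instead of UMAP if you really wanted to drop that step:

```python
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired
from bertopic.vectorizers import ClassTfidfTransformer
from bertopic.dimensionality import BaseDimensionalityReduction
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN
from sklearn.feature_extraction.text import CountVectorizer

# 1. Embedding
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
# 2. Dimensionality reduction (the only step you could potentially skip)
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric="cosine")
# 3. Clustering
hdbscan_model = HDBSCAN(min_cluster_size=15, metric="euclidean", prediction_data=True)
# 4. Tokenization (bag-of-words per cluster)
vectorizer_model = CountVectorizer(stop_words="english")
# 5. Weighting scheme
ctfidf_model = ClassTfidfTransformer()
# 6. (Optional) fine-tuning of the topic representation
representation_model = KeyBERTInspired()

topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,  # or BaseDimensionalityReduction() to skip step 2
    hdbscan_model=hdbscan_model,
    vectorizer_model=vectorizer_model,
    ctfidf_model=ctfidf_model,
    representation_model=representation_model,
)

# docs: your own list of documents (strings)
topics, probs = topic_model.fit_transform(docs)
```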

Many papers just implement the basic BERTopic pipeline and compare against that, which is a shame considering representation models often improve the output significantly. I can't speak to their reasoning, but I wish the representation step were included more often.
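If you do want to include the representation step, it can also help to attach several representation models at once and compare them on the same underlying topics. A sketch assuming BERTopic's multi-aspect support (the aspect names are just labels I chose, and `docs` again stands in for your own documents):

```python
from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired, MaximalMarginalRelevance

# Each key is an arbitrary aspect name; each value fine-tunes the default c-TF-IDF words.
representation_models = {
    "Main": KeyBERTInspired(),
    "MMR": MaximalMarginalRelevance(diversity=0.3),
}

topic_model = BERTopic(representation_model=representation_models)
topics, probs = topic_model.fit_transform(docs)  # docs: your list of documents

# Returns the c-TF-IDF words alongside every aspect for the same topic
topic_model.get_topic(0, full=True)
```

Because all aspects are computed over identical clusters, any difference you see comes from the representation step alone.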

MaartenGr avatar Nov 05 '24 11:11 MaartenGr

Thank you so much for your help and guidance.

TalaN1993 avatar Nov 05 '24 15:11 TalaN1993

Hello MaartenGr,

In my case, I used all six steps with three different representation models (GPT-3.5, MMR, and KeyBERT), keeping the other five steps the same. I evaluated the results using OCTIS NPMI and topic diversity, but the results were somewhat different from what I expected. Do you think this makes sense?

  • GPT-3.5: NPMI 0.1267, diversity 0.9851
  • MMR: NPMI 0.2625, diversity 0.7263
  • KeyBERT: NPMI 0.3027, diversity 0.6421

I had expected the NPMI value for GPT-3.5 to be higher.

TalaN1993 avatar Nov 07 '24 18:11 TalaN1993

It may be worthwhile to do a deep-dive into how topic coherence (and diversity) metrics work. They assume we have a list of keywords as the main representation of each topic. That is true for MMR and KeyBERT, but not for GPT-3.5, since it only generates a single label rather than a set of words.
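To make that concrete, OCTIS scores a `model_output` dictionary whose `topics` entry is a list of keyword lists, so a representation that yields one label per topic gives NPMI almost no word co-occurrence to measure. A rough sketch (class names and arguments are from the OCTIS API as I recall it, so double-check against your version; `tokenized_corpus` stands in for your own tokenized documents):

```python
from octis.evaluation_metrics.coherence_metrics import Coherence
from octis.evaluation_metrics.diversity_metrics import TopicDiversity

# Each topic must be a list of keywords; a single GPT-generated label per topic
# leaves NPMI with almost no word co-occurrence statistics to work with.
model_output = {
    "topics": [
        ["economy", "market", "inflation", "prices", "growth"],
        ["match", "team", "season", "coach", "league"],
    ]
}

# tokenized_corpus: the tokenized documents the topics were trained on (list of token lists)
npmi = Coherence(texts=tokenized_corpus, topk=5, measure="c_npmi")
diversity = TopicDiversity(topk=5)

print("NPMI:", npmi.score(model_output))
print("Diversity:", diversity.score(model_output))
```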

MaartenGr avatar Nov 08 '24 07:11 MaartenGr

Hello MaartenGr, I have a question. I understand that in LDA, the input data is typically based on the Bag-of-Words (BoW) representation. My question is: if we change the vector representation from BoW to TF-IDF or SentenceTransformer embeddings and then feed that to the LDA model, does this approach make sense? I am looking for ways to improve the results of LDA.

TalaN1993 avatar Nov 29 '24 00:11 TalaN1993

@TalaN1993 LDA is quite a different method compared to BERTopic and I don't think it would work that easily with embeddings without any significant changes. I believe there is something called LDA2Vec or something similar that you could research.
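For completeness, this is the kind of count-based input classical LDA expects; dense SentenceTransformer embeddings are real-valued (including negatives) and don't represent word counts, so they can't simply be swapped in. A small scikit-learn sketch, just to illustrate the point:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the economy and the stock market",
    "the football team won the match",
    "inflation and interest rates are rising",
]

# LDA models documents as mixtures over word counts, so the input must be a
# non-negative document-term matrix, not dense sentence embeddings.
vectorizer = CountVectorizer(stop_words="english")
bow = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=42)
doc_topic = lda.fit_transform(bow)

words = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_words = [words[i] for i in weights.argsort()[::-1][:3]]
    print(f"Topic {topic_idx}: {top_words}")
```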

MaartenGr avatar Nov 29 '24 06:11 MaartenGr