Hossein Salahshoor Gavalan

8 comments by Hossein Salahshoor Gavalan

Thank you for your answer. Actually, I compared the output of KeyBERT with and without `candidates`, and the results were exactly the same! Why is that? On the other hand, the `seed_words`...

> That is difficult to say without seeing the code, but it might be a result of the candidates being the same as the text's individual words. Could you...
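A minimal sketch of that check, assuming a recent KeyBERT release in which `extract_keywords` accepts `candidates` and `seed_keywords` (KeyBERT's name for the seed words mentioned above); the document text is only a placeholder:

```python
from keybert import KeyBERT

doc = "Natural language processing lets computers analyse large volumes of text automatically."

kw_model = KeyBERT()

# Baseline: KeyBERT builds its own candidate words from the document.
baseline = kw_model.extract_keywords(doc, keyphrase_ngram_range=(1, 1), top_n=5)

# Explicit candidates: if this list is essentially the document's own words,
# the two runs can end up identical, as suggested in the reply above.
candidates = list({w.lower().strip(".,") for w in doc.split()})
with_candidates = kw_model.extract_keywords(doc, candidates=candidates, top_n=5)

# Seed words (passed as `seed_keywords`) nudge the ranking toward a domain
# of interest rather than restricting the candidate set.
with_seeds = kw_model.extract_keywords(doc, seed_keywords=["language", "text"], top_n=5)

print(baseline)
print(with_candidates)
print(with_seeds)
```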

@NarineK Hi, sorry to bother you. It seems you are the one with enough knowledge of this issue :) Could you please help me with it?

> @Hossein-1991 Can you paste the error stack trace for me to confirm where the error is thrown?
>
> You are using `IntegratedGradients`, but your input `tokens`, word ids,...

> I'm actually facing the same problem. @Hossein-1991, could you show me how you managed to convert your tensors to `long`? I also want to know whether or not you...

> Here is a minimal example with Captum and sentence classification.
>
> Hope this helps:
>
> ```
> import torch
> from transformers import AutoModelForSequenceClassification, AutoTokenizer
> ...
> ```
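Since the quoted snippet is cut off, here is a self-contained sketch along the same lines. The model name below is only an illustrative choice, and `LayerIntegratedGradients` over the embedding layer is one common way to avoid attributing directly through integer token ids:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from captum.attr import LayerIntegratedGradients

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "The movie was surprisingly good."
encoding = tokenizer(text, return_tensors="pt")
input_ids = encoding["input_ids"]            # already torch.long
attention_mask = encoding["attention_mask"]

def forward_func(input_ids, attention_mask):
    # Return the logit of the positive class (index 1 for this model).
    return model(input_ids=input_ids, attention_mask=attention_mask).logits[:, 1]

# Attribute with respect to the embedding layer, not the raw integer ids.
lig = LayerIntegratedGradients(forward_func, model.distilbert.embeddings)

# Baseline: the same sequence filled with pad tokens.
baseline_ids = torch.full_like(input_ids, tokenizer.pad_token_id)

attributions = lig.attribute(
    inputs=input_ids,
    baselines=baseline_ids,
    additional_forward_args=(attention_mask,),
)

# Sum over the embedding dimension to get one score per token.
scores = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(input_ids.squeeze(0))
for token, score in zip(tokens, scores.tolist()):
    print(f"{token:>12s} {score:+.4f}")
```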

> @Hossein-1991 First of all, based on your code, you converted your tensors to `long` only for your model, not for Captum: `y = model(ids = x.long(), masks = y.long())`. So...
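To illustrate the point in that quote, a small sketch (the names `x`, `y` and the keyword arguments `ids`/`masks` are taken from the quoted line and are otherwise hypothetical): the tensors handed to Captum's `attribute` call need the same `long` conversion as the ones passed to the model.

```python
import torch

# Hypothetical tensors following the quoted snippet: x = token ids, y = attention masks.
x = torch.randint(0, 30000, (1, 128))   # token ids
y = torch.ones(1, 128)                   # attention mask, possibly float

# Convert once, up front, so both the model and Captum see torch.long tensors.
input_ids = x.long()
attention_mask = y.long()

# Model call, as in the quote (model is assumed to exist):
# logits = model(ids=input_ids, masks=attention_mask)

# The same long tensors then go to Captum, e.g. with LayerIntegratedGradients:
# attributions = lig.attribute(inputs=input_ids,
#                              additional_forward_args=(attention_mask,))
```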

@aobo-y I have long texts (each has about 2,000 tokens) and want to extract keywords (and phrases) from them. Which method do you suggest?