
Cannot find SWEM-hier

chenghuige opened this issue 7 years ago · 16 comments

Hi, it seems the hierarchical (hier) encoder mentioned in the paper is not in this repo. Very interested to see it :)

chenghuige · Jun 05 '18

Sure, I will merge the hierarchical pooling encoder into the model.py file soon.

dinghanshen · Jun 06 '18

+1 Very interested to see it :)

cuteapi · Jun 12 '18

Is there any progress on this issue?

ariwaranosai · Jun 27 '18

Any progress? Thanks @dinghanshen

hanhao0125 · Jun 28 '18

Still looking forward to this. Thanks @dinghanshen

OliverKehl · Jul 04 '18

Still looking forward to this. Thanks @dinghanshen

pemywei · Jul 05 '18

Still looking forward to this. Thanks @dinghanshen

airlsyn · Jul 05 '18

Still looking forward to this. Thanks @dinghanshen

qichaotang · Jul 24 '18

Still looking forward to this. Thanks @dinghanshen

LittleSummer114 · Jul 29 '18

Please refer to this mean-max block for hierarchical pooling: https://github.com/hanxiao/tf-nlp-blocks/blob/8f14a864a66f976857adc04a5f3f0797dd877731/nlp/pool_blocks.py#L26

It's part of a bigger project called tf-nlp-blocks.
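
For anyone who wants to try it before a merge, here is a minimal NumPy sketch of the idea: average-pool each local window of word embeddings, then max-pool over the per-window averages. The stride of 1 is an assumption (the paper's thread does not pin it down), and the function name and zero-padding are illustrative, not the authors' code:

```python
import numpy as np

def swem_hier(embeddings, window=5, stride=1):
    """Hierarchical pooling sketch: mean over each local window,
    then element-wise max over the window means.
    `embeddings` has shape (seq_len, emb_dim)."""
    seq_len, emb_dim = embeddings.shape
    if seq_len < window:
        # zero-pad short texts so at least one window fits (an assumption)
        embeddings = np.vstack([embeddings,
                                np.zeros((window - seq_len, emb_dim))])
        seq_len = window
    window_means = np.stack([
        embeddings[i:i + window].mean(axis=0)        # local n-gram average
        for i in range(0, seq_len - window + 1, stride)
    ])
    return window_means.max(axis=0)                  # global max over windows

doc = np.random.randn(10, 300)         # 10 words, 300-dim vectors
print(swem_hier(doc, window=5).shape)  # -> (300,)
```

Note that with window size 1 this reduces to SWEM-max, and with a window spanning the whole text it reduces to SWEM-aver.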

hanxiao · Sep 05 '18

Still looking forward to this, thanks @dinghanshen. Or, could you tell us what stride you use when setting the local window size to 5?

windpls · Sep 17 '18

Reading through the paper, I could not find which word embeddings the other models (such as LSTM, CNN) use. It is surprising that SWEM-aver can achieve better results than LSTM or CNN on some tasks, which frankly I don't believe! I have done a lot of NLP tasks, and I know that simply averaging the word embeddings of a text usually performs very poorly. I don't think the comparisons with the other models are fair: they don't even use the same pretrained word vectors. So maybe the GloVe embeddings you used are just better than the embeddings the other models used.
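
For reference, the "simple average" in question (SWEM-aver in the paper) is just element-wise mean pooling over a text's pretrained word vectors; a minimal NumPy sketch, with random vectors standing in for GloVe and illustrative function names:

```python
import numpy as np

def swem_aver(word_vectors):
    # SWEM-aver: element-wise mean over all word vectors in the text
    return word_vectors.mean(axis=0)

def swem_max(word_vectors):
    # SWEM-max: element-wise max, the other parameter-free variant
    return word_vectors.max(axis=0)

doc = np.random.randn(12, 300)  # 12 words, 300-dim embeddings
print(swem_aver(doc).shape, swem_max(doc).shape)  # (300,) (300,)
```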

beyondguo · Jul 02 '19

Hi, the author gave me the SWEM-hier code, but I have not re-run it. I am also confused about why such a simple operation can achieve such good performance. However, our group recently finished some experiments, and simple operations can indeed achieve comparable performance. If you do not believe this result, you can ignore this paper, or you can re-run it to check whether you are right or not.

Best regards,

LittleSummer114 · Jul 02 '19

Hi @LittleSummer114, could you share the code with me? Thanks.

JayYip · Oct 23 '19

Still looking forward to this. Thanks.

LLIKKE · Dec 17 '23

Thank you for your email.

LittleSummer114 · Dec 17 '23