
Time-Frequency Consistency Loss is not utilized

Open · xiaoyuan7 opened this issue 1 year ago · 10 comments

I noticed that the Time-Frequency Consistency Loss is not being used in your code. Could you please confirm whether this is intentional? If the loss was left out on purpose, could you explain the reason behind it and its potential impact on the model's performance?

xiaoyuan7 commented on Mar 28, 2023

Hello, I noticed this too. I modified the loss function to include the time-frequency consistency loss, and the final experimental results differed significantly from those in the paper. I hope the author can clear this up for us.
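For context, a triplet-style consistency term along the following lines is one possible reading of the paper. This is only a sketch: the tensor names (z_t, z_f, z_t_aug, z_f_aug), the margin delta, and the weighting lam are placeholders, not the repository's actual variables.

```python
import torch
import torch.nn.functional as F

def consistency_loss(z_t, z_f, z_t_aug, z_f_aug, delta=1.0):
    """Hypothetical triplet-style consistency term (not the repo's code).

    Encourages the distance between the time and frequency embeddings of
    the same, unaugmented sample (z_t, z_f) to be smaller, by a margin
    `delta`, than the distance of mismatched pairs built from the
    augmented views (z_t_aug, z_f_aug). All inputs are (batch, dim).
    """
    d_pos = F.pairwise_distance(z_t, z_f)        # matched time/frequency pair
    d_neg1 = F.pairwise_distance(z_t, z_f_aug)   # mismatched pairs
    d_neg2 = F.pairwise_distance(z_t_aug, z_f)
    loss = F.relu(d_pos - d_neg1 + delta) + F.relu(d_pos - d_neg2 + delta)
    return loss.mean()

# The total pre-training loss would then combine the contrastive terms
# with this one, e.g.:
# loss = loss_time + loss_freq + lam * consistency_loss(z_t, z_f, z_t_aug, z_f_aug)
```

Whether something like this matches the intended formulation is exactly what we would like the author to confirm.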

1057699668 commented on Mar 30, 2023

Were you able to get good results on the other three experiments? How did you set the parameters?

yuyunannan commented on Mar 30, 2023

> Were you able to get good results on the other three experiments? How did you set the parameters?

Sorry, I can't reproduce the results of the other three experiments either. I can only reproduce the one-to-one result from SleepEEG to Epilepsy with the original model parameter settings.

1057699668 commented on Mar 30, 2023

> Were you able to get good results on the other three experiments? How did you set the parameters?
>
> Sorry, I can't reproduce the results of the other three experiments either. I can only reproduce the one-to-one result from SleepEEG to Epilepsy with the original model parameter settings.

I also tried pre-training and fine-tuning with other datasets, but the performance was poor.

1057699668 commented on Mar 30, 2023

> I also tried pre-training and fine-tuning with other datasets, but the performance was poor.

I have made many attempts; only the SleepEEG experiment comes close to the results in the paper. The other results are bad.

yuyunannan commented on Mar 30, 2023

> I also tried pre-training and fine-tuning with other datasets, but the performance was poor.
>
> I have made many attempts; only the SleepEEG experiment comes close to the results in the paper. The other results are bad.

Perhaps only the author can answer these questions for us.

1057699668 commented on Mar 30, 2023

> I also tried pre-training and fine-tuning with other datasets, but the performance was poor.
>
> I have made many attempts; only the SleepEEG experiment comes close to the results in the paper. The other results are bad.

Have you solved the subset problem?

zzj2404 commented on Apr 1, 2023

Sorry, I haven't solved the subset problem yet. Maybe the author only gave the correct settings for the SleepEEG → Epilepsy experiment.


1057699668 commented on Apr 1, 2023


The author's code seems to have some problems. It uses PyTorch's TransformerEncoderLayer in the backbone network but does not set batch_first=True, even though, given the author's data format, the batch dimension comes first. It also does not seem reasonable to feed a single-channel time series into a TransformerEncoder.
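To make the batch_first point concrete, here is a minimal sketch (not the repository's code; the shapes are made up) of how the layer would need to be constructed for inputs laid out as (batch, seq_len, features):

```python
import torch
import torch.nn as nn

# Hypothetical input: a batch of 128 windows, 178 time steps each,
# already projected to a 64-dimensional feature space.
x = torch.randn(128, 178, 64)  # (batch, seq_len, d_model)

# Without batch_first=True, TransformerEncoderLayer expects
# (seq_len, batch, d_model), so it would silently attend across the
# batch dimension instead of across time.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

out = encoder(x)  # (128, 178, 64): attention runs over the 178 time steps
```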

JohnLone00 commented on Apr 9, 2023

Yes, this has also been mentioned in issue #19. I agree that the single-channel time-series input doesn't make sense, especially since the transformer is currently coded such that the "time" dimension the self-attention mechanism runs over is actually the single channel. As a result, the sequence the attention attends over has length 1.
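As a quick sanity check, a sketch with hypothetical shapes (not code from the repository): when the single channel ends up where the layer expects the sequence, each position can only attend to itself, so every attention weight is trivially 1 and the attention contributes nothing.

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

# If the single channel is treated as the "sequence", seq_len == 1.
x = torch.randn(8, 1, 64)          # (batch, seq_len=1, embed_dim)
_, weights = attn(x, x, x)         # self-attention over one token

print(weights.shape)               # torch.Size([8, 1, 1])
print(weights.squeeze())           # all ones: each token attends only to itself
```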

maxxu05 commented on Apr 18, 2023