torcheeg
Ask about model DGCNN implementation
Hi, I saw that the model DGCNN is already implemented in the torcheeg library with the SEED dataset:
from torcheeg import transforms
from torcheeg.datasets import SEEDDataset
from torcheeg.models import DGCNN

dataset = SEEDDataset(io_path='./seed',
                      root_path='./Preprocessed_EEG',
                      offline_transform=transforms.BandDifferentialEntropy(band_dict={
                          "delta": [1, 4],
                          "theta": [4, 8],
                          "alpha": [8, 14],
                          "beta": [14, 31],
                          "gamma": [31, 49]
                      }),
                      online_transform=transforms.Compose([
                          transforms.ToTensor()
                      ]),
                      label_transform=transforms.Compose([
                          transforms.Select('emotion'),
                          transforms.Lambda(lambda x: x + 1)
                      ]))
model = DGCNN(in_channels=5, num_electrodes=62, hid_channels=32, num_layers=2, num_classes=2)
I would like to know whether the above reproduces the following paper precisely, including both data preprocessing and model architecture, or only the model architecture (with the preprocessing above being just an example).
https://ieeexplore.ieee.org/abstract/document/8320798
Please help me clarify that. Thank you very much!
Hi, an update: the paper's authors investigate five kinds of features to evaluate the proposed EEG emotion recognition method, i.e., the differential entropy feature (DE), the power spectral density feature (PSD), the differential asymmetry feature (DASM), the rational asymmetry feature (RASM), and the differential caudality feature (DCAU). But torcheeg seems to provide only the first two (DE and PSD). What about the other features?
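For what it's worth, the asymmetry and caudality features in that paper are derived from the DE features of paired electrodes, so they can be computed from torcheeg's DE output. Here is a minimal NumPy sketch; the electrode index pairs below are hypothetical placeholders, not SEED's actual montage (the paper uses 27 symmetric pairs and 23 frontal-posterior pairs):

```python
import numpy as np

# Hypothetical DE feature array: (num_electrodes, num_bands).
# Offset added so the RASM ratio never divides by zero in this toy example.
de = np.random.rand(62, 5) + 0.1

# Hypothetical symmetric (left, right) electrode index pairs.
left_idx = [0, 2, 4]
right_idx = [1, 3, 5]

dasm = de[left_idx] - de[right_idx]   # differential asymmetry: DE(left) - DE(right)
rasm = de[left_idx] / de[right_idx]   # rational asymmetry: DE(left) / DE(right)

# Hypothetical frontal-posterior pairs for differential caudality.
frontal_idx = [0, 1]
posterior_idx = [60, 61]
dcau = de[frontal_idx] - de[posterior_idx]  # DE(frontal) - DE(posterior)
```

The result has one row per electrode pair and one column per frequency band, which can be flattened and fed to a classifier the same way as the DE features.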
Hi @tiensu!
I have tried to replicate the results of the paper you mentioned. Concretely, I have tried to implement the leave-one-subject-out experiment (LOSO), but I could not get the same results as the ones reported in the paper.
Have you tried it as well? Thanks for your response!
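In case it helps others reproduce the protocol: leave-one-subject-out is just an iteration over subjects, holding each one out for testing in turn. A minimal pure-Python sketch (the subject IDs and data layout are illustrative, not torcheeg's API, though torcheeg also ships a model_selection module for this):

```python
def loso_splits(samples, subject_of):
    """Yield (held_out_subject, train, test) per fold, holding out one subject each time."""
    subjects = sorted({subject_of(s) for s in samples})
    for held_out in subjects:
        train = [s for s in samples if subject_of(s) != held_out]
        test = [s for s in samples if subject_of(s) == held_out]
        yield held_out, train, test

# Toy example: 3 subjects, 2 samples each, tagged as (subject_id, sample_index).
samples = [(subj, i) for subj in ("s1", "s2", "s3") for i in range(2)]
folds = list(loso_splits(samples, subject_of=lambda s: s[0]))
```

Note that with LOSO every test subject is entirely unseen during training, which is usually much harder than the within-subject splits people often benchmark with, and may partly explain gaps to the reported numbers.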
I also trained DGCNN on SEEDDataset for 50 epochs. During training, the accuracy on the validation set was very high, reaching 94.9%, but the accuracy on the test set was very low, only 62.7%. I don't know why~
What hyperparameters did you set to create the model? In my personal experience, I had to remove the BN layer to make it train, and normalise the data offline. But I still do not get the results reported in the paper...
Any comment, help, or suggestion is more than welcome! :D
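To make "normalise the data offline" concrete, one common interpretation is a per-electrode, per-band z-score fitted on the training set only and then reused for validation/test. A minimal NumPy sketch (the array shapes mirror torcheeg's DE output, but this is an illustration, not the library's API):

```python
import numpy as np

def fit_normaliser(train_feats):
    """Fit per-(electrode, band) mean/std on training features of
    shape (num_samples, num_electrodes, num_bands)."""
    mean = train_feats.mean(axis=0)
    std = train_feats.std(axis=0) + 1e-8  # epsilon avoids division by zero
    return mean, std

def apply_normaliser(feats, mean, std):
    """Z-score features with statistics fitted on the training set."""
    return (feats - mean) / std

# Toy data: 100 samples, 62 electrodes, 5 bands.
train = np.random.rand(100, 62, 5)
mean, std = fit_normaliser(train)
norm = apply_normaliser(train, mean, std)
```

Fitting the statistics on the training split only matters especially under LOSO, where reusing test-subject statistics would leak subject identity into training.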
Actually I didn't get a good result either: the accuracy looks good during training, but drops quickly at test time. In your solution you say you had to remove the BN layer to make it train; I don't think that is suitable for getting a good result. You could keep a simple BN layer~
Could you email me your source code, so I can check it and see where the problem might be?