KooHong Jung

3 issues opened by KooHong Jung

The paper says that the hidden state size of the GRU at the inter-epoch level for the raw data is 64, but in the config that you used on GitHub...
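For reference, a minimal sketch of the kind of hyperparameter this question is about; the framework, config key, and input sizes below are assumptions for illustration, not the repository's actual code or values.

    # Hypothetical illustration of an inter-epoch (sequence-level) GRU whose
    # hidden state size comes from a config value; names and values are
    # assumptions, not the repository's actual configuration.
    import torch
    import torch.nn as nn

    config = {
        "inter_epoch_hidden_size": 64,  # the value the paper states, per the question
    }

    # Bidirectional GRU over the sequence of per-epoch feature vectors.
    inter_epoch_gru = nn.GRU(
        input_size=128,                 # placeholder epoch-embedding size
        hidden_size=config["inter_epoch_hidden_size"],
        batch_first=True,
        bidirectional=True,
    )

    epoch_embeddings = torch.randn(8, 20, 128)   # (batch, epochs per sequence, features)
    outputs, _ = inter_epoch_gru(epoch_embeddings)
    print(outputs.shape)                          # (8, 20, 2 * 64)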

For this preprocessing code:

    % ensure the signal is calibrated to microvolts
    if(max(chan_data_eeg)
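As context for the question, a rough sketch of what such an amplitude-based unit check usually does: if the peak amplitude looks too small to be microvolts, the signal is assumed to be in volts and is rescaled. This is a Python/numpy sketch under that assumption; the threshold, scale factor, and function name are not taken from the repository's MATLAB code.

    # Sketch of a microvolt-calibration check; threshold and scale are assumptions.
    import numpy as np

    def ensure_microvolts(chan_data_eeg, threshold=1.0):
        """Return the EEG channel scaled to microvolts if it appears to be in volts."""
        if np.max(np.abs(chan_data_eeg)) < threshold:   # peaks below 1 suggest volts
            return chan_data_eeg * 1e6                  # volts -> microvolts
        return chan_data_eeg

    # Example: a signal stored in volts (~50 uV peaks) gets rescaled.
    signal_in_volts = 50e-6 * np.sin(np.linspace(0, 10, 3000))
    print(np.max(ensure_microvolts(signal_in_volts)))   # ~50.0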

The paper says that the attention size is 64, but in the config file the attention size is 32. Which one is correct?
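To make clear which quantity the question refers to, here is a generic additive-attention sketch in which attention_size is the width of the learned projection used to score each time step. It is an illustrative sketch only, not the repository's implementation; the framework and names are assumptions.

    # Generic additive attention; `attention_size` is the projection width whose
    # value (64 in the paper vs. 32 in the config, per the question) is in doubt.
    import torch
    import torch.nn as nn

    class AdditiveAttention(nn.Module):
        def __init__(self, input_dim, attention_size):
            super().__init__()
            self.proj = nn.Linear(input_dim, attention_size)      # project to attention space
            self.score = nn.Linear(attention_size, 1, bias=False) # scalar score per step

        def forward(self, x):                                 # x: (batch, time, input_dim)
            scores = self.score(torch.tanh(self.proj(x)))     # (batch, time, 1)
            weights = torch.softmax(scores, dim=1)             # attention weights over time
            return (weights * x).sum(dim=1)                     # weighted sum: (batch, input_dim)

    attn = AdditiveAttention(input_dim=128, attention_size=64)
    pooled = attn(torch.randn(8, 29, 128))
    print(pooled.shape)                                        # (8, 128)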