
Stagnant training accuracy and loss?

Open ayay-netizen opened this issue 6 years ago • 5 comments

First off, I want to say that I think the idea behind this project is amazing; thank you for creating and releasing it.

However, I'm struggling to use it properly.

First, I tried to run it locally on my Windows x64 machine. I followed the instructions for the muse_p3 example and used the example commands in the README. There were no errors, but the training accuracy and loss were essentially stagnant, and accuracy never rose much above a low percentage. I then changed the model_type to a 3DCNN, and subsequently to an LSTM, to see whether those models could better fit the data, but got similar behavior. I had assumed the included example data would show the model reaching a high level of competency, so I felt something was off.

Assuming there must have been something wrong with my setup, I then went to the Colab example. Since I'm interested in Muse data, I followed the link to the Muse Jupyter notebook. On Colab, just running all the cells with their defaults, I got the same behavior: validation accuracy barely went over 50%. Is this really intended? I also tried increasing the number of epochs to 350 in the notebook, but again to no avail.
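As a quick sanity check (my own back-of-envelope numbers; the trial count of 100 is a placeholder, not something from the notebook), accuracy this close to 50% on a two-class problem is hard to distinguish from pure guessing:

```python
import math

def chance_band(n_trials, p_chance=0.5, z=1.96):
    """Approximate 95% interval for accuracy under pure guessing.

    A coin-flip classifier's accuracy over n trials has standard
    deviation sqrt(p * (1 - p) / n), so the band is p +/- z * that.
    """
    half_width = z * math.sqrt(p_chance * (1 - p_chance) / n_trials)
    return (p_chance - half_width, p_chance + half_width)

lo, hi = chance_band(100)  # e.g. 100 validation epochs (assumed)
print(f"chance band for 100 trials: {lo:.3f} - {hi:.3f}")
```

With 100 validation trials the band is roughly 0.402 to 0.598, so "barely over 50%" is entirely consistent with the model having learned nothing.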

Please give me guidance on how I can get better results with this incredible pipeline.

Thank you!

ayay-netizen avatar Sep 14 '19 19:09 ayay-netizen

Thanks for your comment. Unfortunately, I have not yet found any model architecture that can reliably predict the conditions in that data; perhaps you will have more luck.

kylemath avatar Sep 14 '19 21:09 kylemath

Oh, okay. So the program is not supposed to work on that example dataset? Could you point me to a dataset that demonstrates the network getting a decent result?

ayay-netizen avatar Sep 14 '19 22:09 ayay-netizen

Making a simulated data set will work.
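For example, a minimal numpy-only sketch of a simulated two-class set (the shapes, the 10 Hz frequency, and the injected amplitude are all made up for illustration; this is not DeepEEG's own simulation code): class 1 epochs get an extra 10 Hz burst on a couple of channels, class 0 epochs are noise only.

```python
import numpy as np

def simulate_eeg(n_epochs=200, n_channels=4, n_times=256, sfreq=256.0, seed=0):
    """Return (X, y): epochs of Gaussian noise, with a 10 Hz burst
    added to the first two channels of class-1 epochs."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_times) / sfreq
    X = rng.standard_normal((n_epochs, n_channels, n_times))
    y = rng.integers(0, 2, size=n_epochs)
    burst = np.sin(2 * np.pi * 10.0 * t)   # 10 Hz oscillation
    X[y == 1, :2, :] += 2.0 * burst        # inject into first 2 channels
    return X, y

X, y = simulate_eeg()
```

Because the injected signal is strong relative to the noise, even simple band-power features separate the classes, so any reasonable model should train well above chance on it; that makes it a useful smoke test for the pipeline itself.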

kylemath avatar Sep 14 '19 22:09 kylemath

But is there any real data that it works on? For simulated data, it could just be finding patterns that only exist because of the way the simulation algorithms work.

ayay-netizen avatar Sep 14 '19 23:09 ayay-netizen

Yes, sure, it could be. There are plenty of EEG datasets available online that you could try.

kylemath avatar Sep 14 '19 23:09 kylemath