robintibor


See https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730 -> ConvNet Training -> Cropped Training for the original motivation:

> The cropped training strategy uses crops, that is, sliding input windows within the trial, which leads to many more...
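To make the idea concrete, here is a minimal sketch (not braindecode's actual implementation) of how sliding crops multiply the number of training examples per trial; `trial`, `crop_size`, and `stride` are illustrative names:

```python
import numpy as np

# A single trial: (n_channels, n_times), e.g. 1000 time steps of 44-channel EEG
trial = np.random.randn(44, 1000)

crop_size = 500  # length of each sliding input window
stride = 1       # shift between consecutive crops

# Every possible window start yields one training example
crops = [
    trial[:, start:start + crop_size]
    for start in range(0, trial.shape[1] - crop_size + 1, stride)
]
print(len(crops))  # 501 crops from one trial instead of a single example
```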

Based on all of this, how about:

* Trial
* (Compute) Window -> we use `compute_window` in some public function calls to give a strong hint at the beginning (see the sketch below), internally...
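As an illustration of the naming idea (all names here are hypothetical, not existing braindecode API), a public function could carry the `compute_window` hint in its signature while internal code uses the shorter term:

```python
import numpy as np

def create_compute_windows(trial, compute_window_size, compute_window_stride):
    """Split one trial into (compute) windows.

    Hypothetical public signature: the `compute_window_*` names give a
    strong hint at the call site, while internally we just say `window`.
    """
    window_size, stride = compute_window_size, compute_window_stride
    n_times = trial.shape[-1]
    return [
        trial[..., start:start + window_size]
        for start in range(0, n_times - window_size + 1, stride)
    ]

windows = create_compute_windows(np.zeros((44, 1000)), 500, 250)
```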

So we could still go through some namings this sprint, like the `n_` logic for cardinalities etc., if we want, but in a backward-compatible way via deprecation. Feel unsure about...
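A backward-compatible rename via deprecation could look roughly like this sketch (function and parameter names are made up for illustration):

```python
import warnings

def create_windows(n_windows=None, num_windows=None):
    """Hypothetical example: renaming `num_windows` to the `n_` convention."""
    if num_windows is not None:
        warnings.warn(
            "`num_windows` is deprecated and will be removed; "
            "use `n_windows` instead.",
            DeprecationWarning,
        )
        n_windows = num_windows
    return n_windows
```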

See also https://github.com/braindecode/braindecode/issues/28

We still didn't do this. I think we should set it as the default in https://github.com/braindecode/braindecode/blob/49b770ef5226c64c28a1bdea6c6dcfde2567a736/braindecode/classifier.py#L45-L46 and https://github.com/braindecode/braindecode/blob/master/braindecode/regressor.py#L44-L45

So set it similarly to `iterator_train__shuffle`. You may have to adjust the acceptance tests and update the numbers there; make it easy for yourself by using the existing test code and printing the new values...
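A minimal sketch of what setting such a default can look like in a skorch subclass (the actual code at the linked lines in `classifier.py` may differ):

```python
from skorch import NeuralNetClassifier

class EEGClassifier(NeuralNetClassifier):
    def __init__(self, *args, **kwargs):
        # Shuffle training batches by default, but let callers override it
        kwargs.setdefault("iterator_train__shuffle", True)
        super().__init__(*args, **kwargs)
```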

Hi, thanks for your interest! So there is code to reproduce the HGD results here: https://gist.github.com/robintibor/6a95a85088e651392c1bb4f912d1528e#file-run_and_train_amp_grad-py-L242-L256 As you may notice, this is using cropped training. For trialwise/non-cropped training, the current...
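Conceptually, the difference between trialwise and cropped training is where the loss is computed; a minimal PyTorch sketch (braindecode has its own cropped-loss machinery, so treat the names here as placeholders):

```python
import torch

def trialwise_loss(model, trial_batch, targets, loss_fn):
    # Trialwise: one prediction per full trial
    return loss_fn(model(trial_batch), targets)

def cropped_loss(model, crop_batch, targets, loss_fn):
    # Cropped: the model emits one prediction per receptive-field position;
    # average the per-crop predictions before computing the loss.
    preds = model(crop_batch)          # (batch, n_classes, n_crop_preds)
    return loss_fn(preds.mean(dim=-1), targets)
```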

Hm, so the hyperparams might be scattered among these files:

* https://github.com/robintibor/braindevel/blob/21f58aa74fdd2a3b03830c950b7ab14d44979045/braindecode/configs/experiments/paper/bci_competition/cnt/deep_4_cnt_net_car.yaml
* https://github.com/robintibor/braindevel/blob/21f58aa74fdd2a3b03830c950b7ab14d44979045/braindecode/configs/experiments/4sec_movements/cnt/cnt_4_layers_simple.yaml
* https://github.com/robintibor/braindevel/blob/21f58aa74fdd2a3b03830c950b7ab14d44979045/braindecode/configs/experiments/4sec_movements/cnt/defaults.yaml

Since I cannot see them there, assume we used the default learning rate (1e-3) and weight decay 0...
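For reference, that corresponds to roughly this optimizer setup (PyTorch's Adam defaults, written out explicitly; `model` is a placeholder):

```python
import torch
from torch import nn

model = nn.Linear(10, 4)  # placeholder for the actual ConvNet

# PyTorch Adam defaults: lr=1e-3, weight_decay=0
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0)
```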

Trying to look at this a little bit myself