David Nicholson
Add how-to guide(s) to the docs, e.g.: how to estimate how much training data I need; how to test the effects of different hyperparameters
rename 'unlabeled_label' -> 'unlabeled_class'; define it as a constant and use the constant for default args
Rationale: it's confusing to talk about something being "labeled as unlabeled", and the rename is by analogy with the "background" class used in object detection. - [ ] define a constant in `vak.constants` like BACKGROUND_CLASS_NAME...
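A minimal sketch of the idea, assuming a module-level constant used as a default argument instead of a repeated string literal; the name `UNLABELED_CLASS_NAME` and the function below are hypothetical, not vak's actual API:

```python
# Hypothetical constant, e.g. in vak.constants (name is an assumption).
UNLABELED_CLASS_NAME = "unlabeled"


def to_labeled_timebins(labels, onsets, offsets, timebins,
                        unlabeled_class=UNLABELED_CLASS_NAME):
    """Map segment labels onto time bins; any bin not inside a
    segment gets the `unlabeled_class` ("background") label.
    Sketch only, not vak's implementation."""
    out = [unlabeled_class] * len(timebins)
    for label, on, off in zip(labels, onsets, offsets):
        for i, t in enumerate(timebins):
            if on <= t < off:
                out[i] = label
    return out
```

Using one constant everywhere means callers and defaults can never drift apart on the spelling of the background class name.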
Occasionally I get an error when running `vak predict` about a `bad CRC-32 for file 's.npy'`. I have noticed that this happens with a dataset where I have already run...
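Since an `.npz` file is just a zip archive, one way to detect this kind of corruption before `np.load` blows up is a pre-flight CRC check. A hypothetical helper (not part of vak) using the standard library's `zipfile.ZipFile.testzip`:

```python
import io
import zipfile

import numpy as np


def find_corrupt_member(npz_path):
    """Return the name of the first member with a bad CRC-32 in an
    .npz archive (which is a zip under the hood), or None if the
    archive is intact. Hypothetical pre-flight check, not vak's API."""
    with zipfile.ZipFile(npz_path) as zf:
        return zf.testzip()
```

If this returns a member name, the array file inside the archive is corrupt and the dataset file likely needs to be regenerated.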
especially for big datasets, it's frustrating to wait a long time while spectrograms are generated, only to have `split` error out because the `train_dur` split and `val_dur` split summed are larger...
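The fix would be to fail fast: validate the requested split durations against the total dataset duration before any spectrograms are generated. A minimal sketch, with assumed names (not vak's actual API):

```python
def validate_split_durs(total_dur, train_dur, val_dur, test_dur=0.0):
    """Raise before spectrogram generation if the requested split
    durations cannot fit in the dataset. Hypothetical pre-flight
    check; names are assumptions."""
    requested = train_dur + val_dur + test_dur
    if requested > total_dur:
        raise ValueError(
            f"sum of requested split durations ({requested}) is greater "
            f"than total duration of dataset ({total_dur})"
        )
```

Calling this first means the user finds out about an impossible split in seconds instead of after a long spectrogram-generation run.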
so that something like post-processing can be applied as a set of transforms, e.g. with `torchvision.transforms.Compose`
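A sketch of the idea: chain post-processing steps as composable callables. To keep the example dependency-free, a tiny stand-in for `torchvision.transforms.Compose` is defined here; the post-processing steps themselves are hypothetical examples, not vak's implementation:

```python
class Compose:
    """Minimal stand-in for torchvision.transforms.Compose:
    applies a sequence of callables in order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for transform in self.transforms:
            x = transform(x)
        return x


def squash_repeats(labels):
    """Collapse consecutive duplicate labels into one (sketch)."""
    out = []
    for lab in labels:
        if not out or out[-1] != lab:
            out.append(lab)
    return out


def drop_unlabeled(labels, unlabeled="unlabeled"):
    """Remove the unlabeled/background class from a label sequence (sketch)."""
    return [lab for lab in labels if lab != unlabeled]


# post-processing pipeline built the same way as a transform pipeline
postprocess = Compose([squash_repeats, drop_unlabeled])
```

Expressing post-processing this way lets users add, remove, or reorder steps without changes to the prediction code itself.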
that lets the user specify which metric should be monitored at the validation step, e.g. accuracy or segment error rate. We want the best segment error rate possible, but we only save maximum...
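One way this could look: a small monitor that tracks a user-chosen metric and whether lower or higher is better, deciding when a checkpoint counts as "best". All names here are assumptions, not vak's actual API:

```python
class BestCheckpointMonitor:
    """Track a user-chosen validation metric and report improvements.
    mode="min" for metrics like segment error rate (lower is better),
    mode="max" for metrics like accuracy. Sketch only."""
    def __init__(self, metric_name="segment_error_rate", mode="min"):
        if mode not in ("min", "max"):
            raise ValueError(f"mode must be 'min' or 'max', got {mode!r}")
        self.metric_name = metric_name
        self.mode = mode
        self.best = float("inf") if mode == "min" else float("-inf")

    def is_improvement(self, metrics):
        """Return True (and update the best value) if the monitored
        metric improved; the caller would save a checkpoint then."""
        value = metrics[self.metric_name]
        improved = value < self.best if self.mode == "min" else value > self.best
        if improved:
            self.best = value
        return improved
```

With something like this, saving the checkpoint with the lowest segment error rate is just `BestCheckpointMonitor("segment_error_rate", mode="min")`, instead of hard-coding "maximum accuracy".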
this only works by accident, because we're usually only training one model, but it will be a problem when trying to compare multiple models
haven't confirmed this yet, but that's because we just thought of it; writing it down here. @yardencsGitHub observed more crashes with smaller window size --> smaller windows increase the likelihood of opening the same file,...