DeepHyperX

Deep learning toolbox based on PyTorch for hyperspectral data classification.

32 DeepHyperX issues

kwargs['learning_rate'] --> kwargs['lr']
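A minimal sketch of how this rename could be handled while keeping old configs working; `normalize_kwargs` is a hypothetical helper, not part of the toolbox:

```python
def normalize_kwargs(kwargs):
    """Map the legacy 'learning_rate' key onto the 'lr' key that
    PyTorch optimizers expect, without mutating the caller's dict."""
    kwargs = dict(kwargs)
    if "learning_rate" in kwargs:
        # An explicit 'lr' key, if present, takes precedence.
        kwargs.setdefault("lr", kwargs.pop("learning_rate"))
    return kwargs
```

The sanitized dict can then be passed straight to e.g. `torch.optim.SGD(model.parameters(), **kwargs)`.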

- [x] Rewrite disjoint sampling method
- [x] Move sklearn models into their own file
- [x] Rewrite the Dataset
- [x] Add parallelization option (see #32)
- [x] Use...

The current approach to class balancing is inverse median frequency loss reweighting. Other options could be:

### Resampling
Resample the dataset (e.g. upsample minority classes or downsample majority classes)...

enhancement
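For reference, inverse median frequency reweighting can be sketched as follows (`median_frequency_weights` is an illustrative helper, not the toolbox's actual code); the resulting vector would be passed to `torch.nn.CrossEntropyLoss(weight=...)`:

```python
import numpy as np

def median_frequency_weights(labels, n_classes):
    """Weight each class by (median class frequency / its own frequency),
    so rare classes get proportionally larger loss weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    freqs = counts / counts.sum()
    median = np.median(freqs[freqs > 0])
    # Classes absent from the training set get weight 0.
    return np.where(freqs > 0, median / np.maximum(freqs, 1e-12), 0.0)
```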

`torchvision` defines [Transforms](https://pytorch.org/docs/stable/torchvision/transforms.html) objects to apply data augmentation and other transformations to data. We could and should define our own custom Transforms. Pros:
- easier to reuse the data augmentation...

enhancement
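A hypothetical custom Transform in the `torchvision` style, assuming patches are `(bands, height, width)` arrays and that the label mask must be flipped jointly with the data (the class name and signature are illustrative):

```python
import numpy as np

class RandomHorizontalFlip:
    """Callable transform: with probability p, flip a hyperspectral
    patch and its label mask along the width axis together."""

    def __init__(self, p=0.5, rng=None):
        self.p = p
        self.rng = rng or np.random.default_rng()

    def __call__(self, data, label):
        if self.rng.random() < self.p:
            data = data[:, :, ::-1].copy()   # flip width axis of (bands, h, w)
            label = label[:, ::-1].copy()    # flip width axis of (h, w)
        return data, label
```

Such objects compose naturally, e.g. via `torchvision.transforms.Compose`, and can be unit-tested in isolation.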

Currently the dataset is [normalized into (0,1)](https://github.com/nshaud/DeepHyperX/blob/99792c5aec51a9602099b4b1c6618af01b652e09/datasets.py#L319). This is mostly fine, but we should be able to use other normalizations or even no normalization at all. TODO:
- [ ]...

enhancement

In this commit, common normalization methods are implemented in the `normalise_image` function of `utils.py`, and a `normalization` argument is added to `main.py`.
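A sketch of what such a dispatcher could look like (the actual `normalise_image` in `utils.py` may differ; method names here are assumptions):

```python
import numpy as np

def normalise_image(img, method="minmax"):
    """Normalize a hyperspectral cube: 'minmax' rescales into (0, 1),
    'standard' zero-centers and unit-scales, 'none' leaves data untouched."""
    img = np.asarray(img, dtype=np.float64)
    if method == "minmax":
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    if method == "standard":
        return (img - img.mean()) / (img.std() + 1e-12)
    if method == "none":
        return img
    raise ValueError(f"Unknown normalization: {method}")
```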

_Maybe out of scope for this toolbox_. The toolbox currently works under the assumption that the models are supervised. Working with unsupervised models (e.g. autoencoders) could be helpful. - [...

enhancement

Currently, the torch DataLoader uses blocking data loading. Although loading is very fast (we store the NumPy arrays in memory), transfer to the GPU and data augmentation (which is done on the CPU)...

enhancement
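For context, the standard PyTorch recipe for overlapping CPU loading with GPU transfer is worker processes plus pinned memory; a minimal configuration sketch (dataset shapes and hyperparameters here are placeholders, not the toolbox's actual values):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1024, 103), torch.randint(0, 9, (1024,)))
loader = DataLoader(dataset, batch_size=64, shuffle=True,
                    num_workers=4,     # worker processes do loading/augmentation in parallel
                    pin_memory=True)   # page-locked buffers enable async host-to-device copies

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for data, target in loader:
    # non_blocking=True lets the copy overlap with compute when memory is pinned
    data = data.to(device, non_blocking=True)
    target = target.to(device, non_blocking=True)
    ...  # forward/backward pass runs while the next batch is prepared
```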

The current spatially disjoint train/test split divides the image in two for each class. However, there might be spatial correlations between the pixels in those regions, and this approach is...

enhancement
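One alternative sometimes used to reduce (though not eliminate) spatial correlation between splits is a coarse block-based partition of the ground truth; a sketch, where `checkerboard_split` and its convention that label 0 means "undefined" are assumptions for illustration:

```python
import numpy as np

def checkerboard_split(gt, block_size=16):
    """Assign coarse image blocks alternately to train or test, so
    train/test pixels of every class come from spatially separated
    regions (correlation along block boundaries still remains)."""
    rows, cols = np.indices(gt.shape)
    train_blocks = ((rows // block_size + cols // block_size) % 2) == 0
    train_gt = np.where(train_blocks, gt, 0)  # 0 = 'undefined' label
    test_gt = np.where(train_blocks, 0, gt)
    return train_gt, test_gt
```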