When MPerClassSampler's m is equal to 1, an error occurs
```
assert all(len(v[sub_key])==len_of_list for sub_key in v.keys()) # assert all lists are the same length
AssertionError
```
```
--experiment_name fgvc2
--splits_to_eval [val]
--dataset~OVERRIDE~ {aircraft: {root: ./content/datasets/AircraftSorted}}
--save_interval 1
--patience 15
--split_manager~OVERRIDE~ {ClosedSetSplitManager: {test_size: 0.2, test_start_idx: 0.5, num_training_partitions: 5, num_training_sets: 1, hierarchy_level: 0, data_and_label_getter_keys: null}}
--trainer~OVERRIDE~ {TrainWithClassifier: {iterations_per_epoch: 250, dataloader_num_workers: 2, batch_size: 32, freeze_trunk_batchnorm: true, label_hierarchy_level: 0, loss_weights: null, set_min_label_to_zero: true}}
--loss_funcs~OVERRIDE~ {metric_loss: {TripletMarginLoss: {margin: 0.1}}, classifier_loss: {CrossEntropyLoss: {}}}
--optimizers~OVERRIDE~ {trunk_optimizer: {RMSprop: {lr: 0.000001, weight_decay: 0.0001, momentum: 0.9}}, embedder_optimizer: {RMSprop: {lr: 0.000001, weight_decay: 0.0001, momentum: 0.9}}, classifier_optimizer: {RMSprop: {lr: 0.000001, weight_decay: 0.0001, momentum: 0.9}}}
--models {trunk: {bninception: {pretrained: imagenet}}, embedder: {MLP: {layer_sizes: [512]}}, classifier: {MLP: {layer_sizes: []}}}
--check_untrained_accuracy False
--hook_container~APPLY~2 {primary_metric: classaccuracy}
--num_epochs_train 150
--sampler~OVERRIDE~ {MPerClassSampler: {m: 1}}
```
My config file is shown above. When I set m to 1, the above error occurs. I checked the recorded stats: the triplets_above_threshold list has length 247, while the others ('threshold' and 'loss') have length 250.
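For context, here is a minimal sketch of what seems to be happening (a hypothetical repro, not from the original report): with m: 1, MPerClassSampler puts one sample per class in each batch, so there are no anchor-positive pairs and TripletMarginLoss mines zero triplets on some iterations, which appears to be when the triplets_above_threshold stat goes unrecorded.

```python
# Hypothetical repro sketch: a batch with all-unique labels, as produced by
# MPerClassSampler with m=1, yields no positive pairs and therefore no
# triplets for the loss to reduce.
import torch
from pytorch_metric_learning.losses import TripletMarginLoss

loss_func = TripletMarginLoss(margin=0.1)
embeddings = torch.randn(32, 512)
labels = torch.arange(32)  # m=1: every label occurs exactly once
print(loss_func(embeddings, labels))  # tensor(0.) -- nothing to mine
```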
For now, here's how you can fix this.
- Create this custom reducer in run.py. (The code below assumes you're using pytorch-metric-learning v0.9.88)
```python
from pytorch_metric_learning.reducers import ThresholdReducer

class CustomAvgNonZeroReducer(ThresholdReducer):
    def __init__(self, **kwargs):
        # Averaging only the losses above 0 reproduces AvgNonZeroReducer's behavior.
        super().__init__(threshold=0, **kwargs)
        # Register the stat up front so it exists from the first iteration.
        self.add_to_recordable_attributes(name="triplets_above_threshold", is_stat=True)
```
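The idea, as I understand it: registering triplets_above_threshold in __init__, rather than leaving it to be created lazily during reduction, means the attribute is present and logged on every iteration, so its record list can no longer fall behind the 'threshold' and 'loss' lists on iterations where no triplets are mined.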
- Register it in run.py
```python
r = runner(**(args.__dict__))
r.register('reducer', CustomAvgNonZeroReducer)
r.run()
```
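Note that register must be called before run(), so that the CustomAvgNonZeroReducer name used in the next step can be resolved to the new class.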
- Use it in your loss function
```
--loss_funcs~OVERRIDE~ {metric_loss: {TripletMarginLoss: {margin: 0.1, reducer: {CustomAvgNonZeroReducer: {}}}}, classifier_loss: {CrossEntropyLoss: {}}}
```
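If you want to sanity-check the reducer outside the benchmarker, here is a small standalone sketch (my own toy example, assuming pytorch-metric-learning v0.9.88; the tensors and label counts are arbitrary):

```python
# Standalone sanity check (hypothetical example, not from the original answer).
import torch
from pytorch_metric_learning.losses import TripletMarginLoss

loss_func = TripletMarginLoss(margin=0.1, reducer=CustomAvgNonZeroReducer())
embeddings = torch.randn(32, 512)
labels = torch.randint(0, 8, (32,))  # repeated labels -> triplets exist
loss = loss_func(embeddings, labels)
# Because the stat is registered in __init__, it is readable even on
# iterations where no triplets are mined.
print(loss, loss_func.reducer.triplets_above_threshold)
```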