Peter Vennerstrom
Batch size is `samples_per_gpu` * the number of GPUs. `samples_per_gpu` is split into labeled and unlabeled samples by the `sample_ratio`. From the paper: > Since the amount of training data of Partially...
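For illustration, a minimal sketch of how the effective batch splits under that scheme; the numbers are hypothetical and the real values come from the config:

```python
# Hypothetical values for illustration only.
samples_per_gpu = 5          # per-GPU batch size
num_gpus = 8
sample_ratio = [1, 4]        # labeled : unlabeled split within each GPU batch

total_batch = samples_per_gpu * num_gpus                                   # 40 images per iteration
labeled_per_gpu = samples_per_gpu * sample_ratio[0] // sum(sample_ratio)   # 1 labeled image
unlabeled_per_gpu = samples_per_gpu - labeled_per_gpu                      # 4 unlabeled images

print(total_batch, labeled_per_gpu, unlabeled_per_gpu)
```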
Tested the Weighter hook. `dict(type="Weighter", steps=[170000, 172500, 175000, 177500], vals=[4, 3, 2, 1, 0], name="unsup_weight")` It worked as expected. A potential issue is that the learning rate step-down doesn't account...
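For reference, a minimal sketch of how I read that schedule (one more value than steps, with the active value picked by where the current iteration falls); this is my own illustration, not the hook's actual code:

```python
from bisect import bisect_right

steps = [170000, 172500, 175000, 177500]
vals = [4, 3, 2, 1, 0]

def unsup_weight(cur_iter):
    # Before the first step the weight is vals[0]; after the last it is vals[-1].
    return vals[bisect_right(steps, cur_iter)]

assert unsup_weight(100000) == 4
assert unsup_weight(171000) == 3
assert unsup_weight(178000) == 0
```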
Tried a 256×256 image and it ran. To train on our own data with a new image size, would we write a new class `Custom_Model(BaseModel)` in models.py which corresponds to...
Thanks for the clarification. Trained a model on the Kaggle Humpback Whale Identification data using (512 x 256) images. https://imgur.com/a/nSEaGxa Great work!
It may be the case that some of the images in the test set are fairly large. Mask2Former resizes the predicted masks back to input image scale on GPU. This...
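A hedged sketch of the kind of workaround I'd try if GPU memory becomes the bottleneck here: move the predicted mask logits to CPU before upsampling them to the original image size. Names and shapes are illustrative, not Mask2Former's actual post-processing code:

```python
import torch
import torch.nn.functional as F

def resize_masks_on_cpu(mask_logits, orig_height, orig_width):
    """Upsample predicted masks to the original image size on CPU.

    mask_logits: (num_queries, h, w) tensor produced on GPU.
    """
    mask_logits = mask_logits.cpu()  # avoid allocating a very large tensor on the GPU
    return F.interpolate(
        mask_logits[None],                      # add a batch dimension
        size=(orig_height, orig_width),
        mode="bilinear",
        align_corners=False,
    )[0]
```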
Sounds like a fun project! A quick workaround would be to serialize the annotations on a per-image basis and load them by index just like images in the pipeline. Then...
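A rough sketch of what I mean, assuming pickled per-image annotation files keyed by the same index used to load the image (the folder layout and field names are placeholders):

```python
import os
import pickle

ann_dir = "annotations_per_image"   # hypothetical folder of per-image .pkl files

def dump_per_image(annotations):
    """annotations: list of per-image dicts, e.g. {'bboxes': ..., 'labels': ...}."""
    os.makedirs(ann_dir, exist_ok=True)
    for idx, ann in enumerate(annotations):
        with open(os.path.join(ann_dir, f"{idx}.pkl"), "wb") as f:
            pickle.dump(ann, f)

def load_by_index(idx):
    """Called from the pipeline with the same index used to load the image."""
    with open(os.path.join(ann_dir, f"{idx}.pkl"), "rb") as f:
        return pickle.load(f)
```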
Apologies if I made this sound easier than it ended up being. On evaluation, unless there is a need to evaluate a test set too large to fit into memory, it...
The annotations could be saved as per-image files, each containing a single image's annotations, and read in during training. The `custom_dataset` class could be adjusted not to pre-load the annotations...
I'd guess most of the dependencies are in terms of the images, not the annotations. The existing code stores image/annotation info in per-image dictionaries within the list assigned to `self.data_infos`...
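Roughly what I have in mind, as a sketch assuming an mmdetection 2.x-style `CustomDataset` (the stock class pre-loads everything in `load_annotations`; the field names below, like `ann_path`, are illustrative):

```python
import pickle

from mmdet.datasets import DATASETS, CustomDataset


@DATASETS.register_module()
class LazyAnnotationDataset(CustomDataset):
    """Keep lightweight per-image info in self.data_infos; load annotations on demand."""

    def load_annotations(self, ann_file):
        # ann_file is assumed to hold only a per-image index, not the full annotations.
        with open(ann_file, "rb") as f:
            # e.g. [{'filename': ..., 'width': ..., 'height': ..., 'ann_path': ...}, ...]
            return pickle.load(f)  # becomes self.data_infos

    def get_ann_info(self, idx):
        # Read a single image's annotations from disk only when that sample is drawn.
        with open(self.data_infos[idx]["ann_path"], "rb") as f:
            return pickle.load(f)
```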
`LoadMultiChannelImageFromFiles` will load both image files and stack them. This will likely be compatible with some pipeline functions, but not others. There might be some functions like resize that are...
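As a hedged illustration, a pipeline that stacks two image files per sample. Whether `Resize`, `Normalize`, etc. handle the extra channels depends on the mmdetection version and would need checking; the `Normalize` values below are simply duplicated for the extra channels as a placeholder:

```python
# Sketch of a training pipeline; 'filename' in img_info is assumed to be a
# list of the files to stack, and downstream transforms may need adjusting.
train_pipeline = [
    dict(type="LoadMultiChannelImageFromFiles"),
    dict(type="LoadAnnotations", with_bbox=True),
    dict(type="Resize", img_scale=(1333, 800), keep_ratio=True),
    dict(type="RandomFlip", flip_ratio=0.5),
    # Placeholder normalization for 6 stacked channels; may not be supported as-is.
    dict(type="Normalize",
         mean=[123.675, 116.28, 103.53, 123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375, 58.395, 57.12, 57.375],
         to_rgb=False),
    dict(type="Pad", size_divisor=32),
    dict(type="DefaultFormatBundle"),
    dict(type="Collect", keys=["img", "gt_bboxes", "gt_labels"]),
]
```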