François Ponchon
> I have the same bug. Data caching eats up all memory on a 64 GB disk. Can't store training checkpoints. > Tried setting DATASET_MOUNT_BLOCK_BASED_CACHE_ENABLED: true, but an error arises; can't set a boolean...
Another concern we have is that we cannot set other parameters like these two:  It would allow us to grab less data than required, because when we...
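One likely cause of the boolean error above: environment variables are always strings, so a YAML/Python boolean cannot be passed directly. A minimal sketch, assuming the mount layer accepts the string `"true"` (the exact accepted values are an assumption on my part, not confirmed in this thread):

```python
import os

# Env vars must be strings; passing a Python bool raises a TypeError.
# DATASET_MOUNT_BLOCK_BASED_CACHE_ENABLED comes from the comment above;
# whether "true"/"false" are the accepted spellings is an assumption.
os.environ["DATASET_MOUNT_BLOCK_BASED_CACHE_ENABLED"] = "true"
```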
Hi! Why did you roll back secure encryption in the DB? Thanks,
> I was having issues with encryption/decryption at the time of testing credentials manually from the UI > > I also thought it not necessary because the other Azure storage (key...
Great to see it! Just for my understanding: I saw that you replace blob URLs with something like `azure_spi://container_name/blob_name`. Is that the way you handle it...
In our case, we use Azure Blob Storage, not DFS. So the URL is something like: * https://.blob.core.windows.net// How do you handle that case with azure_spi?...
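For reference, a Blob Storage URL has the shape `https://<account>.blob.core.windows.net/<container>/<blob>`, so mapping it onto the `azure_spi://` scheme mentioned above is mostly path parsing. A minimal sketch with a hypothetical helper (the `azure_spi` scheme semantics are assumed from this thread, not from any official docs):

```python
from urllib.parse import urlparse

def blob_url_to_spi(url: str) -> str:
    # Hypothetical helper: rewrites an Azure Blob Storage URL
    # https://<account>.blob.core.windows.net/<container>/<blob>
    # into the azure_spi://<container>/<blob> form seen in this thread.
    # Note: the storage account name is dropped, which only works if
    # the consumer already knows which account to target.
    parsed = urlparse(url)
    container, _, blob = parsed.path.lstrip("/").partition("/")
    return f"azure_spi://{container}/{blob}"

blob_url_to_spi("https://myaccount.blob.core.windows.net/data/images/cat.png")
# -> "azure_spi://data/images/cat.png"
```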
Ok, good! I'll try to test it, because my original implementation didn't do it the same way. Thanks!
A workaround I've found:

```python
def iris2_training(segmentation_item: dict):
    transform = A.Compose(
        [
            A.LongestMaxSize(max_size=MAX_SIZE),
            A.PadIfNeeded(
                min_height=MIN_IMG_HEIGHT,
                min_width=MIN_IMG_WIDTH,
                border_mode=cv2.BORDER_CONSTANT,
                value=0,
                always_apply=True),
            A.HorizontalFlip(),
            A.RandomCrop(MIN_IMG_HEIGHT, MIN_IMG_WIDTH),
            A.ToFloat(max_value=255),
            ToTensorV2()
        ],
        bbox_params=A.BboxParams(
            format='pascal_voc',
            label_fields=['labels', 'ids'],
            min_visibility=BBOX_MIN_VISIBILITY),
        is_check_shapes=False)
    output = transform(
        image=segmentation_item['image'], ...
```
Just to keep you aware... If you are doing instance segmentation, you should be careful about using the original bboxes. Now I just apply augmentation to the masks and then recompute...
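Recomputing boxes from the augmented masks, as described above, can be sketched like this (a minimal version under my own assumptions: masks come out of the transform as an `(N, H, W)` array, and boxes are wanted in `pascal_voc` format to match the `BboxParams` earlier in the thread):

```python
import numpy as np

def masks_to_bboxes(masks: np.ndarray) -> list:
    # masks: (N, H, W) boolean/uint8 instance masks after augmentation.
    # Returns pascal_voc-style [x_min, y_min, x_max, y_max] per instance,
    # skipping instances that were cropped out entirely by RandomCrop.
    boxes = []
    for mask in masks:
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            continue  # instance fully removed by the crop
        boxes.append([int(xs.min()), int(ys.min()),
                      int(xs.max()) + 1, int(ys.max()) + 1])
    return boxes
```

This also sidesteps the `min_visibility` filtering on stale boxes, since the boxes are derived from whatever pixels actually survived the augmentation.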