cs-heibao
@tanglang96 thanks for your summary. I compared the two approaches (DALI and the PyTorch DataLoader), and the training time is almost the same — why? The code is as follows: 1) PyTorch DataLoader format: ``` CROP_SIZE=...
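For reference, a minimal, hypothetical timing harness for this kind of comparison (pure Python; the dummy generator below stands in for either pipeline — a real comparison would pass a `torch.utils.data.DataLoader` or a DALI iterator instead):

```python
import time

def time_one_epoch(loader):
    """Iterate a data loader once and return (elapsed seconds, batch count).

    `loader` is any iterable of batches; the loop body is where the
    forward/backward pass would go in real training.
    """
    start = time.perf_counter()
    n_batches = 0
    for _batch in loader:
        n_batches += 1
    elapsed = time.perf_counter() - start
    return elapsed, n_batches

def dummy_loader(num_batches):
    """Stand-in pipeline: yields fake batches of 8 items each."""
    for i in range(num_batches):
        yield [i] * 8

elapsed, n = time_one_epoch(dummy_loader(100))
print(f"{n} batches in {elapsed:.4f}s")
```

If both pipelines show the same epoch time under a harness like this, the loader is likely not the bottleneck (e.g. the GPU compute dominates), which would explain seeing no difference between DALI and the DataLoader.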
Could you explain this function? ``` def rectlong2opencv(boxes): boxes_ = boxes.copy() boxes_[boxes[..., 4] < 0, 2] = boxes[boxes[..., 4] < 0, 3] boxes_[boxes[..., 4] < 0, 3] = boxes[boxes[..., 4] < 0, 2] boxes_[boxes[...,...
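From the visible lines, a guess at what the function does (the rest of it is truncated above, so this sketch reproduces only the visible part, not the full conversion): boxes are `(cx, cy, w, h, angle)`, and wherever the angle is negative, `w` and `h` are swapped — a common step when converting a "long-edge" rotated-box convention to OpenCV's `minAreaRect`-style convention. Note it writes into a copy (`boxes_`) while reading from the original `boxes`, so the swap does not clobber itself:

```python
import numpy as np

def rectlong2opencv_sketch(boxes):
    """Swap w and h for boxes whose angle is negative.

    boxes: (..., 5) array of (cx, cy, w, h, angle).
    Only the visible lines of the original function are reproduced here;
    the truncated remainder presumably also adjusts the angle itself.
    """
    boxes_ = boxes.copy()
    neg = boxes[..., 4] < 0          # mask of boxes with negative angle
    boxes_[neg, 2] = boxes[neg, 3]   # w <- h (reads from the untouched input)
    boxes_[neg, 3] = boxes[neg, 2]   # h <- w
    return boxes_

b = np.array([[10.0, 10.0, 4.0, 2.0, -30.0],   # negative angle: w/h swapped
              [5.0,  5.0,  4.0, 2.0,  30.0]])  # positive angle: unchanged
print(rectlong2opencv_sketch(b))
```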
The segmentation results from the website demo and from amg.py in this project are different — why?
Regarding the gt bounding-box format (cx, cy, w, h, angle): are all five values the parameters corresponding to the rotated image? And is the angle defined as follows:
The code is as follows: ``` # calculate iou between truth and reference anchors anchor_ious = iou_mask(gt_boxes, norm_anch_00wha, xywha=True,mask_size=64, is_degree=True) # anchor_ious = iou_rle(gt_boxes, norm_anch_00wha, xywha=True, is_degree=True, img_size=img_size, normalized=True) best_n_all =...
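The snippet above matches each ground-truth box to its best reference anchor by IoU. A simplified, hypothetical sketch of that matching step — using a plain width/height IoU for origin-centered boxes (ignoring rotation) in place of the repo's `iou_mask`/`iou_rle`:

```python
import numpy as np

def wh_iou(gt_wh, anchor_wh):
    """IoU between boxes assumed centered at the origin, given only (w, h).

    gt_wh: (N, 2), anchor_wh: (A, 2). Returns an (N, A) IoU matrix.
    This ignores the angle entirely; the original code uses a rotated-box IoU.
    """
    inter = (np.minimum(gt_wh[:, None, 0], anchor_wh[None, :, 0]) *
             np.minimum(gt_wh[:, None, 1], anchor_wh[None, :, 1]))
    union = (gt_wh[:, None, 0] * gt_wh[:, None, 1] +
             anchor_wh[None, :, 0] * anchor_wh[None, :, 1] - inter)
    return inter / union

gt = np.array([[4.0, 2.0], [1.0, 1.0]])
anchors = np.array([[4.0, 2.0], [2.0, 2.0]])
anchor_ious = wh_iou(gt, anchors)
best_n_all = anchor_ious.argmax(axis=1)  # best anchor index per gt box
print(best_n_all)  # → [0 1]
```

`best_n_all` then indexes which anchor each ground truth is assigned to, which is presumably what the truncated `best_n_all = ...` line computes from the rotated-box IoU matrix.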
@zengarden I could not build it successfully, and the error is a little different from the one you showed. Could you give some help? Thanks.
@MacwinWin hi, I used different pictures for the test but got the same predictions, and I wonder whether the input I used is wrong. The input image format is as...
@gurkirt hi, I downloaded part of the test data to show the classification results with your trained InceptionV3 model, but I found that the input for the test is not...