FaceMaskDetection
ValueError: operands could not be broadcast together with shapes (1,5456,1) (1,5972,1)
I used the configuration below for PyTorch inference:
Anchor configuration:
feature_map_sizes = [[33, 33], [17, 17], [9, 9], [5, 5], [3, 3]]
anchor_sizes = [[0.04, 0.056], [0.08, 0.11], [0.16, 0.22], [0.32, 0.45], [0.64, 0.72]]
anchor_ratios = [[1, 0.62, 0.42]] * 5
model = "face_mask_detection.pth"
anchors shape = (5972, 4)
predicted_bbox shape = (5456, 4)
input image size = 260 x 260

What feature map values will suit this execution? Is there any other way to run the face_detector model at input size 260 x 260?
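Not an official fix, just a quick Python sketch of the anchor-count arithmetic, assuming the SSD-style convention of len(anchor_sizes[i]) + len(anchor_ratios[i]) - 1 anchors per feature-map cell (which is consistent with the 4 boxes per cell implied by the shapes above):

# Relate feature_map_sizes to the total anchor count.
# Assumption: anchors per cell = len(sizes) + len(ratios) - 1 (= 4 here).
feature_map_sizes = [[33, 33], [17, 17], [9, 9], [5, 5], [3, 3]]
anchor_sizes = [[0.04, 0.056], [0.08, 0.11], [0.16, 0.22], [0.32, 0.45], [0.64, 0.72]]
anchor_ratios = [[1, 0.62, 0.42]] * 5

anchors_per_cell = [len(s) + len(r) - 1 for s, r in zip(anchor_sizes, anchor_ratios)]
total_anchors = sum(w * h * n for (w, h), n in zip(feature_map_sizes, anchors_per_cell))
print(anchors_per_cell)  # [4, 4, 4, 4, 4]
print(total_anchors)     # 5972 -> matches the anchors shape (5972, 4)

# The network output has 5456 = 4 * 1364 boxes, and
# 32*32 + 16*16 + 8*8 + 4*4 + 2*2 = 1364,
# which suggests the model is actually producing feature maps of
# [[32, 32], [16, 16], [8, 8], [4, 4], [2, 2]] for this input.

So the mismatch comes from the generated anchors (1493 cells from 33/17/9/5/3 maps) not matching the feature maps the network actually outputs (1364 cells from 32/16/8/4/2 maps); the anchor configuration and the real feature map sizes need to agree.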
@tvishnu1990, could you please help me? I want to train on my own data.
Have you solved the shape mismatch in PyTorch inference that you described? I have the same problem.