Alexey

266 comments of Alexey

@joaosbastos You can use any 100-500 RGB images as a representative dataset to calibrate the range of values for quantization. You can try using RGB images from the RedWeb dataset: https://drive.google.com/file/d/12IjUC6eAiLBX67jW57YQMNRVqUGvTZkX/view Or better...
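A minimal sketch of such a calibration setup with the TensorFlow Lite converter, assuming a local folder of RGB images; the folder path, input size, and normalization below are placeholders, not values from the original comment:

```python
import glob
import numpy as np
import tensorflow as tf
from PIL import Image

def representative_dataset():
    # Feed ~100-500 RGB images so the converter can calibrate value ranges.
    for path in glob.glob("redweb_images/*.jpg")[:300]:           # hypothetical folder
        img = Image.open(path).convert("RGB").resize((256, 256))  # hypothetical input size
        x = np.asarray(img, dtype=np.float32) / 255.0             # hypothetical normalization
        yield [x[np.newaxis, ...]]                                 # NHWC batch of 1

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```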

@joaosbastos What command did you use to compile? Try compiling with the `-a` flag, like this: `edgetpu_compiler -sa model.tflite` ---- There are 3 approaches/versions of EfficientNet: 1. **EfficientNet GPU/TPU**: Depth-Wise-Conv2d, ...

`url, filename = ("https://github.com/lednevandrey04/hellow-world/raw/main/Gerl.jpg", "Gerl.jpg")`
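Continuing that snippet, downloading and opening the test image could look like this (a sketch using only standard `urllib` and `PIL` calls):

```python
import urllib.request
from PIL import Image

url, filename = ("https://github.com/lednevandrey04/hellow-world/raw/main/Gerl.jpg", "Gerl.jpg")
urllib.request.urlretrieve(url, filename)   # download the test image
img = Image.open(filename).convert("RGB")   # load it as an RGB image
```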

@ljun901527 Hi,

> It seems that this method doesn't need converting to onnx and PB first.

Yes.

> Can you please elaborate on how to realize it?

This converter is private. We...

@ljun901527 Hi,

* We are using `load_state_dict` for networks with batch-normalization and this does not lead to any issues (see the sketch below).

> And, if my model has no norm layer, this issue does...
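A minimal self-contained sketch of that usage, assuming a toy network with a `BatchNorm2d` layer (the network itself is hypothetical, not the one from the issue):

```python
import torch
import torch.nn as nn

# Toy network with batch-normalization (hypothetical, for illustration only).
net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)
torch.save(net.state_dict(), "weights.pth")

# Load the weights (including BatchNorm running_mean/running_var) into a fresh instance.
net2 = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)
net2.load_state_dict(torch.load("weights.pth"))
net2.eval()  # use the stored running statistics at inference time
```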

@bigtree2020 OpenCV `img = cv2.imread(path)` loads an image with HWC layout (height, width, channels), while PyTorch expects CHW layout. So we have to do `np.transpose(image, (2, 0, 1))` for the HWC->CHW transformation.
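Put together, the loading and layout conversion could look like this (the BGR->RGB step reflects standard OpenCV behavior and is not mentioned in the original comment; the filename is a placeholder):

```python
import cv2
import numpy as np
import torch

img = cv2.imread("image.jpg")               # HWC, BGR, uint8 (hypothetical file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # HWC, RGB
img = np.transpose(img, (2, 0, 1))          # CHW, as PyTorch expects
tensor = torch.from_numpy(img).float().unsqueeze(0)  # NCHW batch of 1
```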

You can find it here:
* https://github.com/intel-isl/MiDaS/releases/tag/v2_1
* https://pytorch.org/hub/intelisl_midas_v2/
* https://tfhub.dev/intel/midas/v2_1_small/1

1. Yolo v1/v2/v3 can take images with different width/height/aspect-ratio as training/validation/test input.
2. Fully connected layers don't make the network invariant to aspect ratio. Fully connected layers only increase the receptive field of...

1. Very simply put, Yolo can take images with different width/height/ratio as input data. But the more the width/height/ratio differ between the training and testing datasets, the worse it detects. To...
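To illustrate the aspect-ratio point, here is a hypothetical sketch contrasting plain stretching to a square network input with letterbox-style padding that preserves the ratio; letterboxing is not described in the truncated comment above and is shown only for comparison, and the input size and filename are placeholders:

```python
import cv2
import numpy as np

net_size = 416                        # hypothetical square network input
img = cv2.imread("wide_photo.jpg")    # hypothetical wide image, e.g. 1920x1080
h, w = img.shape[:2]

# Plain resize: stretched to 416x416, so objects are squeezed horizontally
# relative to vertically when the source ratio differs from 1:1.
stretched = cv2.resize(img, (net_size, net_size))

# Letterbox: resize keeping the ratio, then pad to 416x416, so object shapes are preserved.
scale = net_size / max(h, w)
resized = cv2.resize(img, (int(w * scale), int(h * scale)))
canvas = np.full((net_size, net_size, 3), 127, dtype=np.uint8)
top = (net_size - resized.shape[0]) // 2
left = (net_size - resized.shape[1]) // 2
canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
```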