Alexey

Results 266 comments of Alexey

I fixed it

It will be calculated on the CPU, so it is very slow. Use about 100 - 1000 images.

@USMANHEART @chiaqf Hi, I added `pthreadVC2.dll` to the `/bin/` directory. Update your code from GitHub.

@dreistheman There is no CMakeLists.txt in this repository. Just download the latest yolo2_light, build `yolo_gpu.sln`, and then run `/bin/yolo_gpu.exe`.

There is a simple explanation: http://machinethink.net/blog/object-detection-with-yolo/ And here's the calculation performed by batch normalization on the output of that convolution:

```
        gamma * (out[j] - mean)
bn[j] = -----------------------...
```
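The truncated formula above is the standard batch-normalization transform. A minimal NumPy sketch (the names `gamma`, `beta`, `mean`, `variance` are the usual BN parameters, assumed here), including the inference-time trick of folding BN into the convolution weights since all four parameters are fixed after training:

```python
import numpy as np

def batch_norm(out, gamma, beta, mean, variance, eps=1e-5):
    """Standard batch normalization applied to a convolution output."""
    return gamma * (out - mean) / np.sqrt(variance + eps) + beta

def fold_bn_into_conv(w, b, gamma, beta, mean, variance, eps=1e-5):
    """Fold fixed BN parameters into the conv weight/bias for inference."""
    scale = gamma / np.sqrt(variance + eps)
    return w * scale, (b - mean) * scale + beta
```

Folding works because `scale * (w*x + b - mean) + beta` equals `(w*scale)*x + ((b - mean)*scale + beta)`, so the folded layer gives identical outputs with one multiply-add fewer per element.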

It seems so: Jetson TX2 doesn't support INT8 quantization (DP4A). This is strange, because Jetson TX2 has the Pascal architecture and compute capability (CC) = 6.2, which is higher than 6.0: https://en.wikipedia.org/wiki/CUDA#GPUs_supported...
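For context, DP4A is the CUDA `__dp4a` instruction: a 4-way int8 dot product accumulated into a 32-bit integer in a single operation. A Python sketch of its semantics (illustrative only, not the hardware instruction):

```python
def dp4a(a, b, c):
    """Simulate CUDA __dp4a: dot product of two 4-element int8 vectors,
    accumulated into the int32 value c."""
    assert len(a) == len(b) == 4
    return c + sum(int(x) * int(y) for x, y in zip(a, b))
```

This is why DP4A matters for INT8 inference: one instruction replaces four multiplies and four adds, so GPUs with it (CC >= 6.1 desktop Pascal and later) run int8 convolutions much faster.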

> nvidia-smi don't support jetson tx2?

Any desktop GPU supports `nvidia-smi`.

> Is there any way to make jetson tx2 support for fp16?

Jetson TX2 supports fp16, but it doesn't...

Oh yeah, `nvidia-smi` doesn't work on Tegra (Jetson TX2), so I think it doesn't support DP4A (INT8). You can only try to use XNOR (1-bit) quantization by training these models:...
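The idea behind XNOR (1-bit) quantization: binarize weights and activations to ±1, after which a dot product reduces to XNOR plus popcount, since the dot product of two ±1 vectors equals (matches − mismatches) = n − 2·mismatches. A toy sketch of that identity (not the darknet implementation):

```python
def binarize(v):
    """Quantize a real-valued vector to ±1 by sign."""
    return [1 if x >= 0 else -1 for x in v]

def xnor_dot(a_bits, b_bits):
    """Dot product of two ±1 vectors via the mismatch count:
    dot = n - 2 * (# positions where the sign bits differ)."""
    n = len(a_bits)
    mismatches = sum(1 for x, y in zip(a_bits, b_bits) if x != y)
    return n - 2 * mismatches
```

On real hardware the sign bits are packed into machine words, so `mismatches` is computed with a single XOR and popcount per word, which is why 1-bit networks can run fast even without DP4A support.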

@Yinling-123 TX2 doesn't support INT8 optimizations.

@trustin77 Hi,

> Is this correct?

Yes.

----

> What makes this difference in the quantized version?

The range of float32 values is larger than the range of int8 values, so some of the values...
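This range mismatch is easy to see in a toy example: int8 can only represent [-128, 127] steps of the chosen scale, so float32 values outside that range saturate and are lost. A minimal sketch of symmetric scale quantization (the names and the fixed `scale` are illustrative assumptions, not darknet's calibration):

```python
import numpy as np

def quantize_int8(x, scale):
    """Quantize float32 values to int8; values outside
    [-128*scale, 127*scale] saturate at the int8 limits."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    """Map int8 values back to float32."""
    return q.astype(np.float32) * scale
```

For example, with `scale = 0.1` a value of 0.5 round-trips exactly, but an outlier of 100.0 saturates to 127 and dequantizes to 12.7: that clipping (plus rounding error inside the range) is exactly the difference the quantized version introduces.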