Ciprian Mindru


@opti-mix @jfix71 I still need to provide some performance numbers to prove there is an optimization. Note that we are mainly interested in microcontrollers, so the performance might...

@jfix71 This is intended to be a kernel debugging feature. For example, what if, for a custom backend (e.g. an accelerator), we have a buggy convolution which, although it correctly computes...
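As a rough sketch of the kind of check such a feature would perform: comparing the backend's kernel output elementwise against a reference (e.g. the Interpreter). The helper and tolerance below are purely illustrative, not an existing Glow API.

```cpp
#include <cmath>
#include <cstddef>

// Illustrative helper: compare a kernel output produced by a custom backend
// against a reference implementation within an absolute tolerance.
// Returns the index of the first mismatch, or -1 if the outputs agree.
long firstMismatch(const float *backendOut, const float *referenceOut,
                   size_t numElements, float absTolerance = 1e-5f) {
  for (size_t i = 0; i < numElements; ++i) {
    if (std::fabs(backendOut[i] - referenceOut[i]) > absTolerance) {
      return static_cast<long>(i);
    }
  }
  return -1;
}
```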

@vuzelac-cadence The two flags `quantizeFilter` and `quantizeBias` are used when the filter and bias inputs of the convolution are float-precision constants, and the intention is to quantize...
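For reference, the quantization referred to here follows the usual affine scheme. A minimal sketch (the helpers below are illustrative, not Glow's internal API), assuming a per-tensor scale/offset for the filter and the common convention that the bias is quantized to int32 with scale `inputScale * filterScale` and zero offset:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Quantize a float filter constant to int8 using the affine mapping
// q = clamp(round(x / scale) + offset, -128, 127).
std::vector<int8_t> quantizeFilterData(const std::vector<float> &data,
                                       float scale, int32_t offset) {
  std::vector<int8_t> out(data.size());
  for (size_t i = 0; i < data.size(); ++i) {
    long q = std::lround(data[i] / scale) + offset;
    out[i] = static_cast<int8_t>(std::min<long>(127, std::max<long>(-128, q)));
  }
  return out;
}

// The bias is typically quantized to int32 with scale = inputScale * filterScale
// and offset 0, so it can be added directly to the int32 accumulator.
std::vector<int32_t> quantizeBiasData(const std::vector<float> &data,
                                      float inputScale, float filterScale) {
  std::vector<int32_t> out(data.size());
  for (size_t i = 0; i < data.size(); ++i) {
    out[i] = static_cast<int32_t>(std::lround(data[i] / (inputScale * filterScale)));
  }
  return out;
}
```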

@vuzelac-cadence There shouldn't be any problem with the TFLite importer for per-channel quantized models. You can test the TFLite importer with this model: [mobilenetv1_pcq.zip](https://github.com/pytorch/glow/files/7049031/mobilenetv1_pcq.zip). Please check and have this issue...

@psyhtest Most of the quantized FullyConnected implementations I have seen have int8 output precision, not int32. The actual problem with the current FullyConnected implementation in...
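To illustrate the int8-output convention: the int32 accumulator (whose effective scale is `inputScale * weightScale`) is requantized to the output's int8 scale/offset as a final step. A minimal sketch, with illustrative names:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Requantize one int32 accumulator value of a quantized FullyConnected
// (accumulator scale = inputScale * weightScale) to the int8 output type.
int8_t requantizeToInt8(int32_t accumulator, float inputScale, float weightScale,
                        float outputScale, int32_t outputOffset) {
  float real = static_cast<float>(accumulator) * inputScale * weightScale;
  long q = std::lround(real / outputScale) + outputOffset;
  return static_cast<int8_t>(std::min<long>(127, std::max<long>(-128, q)));
}
```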

@psyhtest There is no need for any graph transformation pass, since a backend can choose not to lower an FC into a MatMul + BatchedAdd. The only thing missing is...
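For reference, this is roughly how a backend opts out of lowering a node. The class `MyBackend` is hypothetical and the other required overrides are omitted; the `shouldLower` method and node-kind naming follow Glow's Backend interface as far as I recall, so double-check against the actual headers:

```cpp
#include "glow/Backend/Backend.h"

// Hypothetical backend that keeps FullyConnected unlowered so it can map the
// node directly to its own kernel instead of the generic decomposition.
class MyBackend : public glow::Backend {
public:
  bool shouldLower(const glow::Node *N) const override {
    if (N->getKind() == glow::Kinded::Kind::FullyConnectedNodeKind) {
      return false; // Keep the FC node; do not lower to MatMul + BatchedAdd.
    }
    return true; // Lower everything else as usual.
  }
  // Other mandatory Backend overrides (compile, isOpSupported, ...) omitted.
};
```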

@842974287 For generality, I think the best approach here would be to fuse the RescaleQuantized into all the nodes, with the exception of those nodes which have constraints regarding the...
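To sketch the fusion idea over a deliberately simplified, hypothetical IR (not Glow's actual graph classes): a RescaleQuantized only changes the (scale, offset) pair under which the same real values are represented, so it can be folded into a neighboring node unless that node constrains its quantization parameters:

```cpp
#include <cstdint>

// Hypothetical, simplified IR types purely to illustrate the fusion.
struct Type {
  float scale;
  int32_t offset;
};
struct Node {
  Type outTy;
  bool constrainsQuantParams; // e.g. output params must match the input's
};
struct RescaleNode {
  Node *input;
  Type outTy;
};

// Returns true if the rescale was fused into its producer; the caller would
// then replace all uses of the rescale with the producer and delete it.
bool tryFuseRescaleIntoProducer(Node &producer, RescaleNode &rescale) {
  if (producer.constrainsQuantParams) {
    return false; // Producer's output quantization parameters must not change.
  }
  producer.outTy = rescale.outTy; // Producer now emits the rescaled type directly.
  return true;
}
```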