Erman Okman

19 comments by Erman Okman

Hello, the approach you proposed works well for model training until Quantization Aware Training (QAT) starts. Since the layer weights are quantized in that mode, the 1s in your model...
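
A minimal sketch of why this happens (generic symmetric fake quantization, not the ai8x-training implementation): once weights are mapped onto a quantized grid, values that were hard-coded to exactly 1.0 get snapped to the nearest grid point and stop acting as exact pass-through multipliers.

```python
import torch

def fake_quantize_weights(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric per-tensor fake quantization: scale, round, clamp, rescale."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = w.abs().max() / qmax            # map the largest magnitude to qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                        # back to float, on the quantized grid

# Weights deliberately set to 1.0 alongside larger values:
w = torch.tensor([1.0, 1.0, 3.7, -2.2])
print(fake_quantize_weights(w))
# The 1.0 entries land on the nearest grid point (~0.99 here),
# so they are no longer exact 1s once QAT starts.
```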

`self.quantize` is required if you want to evaluate your quantized model, which you can obtain using the `quantize.py` utility in the [synthesis repo](https://github.com/analogdevicesinc/ai8x-synthesis). The model checkpoint you obtained after...
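
As a hypothetical illustration of what such an evaluation-time switch does (this is not the ai8x API, just the concept): when the loaded checkpoint already holds quantized weights, the forward pass also snaps activations to the fixed-point grid, so the evaluation reflects the deployed model rather than a float approximation.

```python
import torch
import torch.nn as nn

class QuantEvalConv(nn.Module):
    """Toy conv layer with an evaluation-time quantization switch."""
    def __init__(self, in_ch, out_ch, num_bits=8, quantize=False):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.quantize = quantize            # enable only for quantized checkpoints
        self.levels = 2 ** (num_bits - 1)   # 128 for 8-bit

    def forward(self, x):
        y = self.conv(x)
        if self.quantize:
            # Snap activations to the fixed-point grid used at inference time.
            y = torch.round(y * self.levels) / self.levels
        return y
```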

Just to be safe: in that mode, the quantization function rounds the numbers exactly as the hardware does, so the output is guaranteed to match the hardware even...
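
A generic sketch of that kind of hardware-style rounding (not the exact rounding mode of the MAX78000): activations are scaled to integer counts, rounded, and saturated to the signed 8-bit range, so software and hardware agree count for count.

```python
import torch

def to_q7(x: torch.Tensor) -> torch.Tensor:
    """Map a [-1, 1) activation to signed 8-bit counts: scale, round, saturate."""
    q = torch.round(x * 128.0)              # scale to integer counts
    return torch.clamp(q, -128, 127)        # saturate like a fixed-point device

x = torch.tensor([0.49999, 0.5, -1.2])
print(to_q7(x))   # tensor([ 64.,  64., -128.])
```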

They should be exactly the same, but if you run the evaluation on a GPU, it sometimes produces slightly different outputs. These minor differences might cause rounding errors at some layers, and these...
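
A small worked example of the effect, with hypothetical numbers: CPU and GPU can accumulate the same products in a different order, shifting a pre-quantization value by a tiny amount; if that value sits near a rounding boundary, the quantized integer differs by one count and the mismatch can propagate to later layers.

```python
cpu_val = 63.4999   # hypothetical accumulation result on CPU
gpu_val = 63.5001   # same value after a slightly different summation order on GPU
print(round(cpu_val), round(gpu_val))   # 63 64  -> a one-count difference
```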

Yes, that is the expected output on the hardware...

Thanks for reporting this issue. In this case, the network fails not because of the convolution in the initial layer but because of the max-pooling operation in the second layer. For the...

Hello, the hardware does not support BatchNorm (BN) layers, so if you want to use a BN layer in your model, you have to fuse it into the preceding...
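
For reference, this is the standard BN-folding math that such a fusion applies; a generic PyTorch sketch (not the repo's exact code, which is handled by its batchnormfuser script):

```python
import torch
import torch.nn as nn

def fuse_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d so one layer computes conv -> BN."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride,
                      conv.padding, conv.dilation,
                      conv.groups, bias=True)
    std = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / std                               # gamma / sqrt(var + eps)

    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

# After fusing, replace the conv/BN pair with the fused conv and drop the BN layer;
# the outputs are unchanged up to floating-point error.
```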

It is a Python script that you can run from the command line. The details of the batchnormfuser are [here](https://github.com/analogdevicesinc/ai8x-training?tab=readme-ov-file#command-line-arguments-1). However, we always suggest using QAT, as the direct...

You need a [MAX78000 EvKit](https://www.analog.com/en/resources/evaluation-hardware-and-software/evaluation-boards-kits/max78000evkit.html) to measure and observe the energy consumption. The feather board does not have energy measurement circuitry.

There is no other straightforward approach to measure the energy consumption.