brilthor

Results 25 comments of brilthor

Seeing similar behaviour, where the Kobo shows one list of books but the sync table entries in the Calibre-Web DB don't match.
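In case it helps anyone compare the two sides, here is a minimal sketch (not part of Calibre-Web) for dumping whatever sync-related tables exist in the application database. The DB path is an assumption for your install, and the table names are discovered at runtime rather than hard-coded:

```
# Sketch: list and sample any kobo/sync tables in the Calibre-Web app DB,
# to compare against the book list the Kobo itself shows.
import sqlite3

DB_PATH = "/path/to/calibre-web/app.db"  # assumption: adjust to your install

con = sqlite3.connect(DB_PATH)
cur = con.cursor()

# Find tables whose names look sync-related instead of assuming a schema.
cur.execute(
    "SELECT name FROM sqlite_master "
    "WHERE type='table' AND (name LIKE '%kobo%' OR name LIKE '%sync%')"
)
for (table,) in cur.fetchall():
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    print(f"{table}: {cur.fetchone()[0]} rows")
    cur.execute(f"SELECT * FROM {table} LIMIT 10")
    for row in cur.fetchall():
        print("  ", row)

con.close()
```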

@manoj7410 HP DL360 Gen9
```
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       46 bits physical, 48 bits virtual
CPU(s):              48
On-line CPU(s) list: 0-47
Thread(s) per...
```

@manoj7410
```
# modinfo apex
filename:       /lib/modules/5.4.0-88-generic/updates/dkms/apex.ko
author:         John Joseph
license:        GPL v2
version:        1.2
description:    Google Apex driver
srcversion:     37A768932BDAF006DA92150
alias:          pci:v00001AC1d0000089Asv*sd*bc*sc*i*
depends:        gasket
retpoline:      Y
name:           apex
vermagic:...
```

Is there any other information, or other debug steps, that would be helpful? @manoj7410 @hjonnala

@manoj7410 inference times did not improve:
```
$ lsmod | grep apex
apex                   28672  0
gasket                110592  1 apex
$ cat /etc/udev/rules.d/65-apex.rules
SUBSYSTEM=="apex", MODE="0660", GROUP="apex"
$ ls -al /dev/apex_0
crw-rw----...
```

Is there a debug build of the kernel modules, or something similar, that could help debug this? Or is the Coral hardware faulty and does it need to be returned? @manoj7410 @hjonnala

Added and ran it; the output is below.

Expand for the run output
```
~/tmp/pycoral$ python3 examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg
I tflite/edgetpu_manager_direct.cc:453] No matching device is already...
```

Added and ran it; the outputs are below (pci:0 had the same performance, pci:1 had a traceback).

Expand for the outputs

### `pci:0`
```
~/tmp/pycoral$ python3 examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input...
```
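For what it's worth, here is a rough timing sketch (not the official benchmark script) for comparing per-inference latency across the enumerated devices. The model path mirrors the test_data layout above, and the `pci:N` device-string convention is assumed:

```
# Sketch: time a warm-up inference plus a steady-state average on each
# enumerated Edge TPU. Random uint8 input is used since only timing matters.
import time
import numpy as np
from pycoral.utils.edgetpu import list_edge_tpus, make_interpreter

MODEL = "test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite"

for index, tpu in enumerate(list_edge_tpus()):
    # Assumes the usual "<type>:<index>" device string (e.g. "pci:0", "pci:1").
    device = f"{tpu['type']}:{index}"
    interpreter = make_interpreter(MODEL, device=device)
    interpreter.allocate_tensors()

    detail = interpreter.get_input_details()[0]
    dummy = np.random.randint(0, 256, size=detail["shape"], dtype=np.uint8)
    interpreter.set_tensor(detail["index"], dummy)

    interpreter.invoke()  # first call includes transferring the model to the TPU
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    avg_ms = (time.perf_counter() - start) / runs * 1000
    print(f"{device} ({tpu['path']}): {avg_ms:.1f} ms/inference after warm-up")
```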

```
python3 examples/scratch.py
[{'path': '/dev/apex_0', 'type': 'pci'}]
```
```
~/tmp/pycoral$ cat examples/scratch.py
import pprint
from pycoral.pybind._pywrap_coral import ListEdgeTpus as list_edge_tpus
pprint.pprint(list_edge_tpus())
```
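The same listing should also be available through the documented wrapper rather than the pybind import, assuming a pycoral release that exposes it publicly; a one-line sketch:

```
# Sketch: enumerate Edge TPUs via the public helper instead of pycoral.pybind.
import pprint
from pycoral.utils.edgetpu import list_edge_tpus

pprint.pprint(list_edge_tpus())
```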

I could live with one TPU if the inference speed were as expected; any ideas on where to start digging to root out the high latency?
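One place I could start is checking whether the PCIe link behind each apex device negotiated its full speed and width, since a degraded link would show up as high latency. This is only a sketch: the sysfs layout below is an assumption and may differ on other kernels:

```
# Sketch: read the negotiated vs. maximum PCIe link speed/width for each
# apex device, assuming the driver exposes them under /sys/class/apex.
import glob
import os

for dev in sorted(glob.glob("/sys/class/apex/apex_*")):
    pci_dir = os.path.join(dev, "device")
    info = {}
    for attr in ("current_link_speed", "max_link_speed",
                 "current_link_width", "max_link_width"):
        path = os.path.join(pci_dir, attr)
        if os.path.exists(path):
            with open(path) as f:
                info[attr] = f.read().strip()
    print(os.path.basename(dev), info)
```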