Dual-Edge-TPU-Adapter
Test results for B+M key adapter
All 5 adapters I had with B+M key for the Coral Dual TPU card have been sent, and as of today 2 of the 5 have already been delivered.
Please take your time to test and please report:
- motherboard and adapter (if mated via PCIe-m.2 adapter)
- OS
- Fits? Works?
- If using 2x single core (non-pipeline, for example camera1->TPU1, camera2->TPU2), is there a decrease in performance compared to a single core only (ie only camera1 -> TPU1)? (see the benchmark sketch after this list)
- Models (or project) used and inference time
- Photo, optional, but appreciated
- What thermal solution are you using?
- Comments, suggestions for production adapters
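If it helps with the 2x single-core comparison, here is a rough, minimal timing sketch (assuming pycoral is installed and an Edge TPU compiled model is at hand; the model path is just a placeholder, and ':0'/':1' select the two cores):

# Hypothetical per-core timing sketch; adjust the model path to your setup.
import sys
import time
import numpy as np
from pycoral.adapters import common
from pycoral.utils.edgetpu import make_interpreter

MODEL = 'ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite'  # placeholder

def average_inference_ms(device, runs=100):
    # device is ':0' or ':1' to bind this interpreter to one Edge TPU core
    interpreter = make_interpreter(MODEL, device=device)
    interpreter.allocate_tensors()
    w, h = common.input_size(interpreter)
    # Random data stands in for a camera frame
    common.set_input(interpreter, np.random.randint(0, 256, (h, w, 3), dtype=np.uint8))
    interpreter.invoke()  # warm-up; the first invoke also loads the model onto the TPU
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1000

if __name__ == '__main__':
    device = sys.argv[1] if len(sys.argv) > 1 else ':0'
    print(f'{device}: {average_inference_ms(device):.1f} ms')
    # Run two copies at once (one with ':0', one with ':1') and compare
    # against a single copy running alone to see the effect of the shared lane.

Running it once with ':0' alone, then two instances in parallel, should show whether sharing the single PCIe lane costs anything measurable.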
Will do for sure, I still haven't received mine.
Just received the adapter and I can report it is working fine, and both Edge TPU cores are available :tada:
- Motherboards: MSI X99S SLI PLUS (7885) / Shuttle SG45H7 (FG45)
- Adapter: Cablecc Low Profile PCI-E 3.0 x4 Lane to M.2 NGFF M-Key SSD Nvme AHCI PCI Express Adapter Card
- OS: Debian 10 amd64
- If using 2x single core (non-pipeline, for example camera1->TPU1, camera2->TPU2) - is there decrease in performance compared to single core only (ie only camera1 -> TPU1): TBD
- Models (or project) used and inference time: ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite (Frigate)
- Thermal solution: none yet
- Comments, suggestions for production adapters: provide the screw kit to attach the Edge TPU to the adapter.
Coral PCI devices seen on my host:
# lspci -nn | grep Coral
04:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
05:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
The ASMedia ASM1182e chip on the back of the adapter:
# lspci -nn | grep 1182
02:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1182]
03:03.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1182]
03:07.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1182]
The PCI tree:
# lspci -tvv -nn | grep -B 2 Coral
\-[0000:00]-+-00.0 Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D DMI2 [8086:6f00]
+-01.0-[01]--
+-01.1-[02-05]----00.0-[03-05]--+-03.0-[04]----00.0 Global Unichip Corp. Coral Edge TPU [1ac1:089a]
| \-07.0-[05]----00.0 Global Unichip Corp. Coral Edge TPU [1ac1:089a]
In my Home Assistant Qemu VM:
00:0c.0 System peripheral: Device 1ac1:089a
00:0d.0 System peripheral: Device 1ac1:089a
The apex driver is properly loaded for both cores:
root@hassio:~# ls -l /dev/apex_*
crw-rw---- 1 root root 120, 0 Jul 23 18:29 /dev/apex_0
crw-rw---- 1 root root 120, 1 Jul 23 18:29 /dev/apex_1
In Frigate docker container:
root@hassio:~# docker exec -ti addon_ccab4aaf_frigate sh -c "ls -l /dev/apex*"
crw-rw---- 1 root root 120, 0 Jul 23 18:29 /dev/apex_0
crw-rw---- 1 root root 120, 1 Jul 23 18:29 /dev/apex_1
And now pictures:
this is very promising... Eagerly waiting for the adapter to be available for purchase.
At first glance, inference speed seems a bit slower ~higher~ with the new adapter and one or two cores working than with a single core and my old m.2 E key to PCIe x1 adapter.
Sorry, meant slower, not higher (those graphs show inference time in ms, not speed).
To me it reads as 7.5 ms for a single core and 8.5 ms per core in the dual configuration. I can't really explain the spikes on the chart: overall system load, thermal throttling, or the Chapitô circus performing in front of your house and increasing the number of objects to be detected :) An increase in inference time is expected due to the single PCIe lane being shared, but it's definitely not a bottleneck.
Hello, my report :-)
motherboard and adapter (if mated via PCIe-m.2 adapter)
Fujitsu D3641-S Industrial, socketed in the native M.2 NVMe slot, and also via one of the cheapest NVMe -> PCIe adapters (https://www.aliexpress.com/item/33053044621.html) in an open PCIe x1 slot
OS
Proxmox 7 (Debian 11 with Ubuntu Hirsute Kernel)
Fits? Works?
No problem, fits well in both configurations. I'll stick to the PCIe version for performance testing. Got perfect PCIe lane isolation for both TPU cores.
If using 2x single core (non-pipeline, for example camera1->TPU1, camera2->TPU2) - is there decrease in performance compared to single core only (ie only camera1 -> TPU1)
I still have to run some benchmarks for more precise numbers, but it feels like only a minor performance loss, which didn't impact any of my old use cases (aka everything is still working at the same speed as before).
Models (or project) used and inference time
Coming soon
Photo, optional, but appreciated
Coming soon
What thermal solution are you using?
Aluminium heat spreader with a way-too-big thermal pad, attached with rubber bands - not the best choice
Comments, suggestions for production adapters
For the production adapter I would suggest adding threaded inserts in the mounting holes.
lspci:
05:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1182]
06:03.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1182]
06:07.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1182]
07:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
08:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]
I haven't had a chance to run code on it yet but confirmed it is recognized in Windows...
Installed into this adapter: M.2 NVME to PCIe 3.0 x4 Adapter with Aluminum Heatsink Solution
I did not install the large heat sink because it didn't look like it would effectively make contact with the TPU cores. I have a bag of tiny stick-on heat sinks somewhere that I'll dig out if I run into thermal issues. It's installed in the lowest PCIe slot in a tower case with plenty of cooling, so hopefully it'll be OK without.
MB: ASUS ROG MAXIMUS VIII RANGER (LGA 1151, Intel Z170), CPU: i7-6700K, OS: Windows 10 Pro
More to come...
@pdecat can you send me your QEMU configs for passthrough? I'm not successful at the VM level using QEMU.
motherboard and adapter -> MSI B350M MORTAR (MS-7A37), Version 1.0, using the native M.2 SSD slot
OS -> Unraid + Ubuntu Server 18.04 LTS
This adapter is not working with PCI passthrough to a QEMU VM. For what it's worth, a regular adapter works fine in passthrough, but only one TPU core shows up.
On Unraid 6.9 the Coral shows up and the drivers are loaded; the following are screenshots from Unraid
Now when I try to pass the PCI device through to the VM (QEMU), the delegate fails to load (again, this works with a regular adapter).
Here is a working example on a single TPU with a regular adapter (one TPU disabled).
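As a quick check, whether the delegate itself loads for each core inside the VM can be probed with a minimal sketch like this (assuming tflite_runtime and libedgetpu are installed; the 'pci:0'/'pci:1' device strings are an assumption and may need adjusting):

# Hypothetical delegate-load check for both apex devices.
import tflite_runtime.interpreter as tflite

for dev in ('pci:0', 'pci:1'):
    try:
        tflite.load_delegate('libedgetpu.so.1', {'device': dev})
        print(dev, 'delegate loaded OK')
    except (ValueError, RuntimeError) as err:
        print(dev, 'delegate failed to load:', err)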
Just want to say I would love to preorder 2 of these whenever you're ready!!! Brilliant work, exciting progress.
Thanks, @spcano01! The first boards available for order will be the PCIe x1 (desktop) version: /issues/4
@pdecat can you send me your QEMU configs for passthrough? I'm not successful at the VM level using QEMU.
Hi @mitchross, here is the XML fragment I'm using with my QEMU/KVM setup (the source addresses, bus 0x04 and 0x05, match the 04:00.0 and 05:00.0 Coral devices from the lspci output above):
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</source>
<alias name='hostdev2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</source>
<alias name='hostdev3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
</hostdev>
@pdecat Thanks! I am able to get mine working now.
On Unraid/QEMU VMs you must pass it through like this
A Frankenstein configuration was reported to work: https://github.com/magic-blue-smoke/Dual-Edge-TPU-Adapter/issues/2#issuecomment-904779681
An off-the-shelf PCIe x16 to 4x m.2 M-key NVMe adapter (PCIe x16 to 4x PCIe x4 bifurcation support is required) potentially allows four Dual Edge TPU cards on four m.2 BM adapters, i.e. 8 TPU cores in one PCIe x16 slot.
Why Frankenstein? The PCIe x16 slot is bifurcated into 4x PCIe x4 slots, where only one of the four lanes on each slot is used to attach an m.2 BM adapter and one Dual Edge TPU card.
you got me. i bought one 😄
@ozett wow, not so fast, and I was trying to be as cautious as possible describing those results :see_no_evil: That configuration was tested with only one of my m.2 BM adapters and one Dual Edge TPU. It's still an open question whether the power supply can provide enough current, so the test should be repeated with four adapters and four Dual TPU cards. Plus, I don't have a solid date for the m.2 BM adapter release at the moment.
Finally stole some time to continue experimenting with the dual TPU card on your adapter, on an M.2 to PCIe adapter (referenced above). After reinstalling Python and running through the getting-started instructions, I wasn't able to get the dual TPU card working in Windows 10. I suspect this has nothing to do with your adapter, as Windows showed both Coral TPUs in Device Manager. I gave up and opened an issue (https://github.com/google-coral/edgetpu/issues/484).
After opening the issue, I set up my desktop to dual boot Ubuntu (whatever the current version is as of today), followed the getting-started instructions again, and was able to run the parrot example (2.8 ms inference after the first run), confirming the combo works. For my next trick, I'm going to feed it 100+ hours of IP camera footage to see whose dog is leaving steaming piles in the yard. Hopefully, I can find a way to parallelize that between the two TPUs for a true test.
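In case it's useful, one possible way to split that work is a process per TPU core, roughly like the sketch below (pycoral assumed; the model path and frames directory are placeholders, and frames are assumed to be pre-extracted from the footage):

# Hypothetical sketch: one worker process per Edge TPU core, frames split between them.
import glob
import multiprocessing as mp
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

MODEL = 'ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite'  # placeholder

def run_on_core(device, frame_paths):
    # Each process owns one interpreter bound to a single Edge TPU core
    interpreter = make_interpreter(MODEL, device=device)
    interpreter.allocate_tensors()
    w, h = common.input_size(interpreter)
    for path in frame_paths:
        image = Image.open(path).convert('RGB').resize((w, h))
        common.set_input(interpreter, image)
        interpreter.invoke()
        objs = detect.get_objects(interpreter, score_threshold=0.5)
        print(device, path, len(objs), 'objects')

if __name__ == '__main__':
    frames = sorted(glob.glob('frames/*.jpg'))  # placeholder: frames extracted from footage
    workers = [mp.Process(target=run_on_core, args=(dev, frames[i::2]))
               for i, dev in enumerate((':0', ':1'))]
    for p in workers:
        p.start()
    for p in workers:
        p.join()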
Installed into this adapter: M.2 NVME to PCIe 3.0 x4 Adapter with Aluminum Heatsink Solution
Which adapter is this (the green one)?
@rafajrcarvalho This is an early prototype of my m.2 B+M to m.2 E-key adapter for the Coral Dual Edge TPU. If interested, you can apply for the waiting list here: https://mbs-shop.online/waitinglist After pre-orders from the waiting list are fulfilled, it will be in stock here: https://mbs-shop.online/
May I ask why the model here...
https://mbs-shop.online/tproduct/365624859-421008600521-dual-edge-tpu-adapter-m2-2280-bm-key
... seems somewhat larger than the prototype images show in this issue? I'd love to put one of these in tiny N4200 system I have, but don't think the one shown on the shop link will fit[^1] :disappointed:
[^1]: Guess a case mod wouldn't be difficult, but still.
@raintonr prototypes and production boards are the same m.2 2280 size, however the prototypes had the PCIe switch mounted on the bottom side of the board. That's why the prototype looks more compact and can actually be dremeled down to 2242 size. To simplify assembly, the decision was made to have all components on one side; at the same time, most m.2 B+M slots are 2280, ie no need to shrink the adapter to a smaller size. Please let me know if you'd prefer another form factor, at some point it will be worth making those.
most m.2 B+M slots are 2280
I'd like to use a TPU on a mini PC. The motherboard is like this (case is square and fits the board exactly):
So it looks like this will accept a 2242 only. It wouldn't be that big a deal to put this in another enclosure... and TBH the Dual Edge TPU should probably have a heatsink fitted, so that would also necessitate a different enclosure, but that said, the smaller the better :smile:
FWIW, I can't find the spec sheet for this board now so I can't confirm that slot has PCIe signals - doh! It was described as M.2 NGFF so I imagine it will be OK, but it would be nice to confirm.
@magic-blue-smoke any thoughts on making a B Key 2242 slot adapter? They are common for Intel NUC's.
@bryanyork could you please tell me, for the NUC motherboard m.2 slot:
- bottom clearance
- top clearance
- clearance beyond mounting screw?
@magic-blue-smoke The CAD drawings are available here: https://www.intel.com/content/www/us/en/support/articles/000006820/intel-nuc.html Specifically for my board: STEP: https://www.intel.com/content/dam/support/us/en/images/intel-nuc/NUC13ANB_NUC13LCB_STP.zip EASM: https://www.intel.com/content/dam/support/us/en/images/intel-nuc/NUC13ANB_NUC13LCB_EASM.zip
Let me know if this doesn't work and I can use a digital caliper to do my best.