FEAT: Support Rockchip TPU
This is a super early WIP of potentially supporting the Rockchip TPU. It may turn out that it can't be supported.
To-Do:
- [ ] Find a way to get an x86_64 build of rknn-toolkit-lite working
- [ ] Interpret `rknn.inference` results and parse them into detections for frigate (see the sketch after this list)
- [ ] Determine which rockchip-py dependencies are needed at runtime and remove the others
- [ ] wget and run the NPU transfer proxy on aarch64 devices
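For the "interpret `rknn.inference` results" item, here is a minimal sketch of what the lite-runtime flow might look like, assuming rknn-toolkit-lite2's `RKNNLite` API and a model already converted to `.rknn` ahead of time. The file name, input shape/dtype, and even the exact return codes are assumptions rather than anything verified against the SDK:

```python
# Hypothetical sketch only: assumes rknn-toolkit-lite2's RKNNLite API and a
# model already converted to .rknn ahead of time (the lite runtime cannot convert).
import numpy as np
from rknnlite.api import RKNNLite  # assumed import path for rknn-toolkit-lite2

rknn_lite = RKNNLite()
if rknn_lite.load_rknn("model.rknn") != 0:
    raise RuntimeError("failed to load rknn model")
if rknn_lite.init_runtime() != 0:
    raise RuntimeError("failed to init NPU runtime")

# Input shape/dtype depend on how the model was converted (NHWC uint8 is an assumption).
tensor_input = np.zeros((1, 320, 320, 3), dtype=np.uint8)
outputs = rknn_lite.inference(inputs=[tensor_input])

# Inspect what comes back before wiring it into frigate's detection parsing.
for i, out in enumerate(outputs):
    print(i, getattr(out, "shape", type(out)))

rknn_lite.release()
```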
@NickM-27 which Rockchip parts are you hoping to target with this? Looks like Rockchip has split newer products into a v2 of the toolchain. Good news is the rknn-toolkit-lite2 package supports python 3.9. Bad news is I don't think the older parts are supported and need the original library.
Things are looking dicey. I've only found a Python 3.9 build of rknn-toolkit-lite2 for arm, and the owners told me there are no plans for an amd64 Python 3.9 variant.
I've put this down for now (not in scope for 0.12) and plan to revisit later to see what has changed.
Either way it'll have to be the lite version, and we'll need a separate process that builds / converts the model outside of frigate's build process.
My understanding was the lite package was just for running inference locally on the part. The amd64 package would prepare the model and could connect to an RKNPU as an accelerator. So yes, it would be hard to use in the second scenario, as an add-on. But I think the various RK3588 SBCs coming out can run locally with the lite toolchain. I have a Rock 5b on order that I plan on playing with in a few weeks.
The Rockchip USB, using the lite toolkit, can load a model and run inference; it just can't convert a model (which we can do beforehand anyway).
Either way, right now it's looking like amd64 won't be supported at all, unfortunately. They also don't have any documentation on how to interpret the tensor outputs the model returns, and that's where I was stuck when I decided to put this down.
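On the "convert the model beforehand" point, a rough sketch of what that offline conversion step might look like with the full (non-lite) toolkit on a supported host. The `RKNN` import path, the `config()` arguments, and the file names are illustrative assumptions only (the exact config parameters differ between toolkit versions):

```python
# Hypothetical offline conversion sketch (runs outside frigate's build, on a
# host supported by the full rknn-toolkit). Names and config values are assumptions.
from rknn.api import RKNN  # full toolkit, not the lite runtime

rknn = RKNN()

# Mean/std values depend on how the source model was trained; these are placeholders,
# and the parameter names may differ between toolkit v1 and v2.
rknn.config(mean_values=[[127.5, 127.5, 127.5]], std_values=[[127.5, 127.5, 127.5]])

if rknn.load_tflite(model="model.tflite") != 0:
    raise RuntimeError("failed to load tflite model")

# Quantization would need a calibration dataset; skipped here for simplicity.
if rknn.build(do_quantization=False) != 0:
    raise RuntimeError("failed to build rknn model")

if rknn.export_rknn("model.rknn") != 0:
    raise RuntimeError("failed to export rknn model")

rknn.release()
```

The resulting `model.rknn` would then be the artifact that the lite runtime loads at startup, which is why the conversion can live entirely outside frigate's build.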
I am way out of my depth here, but I saw the todo comment in the commit. From the examples it looks like `self.rknn.inference(inputs=[tensor_input])` returns the outputs as a dictionary, which can then be parsed for the desired results. This is corroborated by the SDK user guide. If the intention of the comment is to figure out what the values mean, it looks like you can convert your TensorFlow Lite model into a Rockchip-compatible model with `rknn.load_tflite(model='model.tflite')`, so you should get the same output as you get with the Coral TPU model. Hopefully this is helpful if you haven't found this info yet. Thanks for the work!
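If that holds, here is a hedged sketch of how those outputs might be parsed into the kind of fixed-size detections array frigate's Coral/TFLite path works with. The output ordering (boxes, class IDs, scores, count), the normalized box coordinates, and the (20, 6) layout are assumptions; the sketch also indexes the outputs positionally (list-style), so if the SDK really returns a dictionary the lookups would need to change accordingly:

```python
# Hedged sketch: converting assumed TFLite-detection-postprocess style outputs
# from rknn.inference into a fixed-size detections array. Output ordering,
# coordinate layout, and the (20, 6) shape are assumptions, not verified.
import numpy as np

def parse_rknn_outputs(outputs, threshold=0.4, max_detections=20):
    boxes, class_ids, scores, count = outputs[0], outputs[1], outputs[2], outputs[3]
    detections = np.zeros((max_detections, 6), np.float32)
    for i in range(min(int(count[0]), max_detections)):
        if scores[0][i] < threshold:
            continue
        # Row layout assumed: [class_id, score, ymin, xmin, ymax, xmax]
        detections[i] = [
            class_ids[0][i],
            scores[0][i],
            boxes[0][i][0],
            boxes[0][i][1],
            boxes[0][i][2],
            boxes[0][i][3],
        ]
    return detections
```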
Given the discussion around the community-supported boards framework, it seems others may pick this up; regardless, I am not looking to personally support this as I won't use it myself.