Lakshantha Dissanayake

Results: 13 issues by Lakshantha Dissanayake

Hello, I am developing a custom OpenWrt image and now want to include node-red with tensorflowjs. Please advise on how this can be achieved. Also, I would appreciate it...

Hello, I am getting limited FPS when running YOLOv5n on a Jetson Xavier NX. The inference is run on the default video file provided in the deepstream...

Hardware: Jetson AGX Orin Developer Kit Software: JetPack 5.0.1 DP What works: Inference works well with FP32 Issue: Inference does not work with INT8. The following output log can be...

Hello, I am running JetPack 5.1.2 on an Orin NX 16GB device and following the page below to perform the minigpt4 benchmarks: https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/minigpt4 However, after the benchmarks are finished, I...

@glenn-jocher I have test-run this branch on an RPi and it passes all checks: https://github.com/ultralytics/ultralytics/actions/runs/8805938577/job/24169616939 Main changes: - Reverted the `flatbuffer` export because it works now: https://github.com/ultralytics/ultralytics/actions/runs/8803839325 - Separate jobs for Benchmarks and Tests...

TODO
enhancement
devops

@glenn-jocher This fix is needed for NVIDIA Jetson; otherwise, the following error is seen: `(yolov8) nvidia@nvidia-desktop:~/YOLOv8-CPU$ yolo export model=yolov8n.pt format=tflite Ultralytics YOLOv8.2.2 🚀 Python-3.8.10 torch-2.1.2 CPU (ARMv8...`

enhancement
dependencies

I have seen in the following sections of the code that the versions pinned for InfluxDB and Grafana are quite old. https://github.com/Nilhcem/esp32-cloud-iot-core-k8s/blob/master/05-influxdb_grafana_k8s/k8s/influxdb.yaml#L20 https://github.com/Nilhcem/esp32-cloud-iot-core-k8s/blob/master/05-influxdb_grafana_k8s/k8s/grafana.yaml#L20 Is it possible to change these sections...
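For illustration only, a minimal sketch of what bumping those pinned versions might look like in the two manifests linked above. The specific image tags below are assumptions, not values tested against this repo, and note that InfluxDB 2.x introduces a different API, so staying on a newer 1.x tag is the conservative choice:

```yaml
# k8s/influxdb.yaml (excerpt) -- hypothetical version bump
      containers:
        - name: influxdb
          image: influxdb:1.8.10   # assumed tag; 2.x would change the write/query API
---
# k8s/grafana.yaml (excerpt) -- hypothetical version bump
      containers:
        - name: grafana
          image: grafana/grafana:9.5.15   # assumed tag; check dashboard compatibility
```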

Hello, Please help to add support for the Seeed XIAO BLE Sense board (based on the nRF52840) to this library. Thank you.

@glenn-jocher All model exports now work on NVIDIA Jetson Orin. - Added benchmarks for all the model exports - Added the `onnxruntime-gpu` install - Made minor formatting changes

documentation

Hello, It seems that onnx can only be installed at version 1.6.0 on the Jetson Nano. When I run the following to convert the model to ONNX: `python export.py...`
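For context on why a pinned onnx version can break export: each onnx release only understands opsets up to a ceiling (onnx 1.6.0 tops out at opset 11, per the ONNX versioning table), so an exporter that emits a newer opset fails to load. A small stdlib-only sketch of that check; the version-to-opset values are taken from the ONNX release notes and should be treated as assumptions:

```python
# Highest ONNX opset shipped with each onnx package release.
# Values from the ONNX versioning table; treat as assumptions.
MAX_OPSET = {
    "1.4.1": 9,
    "1.5.0": 10,
    "1.6.0": 11,  # the Jetson Nano ceiling mentioned above
    "1.7.0": 12,
}

def opset_supported(onnx_version: str, opset: int) -> bool:
    """Return True if the pinned onnx release can load models at `opset`."""
    ceiling = MAX_OPSET.get(onnx_version)
    if ceiling is None:
        raise ValueError(f"unknown onnx version: {onnx_version}")
    return opset <= ceiling

print(opset_supported("1.6.0", 11))  # True
print(opset_supported("1.6.0", 12))  # False: export must target opset <= 11
```

In practice this means the fix is usually to ask the exporter for a lower opset rather than to upgrade onnx on the Nano.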