candle
Will candle support YOLOV5?
Our current project is based on YOLOv5 and PyTorch. We are required to do the same thing in Rust/WASM while reusing the existing PyTorch model. Candle already supports YOLOv8, so will candle support YOLOv5, and if so, when?
Yolo-v5 should be easy to port, but I'm not sure we want to prioritize it: yolo-v9 is more performant and likely to have an MIT-licensed implementation soon, so I would rather see that one added. That said, you're certainly welcome to write a crate for a yolo-v5 port with candle; happy to answer any questions here or on the discord channel.
Thank you for your reply! At the moment my team doesn't have the capability "to write a crate for a yolo-v5 port with candle", so we have two options:
- Train the model with PyTorch, convert it to the safetensors format, compile the example candle project, and finally get the wasm we need. Is this option feasible?
- Train YOLOv8 directly with candle on our own dataset. However, we did not find any training code in the candle project; can you provide some example training code (the closest pattern we found is sketched below)? Thank you so much!
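For context, the closest thing we found is the generic candle-nn training pattern (a `VarMap` plus an optimizer), as used in candle's mnist example. Below is a toy linear-regression sketch of that pattern, assuming candle-core and candle-nn as dependencies; it is not YOLOv8 training code:

```rust
use candle_core::{DType, Device, Result, Tensor};
use candle_nn::{linear, loss, Module, Optimizer, VarBuilder, VarMap, SGD};

fn main() -> Result<()> {
    let dev = Device::Cpu;
    // All trainable parameters live in the VarMap; the VarBuilder hands them out.
    let varmap = VarMap::new();
    let vb = VarBuilder::from_varmap(&varmap, DType::F32, &dev);
    let model = linear(2, 1, vb.pp("fc"))?;
    let mut opt = SGD::new(varmap.all_vars(), 0.01)?;
    // Toy data for y = x0 + 2*x1.
    let xs = Tensor::new(&[[1f32, 2.], [3., 4.], [5., 6.]], &dev)?;
    let ys = Tensor::new(&[[5f32], [11.], [17.]], &dev)?;
    for step in 0..300 {
        let loss = loss::mse(&model.forward(&xs)?, &ys)?;
        opt.backward_step(&loss)?; // backprop + parameter update
        if step % 100 == 0 {
            println!("step {step}: mse = {}", loss.to_scalar::<f32>()?);
        }
    }
    // Trained weights could then be written out with varmap.save("model.safetensors").
    Ok(())
}
```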
@chenkuokuo Any progress on this, mate?
You can train the model with the official YOLOv8 training pipeline and then use the script below to convert it from .pt to safetensors, which lets you use Candle's yolo-v8 example. However, I don't yet know how to quantize the model and save the result to safetensors, so I can only run it at FP32; I'd appreciate any pointers on quantization.
```python
import sys
from os import path

import torch
from safetensors.torch import save_file


def rename(name: str) -> str:
    """Map ultralytics layer names to the names used by Candle's yolo-v8 example."""
    # Backbone.
    name = name.replace("model.0.", "net.b1.0.")
    name = name.replace("model.1.", "net.b1.1.")
    name = name.replace("model.2.m.", "net.b2.0.bottleneck.")
    name = name.replace("model.2.", "net.b2.0.")
    name = name.replace("model.3.", "net.b2.1.")
    name = name.replace("model.4.m.", "net.b2.2.bottleneck.")
    name = name.replace("model.4.", "net.b2.2.")
    name = name.replace("model.5.", "net.b3.0.")
    name = name.replace("model.6.m.", "net.b3.1.bottleneck.")
    name = name.replace("model.6.", "net.b3.1.")
    name = name.replace("model.7.", "net.b4.0.")
    name = name.replace("model.8.m.", "net.b4.1.bottleneck.")
    name = name.replace("model.8.", "net.b4.1.")
    name = name.replace("model.9.", "net.b5.0.")
    # Neck (FPN). Layers 10/11/13/14/17/20 are upsample/concat and have no weights.
    name = name.replace("model.12.m.", "fpn.n1.bottleneck.")
    name = name.replace("model.12.", "fpn.n1.")
    name = name.replace("model.15.m.", "fpn.n2.bottleneck.")
    name = name.replace("model.15.", "fpn.n2.")
    name = name.replace("model.16.", "fpn.n3.")
    name = name.replace("model.18.m.", "fpn.n4.bottleneck.")
    name = name.replace("model.18.", "fpn.n4.")
    name = name.replace("model.19.", "fpn.n5.")
    name = name.replace("model.21.m.", "fpn.n6.bottleneck.")
    name = name.replace("model.21.", "fpn.n6.")
    # Detection head.
    name = name.replace("model.22.", "head.")
    return name


model_path = sys.argv[1]
# Ultralytics checkpoints store the nn.Module under the "model" key; weights_only=False
# is needed on recent PyTorch versions, where it defaults to True.
data = torch.load(model_path, map_location="cpu", weights_only=False)
tensors = {rename(k): t for k, t in data["model"].state_dict().items()}
save_file(tensors, path.splitext(model_path)[0] + ".safetensors")
for k, v in tensors.items():
    print(k, v.shape)
```
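If you save this as, say, `pt2safetensors.py` (the filename is arbitrary), then `python pt2safetensors.py yolov8s.pt` writes `yolov8s.safetensors` next to the checkpoint, and Candle's yolo-v8 example can be pointed at that file. The example's exact command-line flags vary between candle versions, so check its `--help`.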
The official repo doesn't seem to have a segmentation example.
I've already given up on candle and switched to ort: for the same YOLOv8 model on CPU at FP32, onnxruntime takes 40ms per image while candle takes 300ms (both excluding pre-/post-processing time).
I tried usls; detection is fast, but segmentation is slow there as well.
I'm the author of usls. The YOLOv8 segmentation post-processing currently uses the image crate, whose resize is fairly slow. In addition, the post-processing code in the YOLO struct implementation is somewhat redundant, because it has to cover every YOLOv8 task. I'll optimize the resize speed when I have time.
@lujing-jlu @Lixuhuilll usls has updated its YOLO post-processing. Detection now takes only about 1ms. For segmentation, tested on the official bus.jpg at 640x640 resolution, it used to take 120ms on my machine and now takes about 20ms, with the time strongly correlated with the number of segmented objects: resizing each object from the P3 scale up to 640 takes about 900us, and bus.jpg contains six objects. find_contours is another big time sink, at roughly 1.3ms per mask. If you have better optimization strategies, I'd be glad to hear them. A star would be appreciated ⭐️
fast_image_resize should already be the optimal solution; turbo-jpeg is also worth a try.
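For what it's worth, here is a rough sketch of resizing a single-channel mask (e.g. a YOLOv8 proto mask) with fast_image_resize, pinned to its 2.x API; the API changed significantly in later major versions, so treat this as illustrative rather than as usls's actual code:

```rust
use std::num::NonZeroU32;

use fast_image_resize as fr;

/// Resize a single-channel (U8) mask from (src_w, src_h) to (dst_w, dst_h).
/// All dimensions are assumed non-zero.
fn resize_mask(mask: Vec<u8>, src_w: u32, src_h: u32, dst_w: u32, dst_h: u32) -> Vec<u8> {
    let src = fr::Image::from_vec_u8(
        NonZeroU32::new(src_w).unwrap(),
        NonZeroU32::new(src_h).unwrap(),
        mask,
        fr::PixelType::U8,
    )
    .unwrap();
    let mut dst = fr::Image::new(
        NonZeroU32::new(dst_w).unwrap(),
        NonZeroU32::new(dst_h).unwrap(),
        fr::PixelType::U8,
    );
    // Bilinear keeps soft mask edges; ResizeAlg::Nearest is cheaper still if the
    // mask has already been binarized.
    let mut resizer = fr::Resizer::new(fr::ResizeAlg::Convolution(fr::FilterType::Bilinear));
    resizer.resize(&src.view(), &mut dst.view_mut()).unwrap();
    dst.buffer().to_vec()
}
```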