FuHTong

Results: 9 comments by FuHTong

Have you solved it yet? What was causing it? I used the YOLOv5s model and got an error from load_onnx when setting up the outputs.

> Note that the Anti-UAV dataset is already released

I am sorry, I may have tried to download too many times and the Baidu cloud connection failed. Sorry for the...

Thank you for your reply. The first error occurred when I ran video loading, so I would like to update mpp.

    type:test.h264
    mpp[7266]: mpp_info: mpp version: 41c9c809 author: sayon.chen 2020-12-24 [mpp2str]: Add rc_gop_mode 2 str
    mpp[7266]: mpp_rt: NOT found ion allocator
    mpp[7266]: mpp_rt: found drm allocator
    mpp[7266]: mpp_platform: can not...

Hello, I ran the already-installed mpp_platform_test and it reported an error:

    mpp_log: can not found match soc name: rockchip,rk3588-evb4-lp4-v10 rockchip,rk3588
    mpp_plat_test: chip name: rockchip,rk3588-evb4-lp4-v10 rockchip,rk3588
    mpp_plat_test:
    mpp_plat_test: chip vcodec type 00000000
    mpp_plat_test:
    mpp_plat_test: start probing decoder device name:
    mpp_plat_test: ...

I have the same doubt, although the open-source code appears to use self-distillation. I also hope the author can answer; thank you very much.

@yuyu19970716 Thank you very much! At present I have done some work on downstream tasks and found that the pre-training stage may have a large impact on downstream tasks, for...

@khpanb Thank you for your open-source project. I have starred it and will study it seriously.

I ran into the same problem here. My current understanding is that it is caused by a vocab_size mismatch: the tokenizer reports self.tokenizer.n_vocab as 151860, which is the length of qwen.tiktoken (151643) plus 217 special tokens, while the model's config file has "vocab_size": 151936. That is why Qwen-VL cannot be aligned after LoRA fine-tuning: 76 tokens are still missing. I don't know what the remaining 76 tokens are, so I took the liberty of modifying line 45 of **tokenization_qwen.py** in the Qwen-VL folder: `EXTRAS = tuple((f"<|extra_{i}|>" for i in range(281)))`, which grows EXTRAS from the original 205 to 281 and pads in the 76 missing tokens, then continued training. For me this workaround is enough to complete the model merge.
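To make the arithmetic above concrete, here is a minimal sketch. The counts come from the comment itself; the `<|extra_{i}|>` token template is restored from Qwen's released tokenization_qwen.py, so treat it as an assumption if your copy differs.

```python
# Sketch of the vocab-size bookkeeping described above. All counts are the
# ones quoted in the comment; <|extra_{i}|> is Qwen's extra-token template
# (an assumption restored from the released tokenization_qwen.py).
N_BPE = 151643         # entries in qwen.tiktoken
N_SPECIAL = 217        # special tokens counted by the tokenizer
CONFIG_VOCAB = 151936  # "vocab_size" in the model config

n_vocab = N_BPE + N_SPECIAL       # 151860, what self.tokenizer.n_vocab reports
missing = CONFIG_VOCAB - n_vocab  # 76 tokens unaccounted for

# The patch: grow EXTRAS from 205 to 281 (= 205 + 76) placeholder tokens.
EXTRAS = tuple(f"<|extra_{i}|>" for i in range(281))
assert len(EXTRAS) - 205 == missing  # fills exactly the 76-token gap
```

Note that this only pads the vocabulary up to the configured size; as the comment says, what the official 76 reserved tokens actually are remains unknown.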

@drenched9 Could you configure your data like this, as a multi-turn conversation for training?

    [
      { "from": "user", "value": "Picture 1: 1.jpeg\n描述一下这张图片" },
      { "from": "assistant", "value": "这是一只猫" },
      { "from": "user", "value": "这是猫在干什么?" },
      { "from": "assistant", "value": "它正蹲在草地上" },
      { ...
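For reference, a complete record in this style might look like the sketch below. The outer `id`/`conversations` wrapper and the `<img>...</img>` image reference follow Qwen-VL's published fine-tuning examples and are assumptions here, not quotes from this thread.

```python
import json

# One multi-turn record in the Qwen-VL fine-tuning layout (a sketch; the
# "id"/"conversations" keys and <img>...</img> tags follow Qwen-VL's
# fine-tuning examples and may differ in other repo versions).
record = {
    "id": "identity_0",  # hypothetical sample id
    "conversations": [
        {"from": "user", "value": "Picture 1: <img>1.jpeg</img>\n描述一下这张图片"},  # "Describe this picture"
        {"from": "assistant", "value": "这是一只猫"},                                  # "This is a cat"
        {"from": "user", "value": "这只猫在干什么?"},                                  # "What is the cat doing?"
        {"from": "assistant", "value": "它正蹲在草地上"},                              # "It is crouching on the grass"
    ],
}

# The fine-tuning scripts expect a JSON list of such records.
with open("train.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)
```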