@fanqiNO1
Can `convert_xtuner_weights_to_llava.py` and `convert_xtuner_weights_to_hf.py` support the llava-next model?
The script convert_xtuner_weights_to_llava.py is only compatible with the main branch. The refactor_llava branch is still under development, and frequent changes may occur.
@awzhgw The `convert_xtuner_weights_to_llava.py` script only supports LLMs with the llama architecture. We will release a conversion script for phi3 soon; phi3 first needs to be converted to the llama format.
There are some issues in the Files Changed tab, including dataset-related changes.
It would be best to add some comments illustrating the expected format of the input ckpt dir.
Could ref_model simply be rebuilt from the llm's config? As for the NaN loss, @xiaohangguo may need to help check the formula details.
I apologize for the inconvenience, but I was unable to reproduce the issue. If #554 were unresolved, execution would not reach the `_fuse_fx` step. Perhaps you have made some...
`xtuner chat` is a simple command-line tool developed for inspecting training results. If you want to chat with multiple images, you can use inference tools such as `ollama`...
@zodiacg Yes, we are developing this feature in other PRs; cumbersome model conversion will no longer be needed, and `xtuner-llava` will connect directly to the inference backend.
Add a dedicated README for 110B.