张志鸿(zhang zhihong)
Hello, thank you for your great work. I am reproducing your work on the **DTU** dataset **using an MLP-parameterized SDF.** Do I need to **use positional encoding like NeRF when...
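For reference, NeRF-style positional encoding maps each input coordinate to a bank of sine/cosine features at exponentially increasing frequencies before it is fed to the MLP. The sketch below is a minimal NumPy illustration of that mapping; the function name, the frequency count, and the choice to keep the raw input alongside the encoded features are assumptions for illustration, not the exact setup used in the work being reproduced.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style positional encoding: for each coordinate, append
    sin(2^k * pi * x) and cos(2^k * pi * x) for k = 0..num_freqs-1."""
    x = np.asarray(x, dtype=np.float64)
    feats = [x]  # many implementations also keep the raw input (an assumption here)
    for k in range(num_freqs):
        freq = (2.0 ** k) * np.pi
        feats.append(np.sin(freq * x))
        feats.append(np.cos(freq * x))
    return np.concatenate(feats, axis=-1)

# A 3-D point becomes a 3 + 3 * 2 * num_freqs = 39-dimensional feature vector.
p = np.array([0.1, -0.4, 0.7])
print(positional_encoding(p, num_freqs=6).shape)  # (39,)
```

Whether such an encoding is needed for an SDF network depends on the method; some SDF-based pipelines use it to recover high-frequency geometry, while others rely on network initialization instead.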
I'd like to ask whether swift supports training lmms-lab/llava-onevision-qwen2-7b-ov. The official swift documentation lists llava-hf/llava-onevision-qwen2-7b-ov-hf as supported for training — what is the difference between these two models on Hugging Face? Looking forward to your answer! @Jintao-Huang
@yuhangzang @panzhang0212 My environment was set up following readme.txt, and all package versions match. I have downloaded the weights to the local directory ./internlm-xcomposer2d5-7b-reward. Below is the inference code:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    "./internlm-xcomposer2d5-7b-reward",
    device_map="cuda",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "./internlm-xcomposer2d5-7b-reward", trust_remote_code=True
)
model.tokenizer = tokenizer

chat_1 = [
    {"role":...
```
### Required prerequisites - [x] I have read the documentation. - [x] I have searched the [Issue Tracker](https://github.com/PKU-Alignment/align-anything/issues) and [Discussions](https://github.com/PKU-Alignment/align-anything/discussions) and believe this hasn't already been reported. (+1 or comment...