# jyC23333

10 issues opened by jyC23333

### System Info

I just set up a local tracing server and changed the port to 8005. ![image](https://github.com/hwchase17/langchain/assets/110331827/b06d5948-1676-46b6-a8e3-3c4e01cf35e8) When I visit localhost:4173, it shows: ![image](https://github.com/hwchase17/langchain/assets/110331827/abae4119-1c2d-4470-9ba2-1986f6762e65) and the error is: ```...
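
A minimal sketch of one likely missing piece, assuming the legacy tracer reads the `LANGCHAIN_TRACING` and `LANGCHAIN_ENDPOINT` environment variables (the default endpoint is port 8000): after moving the API to 8005, the client side has to be pointed at the same port, otherwise the UI at localhost:4173 has no traces to show.

```python
import os

# Assumption: the legacy tracer honors these two variables; with the
# server moved off the default port, the client must be told where it is.
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "http://localhost:8005"

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
llm("ping")  # this call should now show up as a trace in the UI at localhost:4173
```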

### Describe the issue

I want to change the LLM to Qwen, so I wrote a model file following llava_llama.py: ```python # Copyright 2023 Haotian Liu # # Licensed under...
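
For comparison, here is a hypothetical `llava_qwen.py` skeleton that mirrors the class layout of `llava_llama.py`. The `LlavaMeta*` mixins come from the LLaVA repo; the `Qwen2*` classes from `transformers` stand in for the Qwen LLM (the original Qwen used remote-code classes), so treat every name outside `llava_llama.py`'s pattern as an assumption, not the repo's actual code.

```python
import torch.nn as nn
from transformers import (AutoConfig, AutoModelForCausalLM,
                          Qwen2Config, Qwen2Model, Qwen2ForCausalLM)

from llava.model.llava_arch import LlavaMetaModel, LlavaMetaForCausalLM


class LlavaQwenConfig(Qwen2Config):
    model_type = "llava_qwen"


class LlavaQwenModel(LlavaMetaModel, Qwen2Model):
    # multimodal mixin first, LLM backbone second, as in LlavaLlamaModel
    config_class = LlavaQwenConfig


class LlavaQwenForCausalLM(Qwen2ForCausalLM, LlavaMetaForCausalLM):
    config_class = LlavaQwenConfig

    def __init__(self, config):
        super().__init__(config)
        self.model = LlavaQwenModel(config)
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
        self.post_init()

    def get_model(self):
        # required by the LlavaMetaForCausalLM mixin
        return self.model


# Register the new model type so from_pretrained can resolve it.
AutoConfig.register("llava_qwen", LlavaQwenConfig)
AutoModelForCausalLM.register(LlavaQwenConfig, LlavaQwenForCausalLM)
```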

### Question

I find that the setting in ```pretrain.sh``` is mismatched with the paper. As mentioned in the paper, the LLM should be frozen, but in the pretrain script, the LLM weights...
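
On the freeze itself, a small sketch of what the paper's pretraining setup implies, assuming the LLaVA-style attribute names (`get_model()`, `mm_projector`); if the script leaves the LLM trainable, something like this would restore the described behavior:

```python
import torch.nn as nn


def freeze_llm_for_pretraining(model: nn.Module) -> None:
    """Freeze every weight, then re-enable only the vision-language projector.

    Assumes the LLaVA layout where the projector lives at
    model.get_model().mm_projector; adjust the path for other codebases.
    """
    model.requires_grad_(False)  # freeze the LLM (and everything else)
    for p in model.get_model().mm_projector.parameters():
        p.requires_grad = True   # the projector is the only trainable part

    # Sanity check: only projector parameters should remain trainable.
    trainable = [n for n, p in model.named_parameters() if p.requires_grad]
    assert all("mm_projector" in n for n in trainable), trainable
```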

Are there any methods to visualize the attention of the image encoder in CLIP?
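
One common recipe, sketched under the assumption that the CLIP checkpoint is loaded through Hugging Face `transformers`: request the vision tower's attention maps with `output_attentions=True` and reshape the last layer's CLS-to-patch attention into a grid. The file name and the choice of layer are illustrative.

```python
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

name = "openai/clip-vit-base-patch32"
model = CLIPVisionModel.from_pretrained(name)
processor = CLIPImageProcessor.from_pretrained(name)

image = Image.open("cat.png").convert("RGB")  # any test image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions: one (batch, heads, tokens, tokens) tensor per layer;
# token 0 is the CLS token, the rest are image patches.
attn = out.attentions[-1].mean(dim=1)[0]      # average heads -> (tokens, tokens)
cls_to_patches = attn[0, 1:]                  # CLS attention over the patches
side = int(cls_to_patches.numel() ** 0.5)     # 224/32 = 7 patches per side
heatmap = cls_to_patches.reshape(side, side)  # upsample and overlay on the image
```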

Hi, will the code for performing SFT be released?

### Search before asking

- [X] I have searched the Multimodal Maestro [issues](https://github.com/roboflow/multimodal-maestro/issues) and found no similar bug report.

### Bug

Traceback (most recent call last): File "/data/megvii/projects/Qwen-VL/scripts/test_maestro.py", line 7,...


Hi, reading the MiniCPM 2.5 technical report, I see the training goes through quite a few stages, so I'd like to ask two questions:
1. Were both 2.6 and 2.5 trained with the three stages described in the technical report?
2. Could you provide the loss curves of each training stage as a reference?

Hi, I can only find a subset of the images from the dataset on [Hugging Face](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), and I can't find multi-image data in the dataset. For example, tqa is a multi-image...
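
A quick way to check what is actually on the Hub, sketched with the `datasets` library; the config name `"tqa"` is copied from the issue and may not match the exact released config string, so list the configs first.

```python
from datasets import load_dataset, get_dataset_config_names

# Enumerate every released subset of the dataset.
configs = get_dataset_config_names("lmms-lab/LLaVA-OneVision-Data")
print(configs)

# Load one subset (name assumed from the issue) and inspect a sample
# to see whether it carries one image or several per record.
ds = load_dataset("lmms-lab/LLaVA-OneVision-Data", "tqa", split="train")
print(ds[0].keys())
```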