WuXiaoxue
How is overly long text handled in the preprocessing stage? Is it simply truncated? I am currently working on a reading-comprehension task and need to do incremental pretraining on my own corpus. Most documents in the corpus are longer than 500 characters, so would it be better to split the documents into sub-sentences before preprocessing?
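A minimal sketch of the splitting idea, assuming a 500-character budget and plain-text documents; the `chunk_document` helper and the `max_len` value are illustrative assumptions, not part of any specific preprocessing pipeline discussed here:

```python
import re

def chunk_document(text: str, max_len: int = 500) -> list[str]:
    """Split a long document into segments no longer than max_len characters,
    cutting at sentence boundaries where possible (assumed behavior)."""
    # Split after Chinese/English sentence-ending punctuation, keeping the delimiter.
    sentences = re.split(r'(?<=[。!?.!?])', text)
    chunks, current = [], ""
    for sent in sentences:
        if not sent:
            continue
        if len(current) + len(sent) <= max_len:
            current += sent
        else:
            if current:
                chunks.append(current)
            # A single sentence longer than max_len is hard-truncated here.
            current = sent if len(sent) <= max_len else sent[:max_len]
    if current:
        chunks.append(current)
    return chunks

doc = "..."  # one document from the corpus
for piece in chunk_document(doc):
    print(len(piece), piece[:30])
```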
### Describe the issue

Issue: When loading weights for llava-v1.6-34b, it says model parameter mismatch.

Command:
```
model_path = "liuhaotian/llava-v1.6-34b"
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # Pytorch non-meta copying warning fills out...
```
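Since the original command is truncated, here is a minimal loading sketch along the lines of the snippet above, assuming the LLaVA repository's `load_pretrained_model` and `get_model_name_from_path` helpers are the intended loading path (an assumption on my part):

```python
# Sketch only: assumes the LLaVA repo's builder utilities are the intended
# loading path; this does not reproduce the truncated issue command verbatim.
import warnings
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/llava-v1.6-34b"
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # silence the noisy PyTorch non-meta copying warning
    tokenizer, model, image_processor, context_len = load_pretrained_model(
        model_path=model_path,
        model_base=None,
        model_name=get_model_name_from_path(model_path),
    )
```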
I've successfully run the `inference.py` program for captioning, and the results are good (almost the same as the example).
```
{
  "video1.mp4": "A red car is parked in a showroom...
```