Cui Junbo

Results: 71 comments by Cui Junbo

Thanks a lot for the feedback. We also found a difference between the llama.cpp and int4 versions, and we are trying to track down the problem. @naifmeh

> Hi naifmeh, to fix the above code you can do this in the file `minicpmv.cpp`, which is located under `examples/minicpmv`.
>
> There what you can do is change...

@naifmeh We have now solved this problem. Please try it; we look forward to your feedback!

MiniCPM-Llama3-V 2.5 can now run with llama.cpp! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) for more details, and here is our model in gguf format: https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf @duanshuaimin @leeaction @tyzero @hexf00

> Could you release an F16 gguf version? The Q4 quantization does not fill the VRAM.
>
> We tried converting it following the readme, but the results were not very good. Please advise.
>
> [readme](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md)
>
> [image: screenshot of the conversion run]
>
> [image: screenshot of the model's output]

It looks like you're using ollama instead of the direct llama.cpp? The current...

@seasoncool Please use our fork at https://github.com/OpenBMB/llama.cpp; the official llama.cpp has not merged our PR yet.
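For reference, building the fork and running the gguf model might look like the following. This is a sketch based on the fork's layout; the branch name, binary name (`minicpmv-cli`), model file names, and flags are assumptions and may differ from your checkout, so check `examples/minicpmv/README.md` in the fork for the exact invocation.

```shell
# Clone the OpenBMB fork on the minicpm-v2.5 branch (branch name assumed)
git clone -b minicpm-v2.5 https://github.com/OpenBMB/llama.cpp
cd llama.cpp

# Build the project, including the example binaries
make

# Run the multimodal CLI against the gguf weights from Hugging Face.
# The binary name, model file names, and flags below are assumptions;
# see examples/minicpmv/README.md in the fork for the exact command.
./minicpmv-cli \
  -m ./MiniCPM-Llama3-V-2_5/ggml-model-Q4_K_M.gguf \
  --mmproj ./MiniCPM-Llama3-V-2_5/mmproj-model-f16.gguf \
  --image ./demo.jpg \
  -p "Describe this image."
```

Note that both the language-model gguf and the vision projector (`--mmproj`) file are needed; running only the language model will drop the image input.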

https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5 @yuanjie-ai

We have noticed some reported issues that stem from replacing MiniCPM-Llama3-V 2.5's adaptive visual encoding with the vanilla fixed-encoding implementation in Ollama & llama.cpp. We are reimplementing this part for Ollama & llama.cpp to...

This issue does not provide a reproducible context, and we need more information to resolve it. If you still need assistance, please provide your environment details and the code you are running to...