
support MiniCPM-V-2.6

Open tc-mb opened this issue 1 year ago • 1 comment

Dear llama.cpp Official,

Hi, I'm writing about our new PR submission for integrating our model MiniCPM-V 2.6 into llama.cpp. MiniCPM-V 2.6 is the latest and most capable model in the MiniCPM-V series. It is stronger than its predecessors and supports multi-image understanding and video understanding.

This version of the model supports video understanding, and I have implemented functions such as video frame extraction in my fork (a rough sketch of this kind of frame extraction appears after the list below). However, because this introduces a dependency on ffmpeg, it may cause environment and compilation issues on other devices. Therefore, I think the work can be split into multiple PR submissions:

  1. This PR first submits the model changes themselves. I hope it can be merged soon, so that the community can start using MiniCPM-V 2.6 via GGUF.
  2. A later PR will add support for video input, which gives us more time to discuss how llama.cpp can best integrate video understanding.
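
For context, the frame extraction mentioned above usually amounts to sampling the video at a fixed rate and handing each frame to the model as an ordinary image. The snippet below is only a minimal sketch assuming ffmpeg is available on the PATH; the helper name and file paths are hypothetical and this is not the implementation in the fork.

```python
# Hedged sketch of video frame extraction via ffmpeg (hypothetical helper,
# not the fork's actual code). Assumes ffmpeg is installed and on the PATH.
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, fps: int = 1) -> list[Path]:
    """Dump `fps` frames per second of `video_path` as JPEGs into `out_dir`."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
         str(out / "frame_%04d.jpg")],
        check=True,
    )
    return sorted(out.glob("frame_*.jpg"))

# Each extracted frame can then be passed to the model like any other image.
frames = extract_frames("video.mp4", "frames", fps=1)
```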

Best regards, MiniCPM-V Official ^_^

tc-mb avatar Aug 10 '24 12:08 tc-mb

waiting for merge

x4080 avatar Aug 10 '24 20:08 x4080

waiting for merge

yorkane avatar Aug 12 '24 15:08 yorkane

waiting for merge

HaishengLiang avatar Aug 15 '24 07:08 HaishengLiang

waiting for merge

nanowell avatar Aug 15 '24 16:08 nanowell

I have opened issue #9066, where I experienced a crash after this pull request was merged. The crash was unrelated to the MiniCPM-V-2.6 model itself. I hope you can reproduce the error.

saket424 avatar Aug 17 '24 20:08 saket424

I have opened issue #9066, where I experienced a crash after this pull request was merged. The crash was unrelated to the MiniCPM-V-2.6 model itself. I hope you can reproduce the error.

Hello, I saw that the issue you mentioned is a crash in llava, but my update only touches the minicpmv parts. I'm not sure about the cause of that issue, but I suspect it may not be a problem with this branch. Could you test whether this branch also crashed before it was merged? Of course, if the problem was indeed introduced by this PR, I will be very happy to help fix it.

tc-mb avatar Aug 19 '24 05:08 tc-mb

@tc-mb The crash is not directly related to your MiniCPM-V 2.6 PR, other than that there was no crash before your PR and there is one after it, owing to some uninitialized variables.

Here is a PR that appears to fix the issue I reported https://github.com/ggerganov/llama.cpp/pull/9082

Sorry for the false alarm

saket424 avatar Aug 19 '24 13:08 saket424

@tc-mb The crash is not directly related to your MiniCPM-V 2.6 PR, other than that there was no crash before your PR and there is one after it, owing to some uninitialized variables.

Here is a PR that appears to fix the issue I reported #9082

Sorry for the false alarm

I'm glad your problem was solved.

tc-mb avatar Aug 19 '24 13:08 tc-mb

@tc-mb Can we use MiniCPM with the context cache, so that we upload an image once and ask multiple questions about the same image?

x4080 avatar Aug 19 '24 20:08 x4080

@tc-mb Can we use MiniCPM with the context cache, so that we upload an image once and ask multiple questions about the same image?

Yes, it's now storing cache.

You can run in interactive mode to ask multiple rounds of questions.

./llama-minicpmv-cli -m ../MiniCPM-V-2_6/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-V-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -i

or modify the minicpmv-cli function (which is more like an example) to achieve the functionality you want.

tc-mb avatar Aug 20 '24 03:08 tc-mb

Eagerly awaiting...

yizhangliu avatar Aug 20 '24 11:08 yizhangliu

@tc-mb Can we use MiniCPM with the context cache, so that we upload an image once and ask multiple questions about the same image?

Yes, it's now storing cache.

You can run in interactive mode to ask multiple rounds of questions.

./llama-minicpmv-cli -m ../MiniCPM-V-2_6/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-V-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -i

or modify the minicpmv-cli function (which is more like an example) to achieve the functionality you want.

cool, thats a great feature, thanks @tc-mb

x4080 avatar Aug 20 '24 20:08 x4080

Very cool! Are GPU operations supported at this time?

dewarrn1 avatar Aug 23 '24 02:08 dewarrn1

Very cool! Are GPU operations supported at this time?

I have tested on Ubuntu with an Nvidia 4090; it works and the speed looks good. You can use it in the following way.

Build with make LLAMA_CUDA=1 and add an appropriate -ngl parameter, for example: ./llama-minicpmv-cli -m ../MiniCPM-V-2_6/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-V-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -p "What is in the image?" -ngl 100

tc-mb avatar Aug 23 '24 03:08 tc-mb

Awesome, thanks!

dewarrn1 avatar Aug 23 '24 05:08 dewarrn1

@tc-mb Can you show us how to serve MiniCPM-V 2.6 with llama-server so we can send it OpenAI-compatible chat completion requests with base64-encoded images?

saket424 avatar Aug 25 '24 15:08 saket424

@tc-mb Can you show us how to serve MiniCPM-V 2.6 with llama-server so we can send it OpenAI-compatible chat completion requests with base64-encoded images?

Sorry, I didn't test the server path when I made this update. I will add support for this capability in the near future.

tc-mb avatar Aug 28 '24 09:08 tc-mb
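
For anyone who lands here later: once llama-server gains multimodal support for this model, an OpenAI-compatible chat completion request with a base64-encoded image would generally look like the sketch below. This is only an illustration: the host/port, model name, and image path are assumptions, and this flow was not yet working at the time of this thread.

```python
# Hedged sketch of an OpenAI-style chat completion request carrying a base64 image.
# Assumes a llama-server instance on localhost:8080 that accepts image_url content
# parts; the host, port, model name and image path are placeholders.
import base64
import json
import urllib.request

with open("xx.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "minicpm-v-2.6",  # placeholder model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in the image?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{img_b64}"}},
        ],
    }],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```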

@tc-mb Could you please add templating info to README-minicpmv2.6.md, similar to the llava-cli templating and llava-1.6 prompting sections? For practical usage it is necessary to know how to organize the user question and the image, and also whether the image should be passed as raw bytes or base64. Thanks!

apepkuss avatar Nov 19 '24 05:11 apepkuss