clip.cpp
Support image-only
Use image only for scanning an image and finding its classes
The zsl (zero-shot labeling) example does that. You need to pass candidate class names with the --text argument, and zsl scores those classes.
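Conceptually, the scoring step works something like the sketch below. This is not the actual clip.cpp code, just a rough illustration with made-up embedding values; in the real example the embeddings come from the CLIP image and text encoders.

```cpp
// Conceptual sketch of zero-shot labeling: score each candidate label by the
// cosine similarity between the image embedding and that label's text embedding,
// then softmax the similarities to rank the labels.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

static float cosine_similarity(const std::vector<float> &a, const std::vector<float> &b) {
    float dot = 0.0f, na = 0.0f, nb = 0.0f;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-8f);
}

int main() {
    // Illustrative values only; real embeddings are produced by the CLIP encoders.
    std::vector<float> image_emb = {0.12f, -0.48f, 0.33f, 0.80f};
    std::vector<std::string> labels = {"a photo of a cat", "a photo of a dog"};
    std::vector<std::vector<float>> text_embs = {
        { 0.10f, -0.50f, 0.30f,  0.79f},
        {-0.60f,  0.20f, 0.11f, -0.40f},
    };

    std::vector<float> scores(labels.size());
    float max_score = -1e9f;
    for (size_t i = 0; i < labels.size(); ++i) {
        scores[i] = cosine_similarity(image_emb, text_embs[i]);
        max_score = std::max(max_score, scores[i]);
    }

    // Softmax over the scaled similarities gives a probability-like ranking.
    float sum = 0.0f;
    for (float &s : scores) { s = std::exp((s - max_score) * 100.0f); sum += s; }
    for (size_t i = 0; i < labels.size(); ++i) {
        std::printf("%-24s %.3f\n", labels[i].c_str(), scores[i] / sum);
    }
    return 0;
}
```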
I do not have a general list of candidates to provide
If you don't have candidates, the CLIP model won't work for image classification. Then I guess your best bet would be to hardcode class names from a common dataset such as OpenImages by modifying the zsl example, roughly along the lines of the sketch below.
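For example, a modified zsl could carry a built-in label list instead of reading --text. The names below are just a tiny illustrative subset, not the real OpenImages taxonomy:

```cpp
// Hardcoded candidate labels to score against every image.
// Illustrative subset only; a real list would be generated from the dataset's class names.
#include <cstdio>
#include <string>
#include <vector>

static const std::vector<std::string> candidate_labels = {
    "person", "dog", "cat", "car", "bicycle",
    "tree", "building", "food", "book", "phone",
};

int main() {
    std::printf("scoring against %zu candidate labels\n", candidate_labels.size());
    return 0;
}
```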
p.s.: I'm planning to experiment with a transfer learning method on the edge to train a single head layer on top of the CLIP backbone for image classification, but I don't know when yet.
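The rough idea, if it helps, is something like this. Everything below (dimensions, data, training loop) is made up for illustration; the point is that only the small head gets gradient updates while the CLIP backbone stays frozen and just produces embeddings.

```cpp
// Sketch of training a single linear head on top of frozen CLIP image embeddings.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int emb_dim   = 4;   // real CLIP embeddings are 512/768-dimensional
    const int n_classes = 2;
    const float lr      = 0.1f;

    // Frozen-backbone outputs (would come from the CLIP image encoder) and labels.
    std::vector<std::vector<float>> embs = {{0.9f, 0.1f, 0.0f, 0.2f},
                                            {0.0f, 0.8f, 0.7f, 0.1f}};
    std::vector<int> labels = {0, 1};

    // Trainable head: weights [n_classes][emb_dim] and biases [n_classes].
    std::vector<std::vector<float>> w(n_classes, std::vector<float>(emb_dim, 0.0f));
    std::vector<float> b(n_classes, 0.0f);

    for (int epoch = 0; epoch < 100; ++epoch) {
        for (size_t n = 0; n < embs.size(); ++n) {
            // Forward: logits -> unnormalized softmax.
            std::vector<float> p(n_classes, 0.0f);
            float sum = 0.0f;
            for (int c = 0; c < n_classes; ++c) {
                float z = b[c];
                for (int d = 0; d < emb_dim; ++d) z += w[c][d] * embs[n][d];
                p[c] = std::exp(z);
                sum += p[c];
            }
            // Backward: softmax cross-entropy gradient, SGD update on the head only.
            for (int c = 0; c < n_classes; ++c) {
                float grad = p[c] / sum - (c == labels[n] ? 1.0f : 0.0f);
                for (int d = 0; d < emb_dim; ++d) w[c][d] -= lr * grad * embs[n][d];
                b[c] -= lr * grad;
            }
        }
    }
    std::printf("trained head biases: %.3f %.3f\n", b[0], b[1]);
    return 0;
}
```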
Can you go into more detail?
"common dataset such as OpenImages"
Like go to OpenImages and pass a text prompt with all of its classes into the command line?
Yes, or only the classes that you are actually expecting to appear in the image. This is how zero-shot labeling is supposed to work.
If you could describe your exact use case, I'd try to make a more detailed comment.
Here's an example. I want to create a Discord bot that watches for a reaction on an image and posts a message explaining what the image looks like, for people who have trouble seeing but know what the words mean.
Sounds like you want to feed the image embedding to an LLM (like LLaVA).
Oh got it. This is called image captioning. There exist numerous models for it, but state-of-the-art results come from models like LLaVA. It is basically CLIP + LLaMA bridged with a linear layer. I have an idea of creating another project to combine clip.cpp with llama.cpp to achieve efficient inference of LLaVA, but this might be delayed for a week or so because there are some other features I'd like to implement in this repo before that.
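To give a rough picture of that bridging idea: the CLIP image embedding is projected into the LLM's hidden space by a linear layer and then consumed alongside the text tokens. All dimensions and numbers below are illustrative, not the actual LLaVA weights.

```cpp
// Sketch of the linear projection that bridges a CLIP image embedding
// into the LLM's embedding space (the core of the LLaVA-style design).
#include <cstdio>
#include <vector>

int main() {
    const int clip_dim = 4;   // CLIP vision embedding size (much larger in practice)
    const int llm_dim  = 6;   // LLM hidden size (e.g. 4096 for LLaMA-7B)

    std::vector<float> image_emb = {0.2f, -0.1f, 0.5f, 0.3f};

    // Projection matrix [llm_dim][clip_dim] and bias; these are learned weights
    // in the real model, filled with placeholder values here.
    std::vector<std::vector<float>> proj(llm_dim, std::vector<float>(clip_dim, 0.01f));
    std::vector<float> bias(llm_dim, 0.0f);

    // y = W * x + b : the result is fed to the language model next to text tokens.
    std::vector<float> llm_input(llm_dim, 0.0f);
    for (int i = 0; i < llm_dim; ++i) {
        llm_input[i] = bias[i];
        for (int j = 0; j < clip_dim; ++j) llm_input[i] += proj[i][j] * image_emb[j];
        std::printf("llm_input[%d] = %.4f\n", i, llm_input[i]);
    }
    return 0;
}
```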