Phi-3CookBook
Phi-3-Vision Batch Inference Prompt format
This issue is for a: (mark with an x)
- [ ] bug report -> please search issues before submitting
- [x] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
I can see that this feature request ticket has been marked as completed. Does this mean Phi-3-vision now supports batch inference? If so, could you please provide documentation? I was not able to find instructions/docs on how to do batch inference with Phi-3-vision, especially what the prompt format should be. I tried replicating the single-image prompt format, but the Processor() does not accept a list of prompts.
Expected/desired behavior
Can do batch inference: input: [ {img1, prompt1}, {img2, prompt2}, ...] output: [{response1}, {response2}, ...]
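In the meantime, a possible workaround (a sketch, not confirmed batch support) is to format each (image, prompt) pair with the single-image chat template from the Phi-3-vision model card and run the pairs one at a time. The `build_prompt`/`run_batch` helper names below are hypothetical, and the actual `processor`/`model.generate` calls are commented out since whether the processor accepts a list of prompts is exactly the open question here:

```python
# Per-item workaround sketch: format each (image, prompt) pair with the
# single-image template documented on the Phi-3-vision model card, then
# loop instead of relying on true batched processing.

def build_prompt(user_text: str, image_index: int = 1) -> str:
    """Single-image chat template used by Phi-3-vision (per the model card)."""
    return f"<|user|>\n<|image_{image_index}|>\n{user_text}<|end|>\n<|assistant|>\n"

def run_batch(pairs):
    """pairs: list of (image, prompt) tuples. Returns one response per pair."""
    responses = []
    for image, text in pairs:
        prompt = build_prompt(text)
        # Real inference would look roughly like (hypothetical, untested):
        # inputs = processor(prompt, [image], return_tensors="pt")
        # out = model.generate(**inputs)
        # responses.append(processor.decode(out[0], skip_special_tokens=True))
        responses.append(prompt)  # placeholder so the sketch is self-contained
    return responses
```

This trades throughput for correctness until an official batched prompt format is documented.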