M. Yusuf Sarıgöz

Results: 97 comments by M. Yusuf Sarıgöz

This is ready for review now. Working usage is in monatis/clip.cpp#30. As future work, I'll implement batch inference for text encoding in clip.cpp, which will require some extra work...

Fixed the conflicts after syncing with llama.cpp yesterday.

@okpatil4u I updated the benchmark utility to make use of the proposed batch inference in monatis/clip.cpp#30 and got a ~1.4x speedup in per-image inference time compared to the main branch...

@okpatil4u Currently the signature of the batch encoding function is: `bool clip_image_batch_encode(const clip_ctx *ctx, int n_threads, const std::vector<clip_image_f32> &imgs, float *vec)`, and the batch size is essentially `imgs.size()`, i.e., it...
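For concreteness, here is a hedged usage sketch built around that signature. The vector's element type was stripped in the snippet above and is assumed to be `clip_image_f32`; `clip_model_load`, `clip_free`, the model path, and the image loading/preprocessing helpers mentioned in the comments are stand-ins for whatever the current clip.cpp headers actually expose, and the embedding size is model dependent.

```cpp
// Minimal sketch, not verbatim from clip.cpp: only clip_image_batch_encode
// matches the signature quoted above; the loader/cleanup calls and the model
// path are assumptions to be checked against the repository headers.
#include <vector>
#include "clip.h"

int main() {
    clip_ctx * ctx = clip_model_load("models/clip-vit-b-32.gguf", /*verbosity=*/1); // assumed loader

    std::vector<clip_image_f32> imgs;   // preprocessed images; the batch size is imgs.size()
    // ... fill `imgs` via the image loading + preprocessing helpers (assumed) ...

    const int vec_dim = 512;            // embedding size of the vision encoder (model dependent)
    std::vector<float> vec(imgs.size() * vec_dim);

    // one call encodes the whole batch; embeddings are written back to back into `vec`
    clip_image_batch_encode(ctx, /*n_threads=*/4, imgs, vec.data());

    clip_free(ctx); // assumed cleanup
    return 0;
}
```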

@ggerganov conflicts fixed and tests passed. Working usage is in the `benchmark` binary: https://github.com/monatis/clip.cpp/blob/8c09113e154b8f3a589a47d5780a19e4546c227a/tests/benchmark.cpp#L122-L126

Are we talking about some kind of contributing guide, a guide for potential contributors, here? If so, I think I can draft one. - `ggml_tensor` in short. - Memory...
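Since the truncated outline hints at topics like `ggml_tensor` and memory management, a minimal sketch of the flow such a guide would walk through is shown below. It is only illustrative and is written against a recent ggml API; function names such as `ggml_new_graph` and `ggml_graph_compute_with_ctx` have shifted between ggml versions, so treat them as assumptions to check against the current headers.

```cpp
#include <cstdio>
#include "ggml.h"

int main() {
    // every ggml_tensor lives inside a pre-allocated memory pool owned by a context
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // a ggml_tensor is metadata (type, shape, strides) plus a pointer into that pool
    struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 2);
    struct ggml_tensor * b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 2);
    ggml_set_f32(a, 1.0f);
    ggml_set_f32(b, 2.0f);

    // operations only record nodes in a compute graph; nothing is evaluated yet
    struct ggml_tensor * c = ggml_mul_mat(ctx, a, b); // 2x2 result

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/4);

    printf("c[0] = %f\n", ggml_get_f32_1d(c, 0)); // dot product over 4 elements: 4 * 1 * 2 = 8

    ggml_free(ctx);
    return 0;
}
```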

Hi, I'm the contributor of the original LLaVA support in GGML/GGUF, and this model seems to be pretty amazing. Would like to get on this, but I couldn't find enough...

@vikhyat Awesome, thanks! Will have a look at it this week and keep you updated.

Hi @tiangolo, any plan to merge this PR? Or should we add it to our fork?

I forked @cbhagl's `refactor-frontend` branch and then pulled @tiangolo's master and pushed it into my [fork](https://github.com/monatis/full-stack-fastapi-postgresql). Anyone who wants to start with Vuetify 2 can use this one until this...