Oguzhan Gencoglu


Also possibly with [mahotas](https://github.com/luispedro/mahotas), [smil](https://smil.cmm.minesparis.psl.eu/doc/p480.html), or [pyvips](https://github.com/libvips/libvips/wiki/Speed-and-memory-use), which all claim to be very fast for these kinds of operations.
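For reference, a minimal pyvips sketch (the file names and the blur operation are illustrative assumptions, not from the thread). pyvips streams pixels on demand, which is where much of its speed and low memory use comes from:

```python
import pyvips

# Open lazily: access="sequential" streams pixels through the pipeline
# instead of decoding the whole file up front. Paths are placeholders.
image = pyvips.Image.new_from_file("input.png", access="sequential")
blurred = image.gaussblur(2.0)  # Gaussian blur with sigma = 2.0
blurred.write_to_file("output.png")
```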

I haven't tested that (ONNX is a must for my case), but the quantized ONNX model has longer inference time than the full-precision one. So I ditched SegFormer altogether. @Fazankabir
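For context, a sketch of how such a quantized model is typically produced with onnxruntime's dynamic quantization (the file names are placeholders, and the exact quantization settings used in the thread are not known):

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamically quantize the weights of an exported ONNX model to int8.
# Paths are placeholders; the original export settings are an assumption.
quantize_dynamic(
    model_input="segformer.onnx",
    model_output="segformer_int8.onnx",
    weight_type=QuantType.QInt8,
)
```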

And what is the inference speed, @abaoxomtieu? In my use case, the partially quantized model is slower than the non-quantized one on CPU.
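A minimal sketch for timing the two models on CPU with onnxruntime (model paths and input shape are assumptions):

```python
import time
import numpy as np
import onnxruntime as ort

def mean_latency(model_path, x, runs=50):
    # Average single-input CPU inference time over several runs.
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    name = sess.get_inputs()[0].name
    sess.run(None, {name: x})  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        sess.run(None, {name: x})
    return (time.perf_counter() - start) / runs

x = np.random.rand(1, 3, 512, 512).astype(np.float32)  # assumed input shape
print("fp32:", mean_latency("segformer.onnx", x))
print("int8:", mean_latency("segformer_int8.onnx", x))
```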

Hijacking this. Does line refinement mean that, e.g., double lines will be merged into one with the `'optimize': True` config, or something else?

@shahules786 You need to take this seriously to avoid going down the LangChain path. Everybody hates the LangChain docs because they are broken and out of date. Please do not prioritize new...

Same issue with Bedrock models, e.g. `"anthropic.claude-v2"`.

So what is the conclusion? Can we use any LiteLLM-supported model instead of OpenAI?
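Independent of whether the library in question supports it, calling a Bedrock model directly through LiteLLM would look roughly like this sketch (the `bedrock/` prefix is LiteLLM's convention for Bedrock models; the prompt is a placeholder):

```python
from litellm import completion

# Call a Bedrock-hosted Claude model through LiteLLM's unified,
# OpenAI-compatible interface. Assumes AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY, and AWS_REGION_NAME are set in the environment.
response = completion(
    model="bedrock/anthropic.claude-v2",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```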