Oguzhan Gencoglu
Maybe also for OpenCV.
Also possibly with [mahotas](https://github.com/luispedro/mahotas), [smil](https://smil.cmm.minesparis.psl.eu/doc/p480.html), or [pyvips](https://github.com/libvips/libvips/wiki/Speed-and-memory-use), which all claim to be very fast for these kinds of operations.
`NOT_IMPLEMENTED : Could not find an implementation for ConvInteger(10) node with name 'Conv_0_quant'`
@fazankabir No solution found so far.
I haven't tested that (ONNX is a must for my case), but the quantized ONNX model has longer inference time than the full-precision one, so I ditched SegFormer altogether. @Fazankabir
And what is the inference speed, @abaoxomtieu? In my use case, the partially quantized model is slower than the non-quantized one on CPU.
Hijacking this: does line refinement mean that e.g. double lines will be merged into one with the `'optimize': True` config, or something else?
@shahules786 You need to take this seriously to avoid going down the LangChain path: everybody hates the LangChain docs because they are broken and out of date. Please do not prioritize new...
Same issue with Bedrock models, e.g. `"anthropic.claude-v2"`
So what is the conclusion? Can we use any LiteLLM-supported model instead of OpenAI?