Vitaly Baldeev
@xadupre The code mentioned in this [issue](https://github.com/microsoft/onnxruntime/issues/17795) reproduces this bug:

```python
from sklearn.feature_extraction.text import CountVectorizer
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import StringTensorType
from sklearn.pipeline import Pipeline
from sklearn.linear_model import...
```
@xadupre Hi! Any updates on this problem? As I understand it, the quick fix would be to process every string independently. Right?
My tests show that InferenceSession.run is twice as slow without vectorization: 400 rps instead of 800 rps on my server.
> If you are using a loop, it is not really surprising. There is no parallelization even though each row is processed independently.

Can you suggest some improvement in this...
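Since each row is processed independently, one way to recover some throughput when looping is to run the per-row calls in a thread pool. This is a minimal sketch of the pattern; `process_row` is a hypothetical stand-in for the real per-row `session.run(...)` call on an onnxruntime `InferenceSession` (onnxruntime generally releases the GIL during inference, so threads can help):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a per-row inference call such as
# session.run(None, {"input": [[row]]}) on an InferenceSession.
def process_row(row: str) -> int:
    return len(row.split())

def process_batch(rows, max_workers=4):
    # Rows are independent, so the per-row calls can run in parallel threads.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_row, rows))

print(process_batch(["hello world", "one two three"]))  # [2, 3]
```

This keeps results in the input order, which matters when mapping predictions back to rows.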
Also, I want to note that I don't use the stop_words parameter. How exactly do I set the stop_words parameter to empty or None when converting to an ONNX model?
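For reference, on the scikit-learn side `stop_words=None` is already the default for `CountVectorizer`, meaning no tokens are filtered; a small sketch of the difference:

```python
from sklearn.feature_extraction.text import CountVectorizer

# stop_words=None (the default): no tokens are filtered out.
vec = CountVectorizer(stop_words=None)
vec.fit(["the cat sat on the mat"])
print(sorted(vec.vocabulary_))  # ['cat', 'mat', 'on', 'sat', 'the']

# With stop_words="english", common words like "the" and "on" are dropped.
vec_sw = CountVectorizer(stop_words="english")
vec_sw.fit(["the cat sat on the mat"])
print(sorted(vec_sw.vocabulary_))
```

Whether the converter picks this up correctly during `convert_sklearn` is exactly what the linked issue is about.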
@xadupre What do you mean by a custom kernel, exactly?
Hello everyone! As I understand it, an official asyncio client for Python has not been developed yet. So which programming language and library should I use to make asynchronous requests to ClickHouse?
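One option from Python is to go through ClickHouse's HTTP interface (default port 8123) with any async HTTP client such as aiohttp. The sketch below shows only the asyncio fan-out pattern; `run_query` is a stub standing in for the real HTTP call, so the example stays self-contained:

```python
import asyncio

async def run_query(sql: str) -> str:
    # Stub for an async HTTP request, e.g. a POST to
    # http://localhost:8123/ with the query in the body via aiohttp.
    await asyncio.sleep(0)  # simulate network I/O
    return f"result of: {sql}"

async def main():
    queries = ["SELECT 1", "SELECT 2", "SELECT 3"]
    # gather() runs the independent queries concurrently on one event loop.
    return await asyncio.gather(*(run_query(q) for q in queries))

print(asyncio.run(main()))
```

The same pattern applies unchanged once the stub is replaced with a real HTTP call.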