felix-wang
Could you try this:
```
docker run -i -p 51009:51000 -v $HOME/.cache:/home/cas/.cache -v $PWD:/cas/ --gpus all jinaai/clip-server:master-onnx onnx-flow-custom.yml
```
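If the container comes up, one way to check that the mapped port is reachable is with `clip-client` (a quick sketch, assuming `clip-client` is installed and the flow serves gRPC on the container port 51000 published above; the address is illustrative):
```
from clip_client import Client

# connect to the host port published by the docker command above
c = Client('grpc://0.0.0.0:51009')

# encode a sample sentence; returns a numpy array of embeddings
print(c.encode(['hello world']).shape)
```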
Does the local address work?
Ok, I see. You want to build the `CUDA` image. May I ask you to add the build CLI command to our documentation instead of the README?
@themantalope For your purpose, you can directly use the containerized executor from the hub `jinahub+docker://CLIPTorchEncoder` (https://cloud.jina.ai/executor/gzpbl8jh):
```
jtype: Flow
executors:
  - uses: jinahub+docker://CLIPTorchEncoder
```
In the above example, the executor...
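Such a Flow YAML can then be loaded and served with Jina (a minimal sketch; the filename `flow.yml` and the blocking serve pattern are just for illustration):
```
from jina import Flow

# load the YAML config above and serve it until interrupted
f = Flow.load_config('flow.yml')
with f:
    f.block()
```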
@jemmyshin What's the status of this ticket?
We need to make a decision here: is it still necessary to keep this demo page alive?
Maybe we can define a simple `pydantic` model as the basic data structure:
```
class Document(BaseModel):
    id: str = Field(..., required=True)
    embedding: Union[np.ndarray, list[float]] = Field(..., required=True)
    meta_data: Dict = ...
```
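As a rough illustration of how such a model could be used (a self-contained sketch; the `meta_data` default and the `arbitrary_types_allowed` config are assumptions added so that `np.ndarray` validates, not part of the proposal itself):
```
from typing import Dict, List, Union

import numpy as np
from pydantic import BaseModel, Field


class Document(BaseModel):
    id: str = Field(...)
    embedding: Union[List[float], np.ndarray] = Field(...)
    meta_data: Dict = Field(default_factory=dict)  # assumed default

    class Config:
        arbitrary_types_allowed = True  # lets pydantic accept np.ndarray


doc = Document(id='doc-1', embedding=[0.1, 0.2, 0.3], meta_data={'lang': 'en'})
print(doc.id, doc.embedding)
```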
This is not on our current roadmap. BTW, do you have this requirement in your project?
It might be that the dropout rate is not reset to 0 at inference time.
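In PyTorch-style code this usually means the module was never switched to eval mode, which is what disables dropout (a generic sketch, not the actual model code in question):
```
import torch
import torch.nn as nn

layer = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

layer.eval()                # disables dropout, equivalent to p=0 at inference
with torch.no_grad():
    y1, y2 = layer(x), layer(x)
assert torch.equal(y1, y2)  # outputs are deterministic in eval mode
```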