Anindyadeep
Thanks @arnavsinghvi11 for the review :)
Marking this PR as stale and closing it for now. Apologies for the long wait.
@penguine-ip , @jaywyawhare wants to work on this issue :)
> Sounds great, please add it in this module: https://github.com/confident-ai/deepeval/tree/main/deepeval/check. The entry point should be the check function in check.py, let me know if you have any questions! > >...
@penguine-ip hey, can you assign all the above Metric Request issues to me? Thanks!
Awesome, thanks @penguine-ip :)
Do you want an LLM to judge another LLM here? Could you elaborate on the issue a bit more? Maybe I need to go through the paper to understand this...
Yes, that could be the case.
> The Triton server docker images only have the backends installed: > > ``` > $ docker run -it --gpus all --rm nvcr.io/nvidia/tritonserver:24.03-trtllm-python-py3 bash > > ============================= > == Triton...
I see, and I understand that the reason for doing this is to keep the image size down. However, is it possible to have an image specific to TRT-LLM Python?