hahmad2008
Thanks @massquantity. So after converting the dataframe user ids into these mapped ones, I can guarantee that the same user id value stays the same when I do prediction using...
Thanks, @massquantity. So for mapping the original data ids to the LibRecommender model, do I need to consider only the ids in the train_data? If so, how to...
@rkooo567 I didn't try that, as openllm has an issue with upgrading vllm.
@rkooo567 I didn't get what you mean by OSS vllm. Could you clarify?
@rkooo567, I think [this](https://github.com/vllm-project/vllm/issues/3561#issue-2201316954) can help me proceed. Thanks, @rkooo567.
@rkooo567 I am trying to add chat completion to my fastapi service, so I used the openai entrypoint:
```
@app.post("/v1/chat/completions")
async def create_chat_completion(request: ChatCompletionRequest, raw_request: Request):
    generator = await...
```
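For context, the pattern the copied handler follows can be sketched with just the standard library. This is a hypothetical stand-in, not vLLM's actual API: `fake_engine_generate` replaces the real engine's streaming generator, and the handler collects OpenAI-style delta chunks the way the entrypoint streams them.

```python
# Minimal sketch of a chat-completions style streaming handler.
# fake_engine_generate is a stand-in for the vLLM engine's async
# token stream; the real entrypoint wires this into FastAPI.
import asyncio
import json

async def fake_engine_generate(prompt: str):
    # Yield one "token" per word, simulating incremental generation.
    for token in prompt.split():
        await asyncio.sleep(0)  # yield control, as a real engine would
        yield token

async def create_chat_completion(request: dict) -> list[str]:
    # Build a generator from the engine, then drain it into
    # OpenAI-style streaming chunks ({"choices": [{"delta": ...}]}).
    generator = fake_engine_generate(request["messages"][-1]["content"])
    chunks = []
    async for token in generator:
        chunks.append(json.dumps({"choices": [{"delta": {"content": token}}]}))
    return chunks

chunks = asyncio.run(create_chat_completion(
    {"messages": [{"role": "user", "content": "hello streaming world"}]}
))
print(len(chunks))  # one chunk per token
```

In the real entrypoint the chunks would be written to a `StreamingResponse` instead of collected into a list; the control flow is otherwise the same.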
@rkooo567 could you please check this as well?
@youkaichao could you please check this?
@rkooo567 I am adding these files from the [openai entrypoint](https://github.com/vllm-project/vllm/tree/main/vllm/entrypoints/openai) in the same directory as `ray_serv.py`.

`ray_serv.py`:
```
from ray import serve
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse...
```
@rkooo567 I used the same approach to add `generate_stream`: I just copied it from the vllm entrypoint and it worked fine. But here I believe the problem is what is...