
LangServe 🦜️🏓

Results: 144 langserve issues

Hello, I built a simple LangChain app using `ConversationalRetrievalChain` and `langserve`. It works great with the `invoke` API. However, when it comes to the `stream` API, it returns the entire answer...
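To illustrate the behavior being reported (in plain Python, not LangServe internals): a runnable that truly streams yields partial chunks incrementally, while one that only buffers emits the whole answer as a single chunk. The function names here are hypothetical.

```python
from typing import Iterator

def buffered_answer(question: str) -> Iterator[str]:
    # Non-streaming behavior: the complete answer arrives as one chunk,
    # which is what the issue describes for the `stream` endpoint.
    yield "The answer arrives piece by piece."

def streamed_answer(question: str) -> Iterator[str]:
    # True streaming: partial chunks (e.g. tokens) arrive incrementally.
    for token in ["The ", "answer ", "arrives ", "piece ", "by ", "piece."]:
        yield token

print(len(list(buffered_answer("q"))))   # 1 chunk
print(len(list(streamed_answer("q"))))   # 6 chunks
```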

- [x] Add examples
- [ ] Update TOC in README.md for new examples

![image](https://github.com/langchain-ai/langserve/assets/3205522/92e692ba-791b-4d57-b8ff-5f98fee5da07)

playground

A lot of folks want to use LangServe to deploy local LLMs (https://github.com/langchain-ai/langserve/discussions/410) We have an example that shows how to use ollama: https://github.com/langchain-ai/langserve/blob/main/examples/local_llm/server.py But this is only good for...

help wanted

## Issue Description:

The `on_event` decorator used in [langserve/server.py](https://github.com/langchain-ai/langserve/blob/90bf613711c873e640bfa3224d013532744288a4/langserve/server.py#L46) is deprecated. In current FastAPI versions, lifespan event handlers should be used instead of `on_event`. This change is in...

Hi! I found that `include_callback_events` doesn't work in `stream` / `stream_log`. I need to check token counts when I call `stream` / `stream_log`. Is this intended? Many thanks :)

enhancement

I just found that there are validation errors when I use `AgentExecutor` as the runnable. I propose adding validation models for `AgentStart` and `AgentFinish`. ``` diff --git a/langserve/validation.py...
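A hedged sketch of what such a validation model could look like with Pydantic; the field names below mirror LangChain's `AgentFinish` schema but are assumptions for illustration, not the actual patch from the truncated diff:

```python
from typing import Any, Dict
from pydantic import BaseModel

class AgentFinish(BaseModel):
    # Hypothetical mirror of langchain's AgentFinish schema:
    # the final outputs of the agent plus the raw LLM log line.
    return_values: Dict[str, Any]
    log: str

# A payload shaped like this would now validate instead of erroring.
finish = AgentFinish(return_values={"output": "42"}, log="Final answer: 42")
```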

I'm having an issue with LangChain receiving more than one request at once (send one request, then send another before getting a response). My setup: I'm using vLLM as the inference engine...

There's this warning when I run the code ``` LangChainDeprecationWarning: The class `langchain_community.chat_models.openai.ChatOpenAI` was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class...

Updated 2023-11-16:
- [x] Examples of chat history persisted on the backend
- [ ] Add more ingestion options for files
- [ ] Potentially add storage for runnable configuration options...