Amit Kumar Mondal

Results 47 comments of Amit Kumar Mondal

I now have only one endpoint - ```python @router.post(path="/ask", name="Chat Endpoint", description="The main endpoint to ask a question to the foundation model", summary="Endpoint to ask question", tags=["chat", "ask"]) async...

I finally came up with the following endpoint: ```python @router.post(path="/ask", name="Chat Endpoint", description="The main endpoint to ask a question to the foundation model", summary="Endpoint to ask question", tags=["chat", "ask"])...

@ajndkr I tried the following: ## Scenario 1 (`run_mode="sync"`): ```python from typing import Annotated, Any from fastapi import Depends, Query, Body from kink import di from lanarky.adapters.langchain.callbacks import TokenStreamingCallbackHandler, SourceDocumentsStreamingCallbackHandler,...

@ajndkr Thanks a lot for your continuous assistance! Looking forward to your further analysis 👍

I tried two different versions of the output key: ## 1. with `text` ```python from typing import Annotated from fastapi import Depends, Query, Body from kink import di from lanarky.adapters.langchain.callbacks import...
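The output key matters because a chain's result is a mapping, and the streaming layer looks the answer up under one specific key. A toy, self-contained illustration of the failure mode (this is not Lanarky's or LangChain's actual lookup code; `extract_answer` and the default key are assumptions for illustration):

```python
def extract_answer(chain_result: dict, output_key: str = "text") -> str:
    # If the chain was configured with a different output key (e.g. "answer"),
    # the lookup fails even though the chain itself finished successfully.
    if output_key not in chain_result:
        raise KeyError(
            f"output key {output_key!r} not found; available keys: {sorted(chain_result)}"
        )
    return chain_result[output_key]
```

So when switching the chain's output key, the consumer side has to be switched to the same key, or the answer is silently (or loudly) lost.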

Also note that if I run the above-mentioned code in ASYNC mode (`ChainRunMode.ASYNC`), the chain finishes successfully, but the tokens aren't streamed as the model generates them.
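The symptom described here is the difference between forwarding each token as it is produced and only returning the joined result once the chain completes. A library-agnostic sketch in plain `asyncio` (not Lanarky's API; the token list is a stand-in for a real model):

```python
import asyncio
from typing import AsyncIterator

async def generate_tokens() -> AsyncIterator[str]:
    # Stand-in for an LLM emitting tokens one at a time
    for token in ["Hello", " ", "world"]:
        await asyncio.sleep(0)  # simulate model latency
        yield token

async def streamed() -> list[str]:
    # Streaming path: each token is forwarded the moment it arrives
    chunks = []
    async for token in generate_tokens():
        chunks.append(token)  # in a real app, write each chunk to the HTTP response here
    return chunks

async def buffered() -> str:
    # Buffered path: the client sees nothing until the full answer is ready,
    # which matches the ASYNC-mode behavior described above
    return "".join([t async for t in generate_tokens()])
```

Both paths end with the same text; only the timing the client observes differs.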

@ajndkr I tried that too (the following code): ```python from typing import Annotated, Any from fastapi import Depends, Query, Body from kink import di from lanarky.adapters.langchain.callbacks import TokenStreamingCallbackHandler, get_token_data from...

Just an interesting observation: when I send the first HTTP request (using the aforementioned Lanarky-based code) to Vertex AI (Chat Bison) to answer a question, the chain finishes successfully,...

I have also added `on_llm_error` and `on_retriever_error` to check whether any error occurred, but these aren't invoked at all. **Another interesting observation**: this happens only when multiple tokens are generated,...
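The error hooks mentioned above follow the usual callback-handler pattern: the runtime calls `on_llm_error` only if it actually catches an exception, so a stream that stops without raising leaves the hook untouched. A minimal, self-contained sketch of that wiring (not LangChain's actual base class; the class and `run_llm` driver are assumptions for illustration):

```python
class StreamingCallbackHandler:
    """Minimal stand-in for an LLM callback handler with an error hook."""

    def __init__(self) -> None:
        self.tokens: list[str] = []
        self.errors: list[Exception] = []

    def on_llm_new_token(self, token: str) -> None:
        self.tokens.append(token)

    def on_llm_error(self, error: Exception) -> None:
        # Only called when the driver catches an exception; a stream that
        # silently stops (as observed above) never reaches this hook
        self.errors.append(error)

def run_llm(prompt: str, handler: StreamingCallbackHandler) -> str:
    # Toy "model" that emits one token per word and routes failures to the hook
    try:
        for word in prompt.split():
            handler.on_llm_new_token(word)
        return " ".join(handler.tokens)
    except Exception as exc:
        handler.on_llm_error(exc)
        raise
```

This is consistent with the observation: if the underlying call hangs or drops the stream without raising, `errors` stays empty even though no further tokens arrive.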