
[Feature Request] Easy webserver with FastAPI

Open tiagoefreitas opened this issue 2 years ago • 7 comments

A built-in webserver that serves prompts as APIs with FastAPI.

lambdaprompt is similar to langchain prompt templates and implements this easily:

https://github.com/approximatelabs/lambdaprompt
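
For concreteness, here is a minimal sketch of what such an endpoint might look like, assuming a LangChain PromptTemplate; the joke_prompt template and the route are made-up examples, not an existing API.

from fastapi import FastAPI
from langchain.prompts import PromptTemplate

app = FastAPI()

# Hypothetical example template; a built-in server would presumably
# register these automatically instead of declaring them by hand.
joke_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Tell me a joke about {topic}.",
)

@app.get("/prompts/joke")
async def render_joke(topic: str):
    # FastAPI maps the "topic" query parameter onto the function argument.
    return {"prompt": joke_prompt.format(topic=topic)}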

I don't have any experience, so I can't do a PR right now, but if no one else does, I will try.

tiagoefreitas avatar Jan 26 '23 01:01 tiagoefreitas

I know it's easy to create individual APIs for each prompt, but something automated that also saves queries and results to a database would be good.

Then we could build something like https://promptable.ai on top, in a standard way.
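
For the save-to-database part, a minimal sketch assuming a local SQLite file (the table name and columns are made up):

import json
import sqlite3

conn = sqlite3.connect("prompt_log.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS calls (prompt_name TEXT, params TEXT, result TEXT)"
)

def log_call(prompt_name: str, params: dict, result: str) -> None:
    # Record the inputs and the rendered prompt (or LLM output) for later review.
    conn.execute(
        "INSERT INTO calls VALUES (?, ?, ?)",
        (prompt_name, json.dumps(params), result),
    )
    conn.commit()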

tiagoefreitas avatar Jan 26 '23 01:01 tiagoefreitas

I have been playing with LangChain and have experience building FastAPI apps; I'm interested in taking this on.

aschwa avatar Jan 31 '23 15:01 aschwa

I built a cool generic API to call any prompt with an arbitrary number of parameters. I still need to do the same for calling the LLM.

I have zero experience with FastAPI and haven't programmed for a few years, but ChatGPT + Google do wonders.


from fastapi import FastAPI, HTTPException, Request
# Import the modules containing your prompt template classes here, e.g.:
# from api.prompts import ...

app = FastAPI()

@app.get("/generatePrompt")
async def generatePrompt(request: Request):
    prompt_name = request.query_params.get("prompt")
    if not prompt_name:
        raise HTTPException(status_code=400, detail="Missing 'prompt' query parameter.")

    # The template class must live in a module with the same name,
    # imported above so that it is reachable through globals().
    prompt_module = globals().get(prompt_name)
    prompt_template_class = getattr(prompt_module, prompt_name, None)
    if prompt_template_class is None:
        raise HTTPException(status_code=404, detail=f"Prompt template {prompt_name} not found.")

    # Every query parameter except "prompt" becomes an input variable.
    params = dict(request.query_params)
    input_variables = [k for k in params if k != "prompt"]

    prompt_template = prompt_template_class(input_variables=input_variables)
    prompt = prompt_template.format(**{k: v for k, v in params.items() if k != "prompt"})

    return {"prompt": prompt}

This instantiates the necessary template with the input variables taken from the query parameters. The "prompt" parameter is the name of the template class, which must live in a file/module with the same name. I'm sure this can be improved, as it was built for my use case.
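
For example, with a hypothetical JokePrompt template class defined in JokePrompt.py and imported at the top, a request would look like:

GET /generatePrompt?prompt=JokePrompt&topic=cats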

I can work on a PR, as I would like to be the author, but I've never done a PR before; any pointers on how to package this for langchain would be helpful.

tiagoefreitas avatar Jan 31 '23 17:01 tiagoefreitas

I also want to add a way to get the LLM to make up a template if one doesn't exist, so the frontend always works even if the backend doesn't (I saw this idea somewhere on Twitter, and it's awesome; I will credit the author later).
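
A rough sketch of that fallback, assuming an OpenAI LLM through LangChain (the meta-prompt wording and the function name are made up):

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)

def make_up_template(prompt_name: str, input_variables: list) -> PromptTemplate:
    # Ask the model to invent a template that uses the given {placeholders}.
    meta = (
        f"Write a prompt template called '{prompt_name}' that uses exactly these "
        f"placeholders in curly braces: {', '.join(input_variables)}. "
        "Return only the template text."
    )
    template_text = llm(meta)
    return PromptTemplate(template=template_text, input_variables=input_variables)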

tiagoefreitas avatar Jan 31 '23 17:01 tiagoefreitas

Here is another version that creates a separate API endpoint per prompt, though it's not as useful for me.


@app.get("/generatePrompt/{prompt_name}")
async def generatePrompt(prompt_name: str, request: Request):
    prompt_module = globals().get(prompt_name)
    prompt_template_class = getattr(prompt_module, prompt_name)

    if prompt_template_class is None:
        raise HTTPException(status_code=404, detail="Prompt template " + prompt_name + " not found.")
    
    params=dict(request.query_params);
    input_variables=params.keys();
    prompt_template = prompt_template_class(**{"input_variables": list(input_variables)});
    
    prompt = prompt_template.format(**params);
   
    return {"prompt": prompt}

tiagoefreitas avatar Jan 31 '23 17:01 tiagoefreitas

We're spinning up a FastAPI LLM server over at the following repo if that helps you: https://github.com/assistancechat/assistance.chat/blob/main/src/python/assistance/_api/main.py

I initially wanted to base it on LangChain, but for now we have opted to create the chains directly ourselves.
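
For illustration, "creating the chains directly" can be as simple as formatting the prompt and calling the completion API yourself; a minimal sketch using the pre-1.0 openai client (the model and prompt are illustrative, not what the linked repo does):

import openai

def run_chain(topic: str) -> str:
    # Format the prompt and call the completion endpoint directly,
    # without going through LangChain's abstractions.
    prompt = f"Tell me a joke about {topic}."
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=100
    )
    return response["choices"][0]["text"]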

SimonBiggs avatar Feb 01 '23 10:02 SimonBiggs

I want to develop a frontend for LangChain. If anyone is interested or has any leads, please let me know ASAP.

hemangjoshi37a avatar Feb 22 '23 11:02 hemangjoshi37a

Not sure if this is still relevant, but I've found this repo in the docs, which seems applicable?

lucapericlp avatar Apr 01 '23 09:04 lucapericlp

Hi, @tiagoefreitas! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you were requesting a built-in webserver that serves prompts as APIs with FastAPI. You mentioned a similar project called lambdaprompt and expressed your willingness to contribute if no one else does. There has been some discussion on this issue, with aschwa expressing interest in taking on the project. You also shared a code snippet for generating prompts with FastAPI. Additionally, there were suggestions for a FastAPI LLM server repository and interest in developing the frontend for LangChain.

Before we proceed, we would like to know if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project!

dosubot[bot] avatar Sep 20 '23 16:09 dosubot[bot]