
Add HTTPException handling to query.py

Open · Haste171 opened this issue 1 year ago · 1 comment

Add HTTPException handling to the /query endpoint for the case where the engine fails.
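
A rough sketch of what this could look like. query.py isn't included in this issue, so the `engine.query(...)` call and the request parameters below are assumptions modeled on the loader endpoints:

```python
# Hypothetical routers/query.py -- Engine.query and its arguments are assumed here.
from fastapi import APIRouter, HTTPException
from routers.utils.engine import Engine

engine = Engine()
router = APIRouter()

@router.post("/query")
async def query(query: str, namespace: str):
    try:
        # Assumed Engine method; replace with however the engine is actually invoked.
        response = engine.query(query, namespace)
    except Exception as e:
        # Surface engine failures as an HTTP 500 instead of an unhandled server error.
        raise HTTPException(status_code=500, detail=f"Query engine failed: {e}")
    return {"response": str(response), "namespace": namespace}
```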

Haste171 avatar Jan 08 '24 16:01 Haste171

Sweeping (40%)

🔎 Searching

I'm searching for relevant snippets in your repository.

Code Snippets

routers/loaders/pdf.py

```python
import tempfile, os
from routers.utils.engine import Engine
from llama_index import download_loader
from fastapi import APIRouter, UploadFile, HTTPException

engine = Engine()
router = APIRouter()

@router.post("/pdf")
async def file(upload_file: UploadFile, namespace: str):
    """
    Loader: https://llamahub.ai/l/file-pymu_pdf
    """
    file_preview_name, file_extension = os.path.splitext(upload_file.filename)
    if file_extension != '.pdf':
        raise HTTPException(status_code=400, detail="File must be a PDF")

    with tempfile.NamedTemporaryFile(delete=True, prefix=file_preview_name + '_', suffix=".pdf") as temp_file:
        content = await upload_file.read()
        temp_file.write(content)
        PyMuPDFReader = download_loader("PyMuPDFReader")
        loader = PyMuPDFReader().load(file_path=temp_file.name, metadata=True)
        engine.load(loader, namespace)

    return {'message': 'File uploaded successfully', 'filename': upload_file.filename, "namespace": namespace}
```

routers/loaders/ipynb.py

```python
import tempfile, os
from routers.utils.engine import Engine
from pathlib import Path
from llama_index import download_loader
from fastapi import APIRouter, UploadFile, HTTPException

engine = Engine()
router = APIRouter()

@router.post("/ipynb")
async def file(upload_file: UploadFile, namespace: str):
    """
    Loader: https://llamahub.ai/l/file-ipynb
    """
    file_preview_name, file_extension = os.path.splitext(upload_file.filename)
    if file_extension != '.ipynb':
        raise HTTPException(status_code=400, detail="File must be a IPynb")

    with tempfile.NamedTemporaryFile(delete=True, prefix=file_preview_name + '_', suffix=".ipynb") as temp_file:
        content = await upload_file.read()
        temp_file.write(content)
        IPYNBReader = download_loader("IPYNBReader")
        loader = IPYNBReader(concatenate=True).load_data(file=Path(temp_file.name))
        engine.load(loader, namespace)

    return {'message': 'File uploaded successfully', 'filename': upload_file.filename, "namespace": namespace}
```

README.md

```md
# retrieval-api
Retrieval API that utilizes llama-index

# Installation
pip install -r requirements.txt

# Startup
uvicorn main:app --reload

# Access
http://localhost:8000/docs

# Loaders
PDF DOCX IPYNB

# TODO
- Add API endpoint for chatting with content (chat history etc.) ref. https://gpt-index.readthedocs.io/en/stable/core_modules/query_modules/chat_engines/usage_pattern.html
- Add more loaders
- Create dynamic loader for files (one endpoint)
```

.gitignore

```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
```


