
Enhancement: Upload Documents as input Context vs RAG Workflow

Open achhabra2 opened this issue 1 year ago • 15 comments

What features would you like to see added?

It would be great if we could upload PDFs or text documents and have them processed as input context, versus the current workflow, which uses the RAG API instead. Some models now have 200K-1M context windows, and we could utilize that without pasting large blocks of text into the chat window.

More details

We may need a secondary upload button, or something that signifies which type of workflow you are using. If you use the ChatGPT or Gemini web interface, etc., those documents get processed as context.

Below added from #3791


In the Claude UI, if you paste a large piece of text, it automatically gets attached and treated like a document. This is very easy to use, as I can just paste large pieces of text from different sources, they get treated as separate documents, and I can then chat with them.

But LibreChat is more like ChatGPT, where any amount of pasted text gets added to the text box like a normal message. So I think having the above behavior would be beneficial in many ways, at least as a toggle switch in the settings.

Sorry if this is duplicated; I couldn't find anything like this in the Issues. Loving LibreChat so far; really great alternative to paying for ChatGPT, Claude, and Gemini separately. Thanks!

Paste a large amount of text (the threshold could perhaps be customizable) and it gets uploaded as a TXT file instead of appearing in the chat box.

Second, when clicking on such a file, a UI popup opens where we can inspect it.
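
For illustration only, here is a minimal client-side sketch of the paste-interception idea; the selector, threshold constant, and uploadAsTextFile helper are hypothetical names, not LibreChat code:

```typescript
// Minimal sketch of the paste-interception idea; PASTE_TO_FILE_THRESHOLD,
// uploadAsTextFile, and '#chat-input' are hypothetical, not part of LibreChat.
const PASTE_TO_FILE_THRESHOLD = 10_000; // characters; ideally user-configurable

function uploadAsTextFile(file: File): void {
  // Placeholder: a real client would attach the file to the pending message.
  console.log(`Attaching ${file.name} (${file.size} bytes)`);
}

function handlePaste(event: ClipboardEvent): void {
  const text = event.clipboardData?.getData('text/plain') ?? '';
  if (text.length < PASTE_TO_FILE_THRESHOLD) {
    return; // small pastes stay in the chat box as usual
  }
  event.preventDefault(); // keep the large paste out of the text box
  const file = new File([text], `pasted-${Date.now()}.txt`, { type: 'text/plain' });
  uploadAsTextFile(file);
}

document
  .querySelector<HTMLTextAreaElement>('#chat-input')
  ?.addEventListener('paste', handlePaste);
```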

Which components are impacted by your request?

No response

Pictures

[image]

This is what I'm referring to.

Code of Conduct

  • [X] I agree to follow this project's Code of Conduct

achhabra2 avatar May 16 '24 18:05 achhabra2

This request already exists, but your specification is clearer, so I will close the other in favor of yours. Thanks for the write-up!

Closing https://github.com/danny-avila/LibreChat/issues/2335

danny-avila avatar May 16 '24 18:05 danny-avila

+1, this would be a major improvement, especially for use with Gemini 1.5 models and their large context sizes. If it helps, the supported MIME types for each model can be found here: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/send-multimodal-prompts#media_requirements

raphaelgurtner avatar Jun 10 '24 07:06 raphaelgurtner

Google will get a lot of love soon due to their improved dev tools. May have something to do with @logankilpatrick joining the team 😊

danny-avila avatar Jun 10 '24 13:06 danny-avila

Hi, any update on this? I would like to send PDFs to the Gemini 1.5 Pro model instead of using the RAG API. Cheers.

wesselhuising avatar Jul 10 '24 07:07 wesselhuising

This would improve the value of LibreChat massively and we are looking forward to this. Is there any timeline you could share?

marcelamsler avatar Jul 15 '24 09:07 marcelamsler

upvote +1 🚀

schnaker85 avatar Jul 16 '24 06:07 schnaker85

FYI everyone, LibreChat already supports this, as it comes with a rag_use_full_context flag that puts the entire document into context. One just needs to control this via the .env or add a setting in the UI.

amir-ghasemi avatar Jul 16 '24 13:07 amir-ghasemi

FYI everyone, LibreChat already supports this, as it comes with a rag_use_full_context flag that puts the entire document into context. One just needs to control this via the .env or add a setting in the UI.

https://github.com/danny-avila/LibreChat/blob/main/api/app/clients/prompts/createContextHandlers.js#L25 I think it still requires RAG_API_URL to be set up.

schnaker85 avatar Jul 16 '24 14:07 schnaker85

FYI everyone, LibreChat already supports this, as it comes with a rag_use_full_context flag that puts the entire document into context. One just needs to control this via the .env or add a setting in the UI.

https://github.com/danny-avila/LibreChat/blob/main/api/app/clients/prompts/createContextHandlers.js#L25 I think it still requires RAG_API_URL to be set up.

Yes, indeed. We still need the RAG for question answering against knowledge bases consisting of thousands of documents, so that feature should not go away. This flag and the existing endpoint in the RAG API allow for including the full document in context with minimal changes.
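
For anyone setting this up, a minimal .env sketch of what's being described (double-check the exact variable names and values against the LibreChat docs for your version):

```
# The RAG API must be reachable for file uploads to be processed at all
RAG_API_URL=http://rag_api:8000

# Put the full extracted document text into the prompt instead of retrieved chunks
RAG_USE_FULL_CONTEXT=true
```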

amir-ghasemi avatar Jul 16 '24 17:07 amir-ghasemi

FYI everyone, LibreChat already supports this, as it comes with a rag_use_full_context flag that puts the entire document into context. One just needs to control this via the .env or add a setting in the UI.

A document included in the context this way is, however, still subject to any preprocessing/text extraction on LibreChat's part, right? The idea of this feature request would be to circumvent that (if the user so desires) and have the model (endpoint) deal with the document as-is. This would allow many different use cases / document types, not only PDF but even sound/video/CSV, etc. - basically whatever is supported by the model endpoints.

Is that feature considered out of scope (since it's not on the roadmap currently at all) or just low priority? If it's just low priority, would help with implementing it still be appreciated?

raphaelgurtner avatar Aug 26 '24 12:08 raphaelgurtner

A document included in the context this way is, however, still subject to any preprocessing/text extraction on LibreChat's part, right?

No, I think it would be nice to have a simple "use full text" option while uploading. If it's text-based, the browser can handle it and the server never interacts with the file other than adding it to the AI request; it would just get appended to the user message.
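
To make that concrete, here is a rough client-side sketch of the idea (illustrative only; appendFileAsText is a hypothetical helper, not LibreChat's API):

```typescript
// Illustrative only: the browser reads the text-based file itself, so the server
// never parses it; the contents just get appended to the outgoing user message.
async function appendFileAsText(file: File, userMessage: string): Promise<string> {
  const contents = await file.text(); // any file the browser can read as text
  return `${userMessage}\n\n<file name="${file.name}">\n${contents}\n</file>`;
}
```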

danny-avila avatar Aug 26 '24 13:08 danny-avila

Is there an update here? We would love to use LibreChat to compare contracts in PDF format with Gemini 1.5.

marcelamsler avatar Sep 10 '24 09:09 marcelamsler

+1

o42o avatar Sep 10 '24 19:09 o42o

+1

banjavi avatar Sep 13 '24 18:09 banjavi

Could this change incorporate the option to send images to RAG instead of having the model process them? I'm thinking of images of newspaper articles, or document scans in image format. It might be possible to do the OCR with the RAG API and then include the result in "use full text". I can put this in as a separate request: the option to choose either to use RAG or to process the content with the model.

bsu3338 avatar Sep 15 '24 17:09 bsu3338

This would be really great! Some of our colleagues asked if they could summarize a doc/PDF, which does not really work well with the RAG workflow. This would be a really great use case for Gemini.

We may need a secondary upload button, or something that signifies which type of workflow you are using. If you use the ChatGPT or Gemini web interface, etc., those documents get processed as context.

I feel like a secondary upload button (only for models that support it) makes more sense, so the user can choose between the RAG workflow and input context.

hksitorus avatar Oct 07 '24 03:10 hksitorus

This feature would be amazing, and it is so far the only crucial limitation I've run into using LibreChat.

matsfinsas avatar Oct 07 '24 08:10 matsfinsas

A native implementation for this is planned and I will work on it soon, in order to send text from files as part of the context.

danny-avila avatar Oct 22 '24 14:10 danny-avila

A native implementation for this is planned and I will work on it soon, in order to send text from files as part of the context.

The above PR for the closed issue (https://github.com/danny-avila/LibreChat/pull/4503) would allow sending the complete files as base64-encoded strings in the requests, for example for Google models.

Is this what you mean, or only text files (e.g., no PDFs)?
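
For context, sending a complete file as base64 looks roughly like the sketch below, using the inlineData part shape from Google's Gemini API (illustrative only, not the PR's actual code):

```typescript
// Rough illustration of a base64 file part for a Gemini request; not the PR's code.
import { readFile } from 'node:fs/promises';

async function buildPdfPart(path: string) {
  const data = (await readFile(path)).toString('base64');
  return {
    inlineData: {
      mimeType: 'application/pdf',
      data, // the entire file, base64-encoded, handed straight to the provider
    },
  };
}

// Included alongside the text prompt, e.g.:
// parts: [{ text: 'Summarize this contract.' }, await buildPdfPart('contract.pdf')]
```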

schnaker85 avatar Oct 22 '24 14:10 schnaker85

Why was the Pull Request closed? It contains a working implementation, which could be used as a base. We really need this feature, as usage is so limited without the possibility of uploading files.

marcelamsler avatar Oct 31 '24 10:10 marcelamsler

Why was the Pull Request closed? It contains a working implementation, which could be used as a base. We really need this feature, as usage is so limited without the possibility of uploading files.

The PR is open, but it doesn't address this issue. The point is to pass text-based files into the prompt, not to pass them as base64.

danny-avila avatar Oct 31 '24 14:10 danny-avila

Is there any way to completely skip the embedding process and just upload a PDF file straight to the LLM? I don't need RAG capabilities for my use case.

helgster77 avatar Nov 25 '24 16:11 helgster77

Is there any update about putting files in the prompt?

yixuantt avatar Dec 06 '24 05:12 yixuantt

Why was the Pull Request closed? It contains a working implementation, which could be used as a base. We really need this feature, as usage is so limited without the possibility of uploading files.

The PR is open, but it doesn't address this issue. The point is to pass text-based files into the prompt, not to pass them as base64.

I think they were talking about this PR, https://github.com/danny-avila/LibreChat/issues/4502, which is indeed closed. If that PR were extended to pass the content of text files directly into the prompt, instead of attaching the files as base64 to the request, would you consider reopening and possibly merging it?

raphaelgurtner avatar Dec 06 '24 07:12 raphaelgurtner

Is there any update about uploading a full PDF file straight to the LLM?

wibubunbo avatar Dec 10 '24 11:12 wibubunbo

Right now I'm working on a project where I have to handle file uploads in chat (not RAG) and process them with my own custom endpoint, not LibreChat. Is there any feature that allows us to upload files that can be handled by our custom endpoint? Below are the two cases:

  1. Files can be uploaded not for RAG, but rather as input text, such as the document itself or a document-to-text conversion.
  2. Integrate it with a custom endpoint that takes in document files so we do the processing ourselves; this could be done with the custom endpoint response config.

Is there any existing feature that takes care of this for a custom endpoint with files? I really need this.

@danny-avila @schnaker85

daniyalsaif200 avatar Dec 16 '24 10:12 daniyalsaif200

@wibubunbo No, but this is on my list to tackle soon. I've already started by differentiating uploads for Agents, and this will soon move to legacy endpoints, too.

Straight-to-provider will become an option in this dropdown, probably labeled "Upload to Anthropic" or "Upload to Provider" (basically, replace "Provider" with the actual provider, hopefully with an info button to help explain this). This is so that we can delegate file handling to providers that natively support it (Google, Anthropic, OpenAI for images).

[image: file upload dropdown]

@daniyalsaif200 No, I would only build something compatible with the main providers/endpoints supported by the project, i.e., Anthropic, OpenAI, Google.

As for parsing files as text, I also plan to add this to the dropdown as "Upload as Text" for compatible files that can be parsed as such.
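
To summarize the upload behaviors being discussed, a small hypothetical sketch (names are illustrative, not the actual implementation):

```typescript
// Hypothetical upload modes corresponding to the dropdown described above;
// names are illustrative, not LibreChat's actual implementation.
type FileUploadMode = 'rag' | 'provider' | 'text';

const uploadBehavior: Record<FileUploadMode, string> = {
  rag: 'embedded via the RAG API and retrieved as relevant chunks at query time',
  provider: 'sent as-is to a provider that natively supports file input',
  text: 'parsed to plain text and appended to the prompt context',
};

function describeUpload(mode: FileUploadMode, fileName: string): string {
  return `${fileName}: ${uploadBehavior[mode]}`;
}
```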

danny-avila avatar Dec 16 '24 12:12 danny-avila

+1, this would also be great for use with agents.

zeemy23 avatar Jan 09 '25 22:01 zeemy23

Direct file upload to Gemini is hugely needed and would be a big improvement over RAG.

Maybe a solution is that not all endpoints get the new feature, but you could set an env variable for the Google endpoint, e.g. GOOGLE_SEND_FILES_DIRECTLY=true or something.

marlonka avatar Feb 14 '25 00:02 marlonka