Kevin Hu

1484 comments of Kevin Hu

Please use the `General` chunking method to parse it.

What kind of embedding inference service are you using?

It retrieved a lot of content while answering, so the issue may be caused by the context length. Apart from that, it may have lost the connection to Ollama because of slow computation.
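A minimal sketch of both mitigations, assuming Ollama's standard `/api/generate` endpoint: enlarge the model's context window via the `num_ctx` option, and give the client a generous timeout so slow generations do not drop the connection. The model name and numeric values below are illustrative assumptions, not RAGFlow defaults.

```python
# Hedged sketch: request payload for Ollama's /api/generate endpoint.
import json

payload = {
    "model": "llama3",                 # hypothetical model name
    "prompt": "Summarize the retrieved chunks...",
    "options": {"num_ctx": 8192},      # enlarge context to fit retrieved content
}

# A long client-side timeout guards against slow local inference, e.g.:
#   requests.post("http://localhost:11434/api/generate",
#                 json=payload, timeout=600)
print(json.dumps(payload["options"]))
```

If answers are still truncated, reducing the number of retrieved chunks is an alternative to raising `num_ctx`.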

The GPU is not necessary and has little thing to do with accelerating RAGFlow.

`Categorize` is used to control the computation flow. A `Generate` component could achieve your goal of outputting the category of the user's input.
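A minimal sketch of the idea: instead of branching the flow with `Categorize`, a `Generate`-style prompt can ask the model to emit the label itself. The category names and prompt wording below are illustrative assumptions, not RAGFlow's built-in templates.

```python
# Hedged sketch: building a classification prompt for a Generate node.
CATEGORIES = ["billing", "technical", "other"]   # illustrative labels

def build_prompt(user_input: str) -> str:
    """Return a prompt that makes the LLM output only the category label."""
    labels = ", ".join(CATEGORIES)
    return (
        f"Classify the user's message into one of: {labels}.\n"
        "Answer with the label only, no explanation.\n"
        f"Message: {user_input}"
    )

print(build_prompt("My invoice amount looks wrong"))
```

The downstream component then receives the bare label as the `Generate` output.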

Uploading multiple files will accelerate the parsing procedure if you start multiple task executors. For chunking methods like `Book` and `Law`, splitting a complete text file into pieces will lose...
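A minimal sketch of why multiple executors help: each executor pulls whole files from a shared pool, so many uploaded files can be parsed concurrently while each file is still chunked as one complete document. The function and worker count below are illustrative, not RAGFlow's actual task-executor API.

```python
# Hedged sketch: parallel parsing of whole files, one file per worker task.
from concurrent.futures import ThreadPoolExecutor

def parse_file(name: str) -> str:
    # Stand-in for one chunking pass over a complete file; in practice
    # this would run a chunking method such as Book or Law on the file.
    return f"parsed:{name}"

files = ["a.pdf", "b.pdf", "c.pdf", "d.pdf"]
with ThreadPoolExecutor(max_workers=2) as pool:  # two "task executors"
    results = list(pool.map(parse_file, files))
print(results)
```

Because each task receives an entire file, parallelism comes from the number of files, not from splitting any single file, which is why pre-splitting a `Book`- or `Law`-style document loses structure without gaining speed.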