Kevin Hu
Please use the `General` chunking method to parse it.
What kind of embedding inference service are you using?
Not supported yet. We're going to figure it out.
It retrieved a lot of content while answering, so the issue may be caused by exceeding the context length. Apart from that, it may have lost the connection to Ollama because of slow computation.
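If you want to rule out connectivity first, here is a quick check (a sketch, assuming Ollama runs on its default port 11434; replace the host with your own):

```bash
# Lists the models Ollama knows about; a timeout or refused connection
# points to a connectivity problem rather than context length.
curl http://localhost:11434/api/tags
```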
You need to `COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template` in the Dockerfile.
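For context, a minimal sketch of where that line sits in a Dockerfile; the base image and working directory here are assumptions, only the `COPY` instruction comes from the answer above:

```Dockerfile
FROM ubuntu:22.04
WORKDIR /ragflow
# Ship the config template so the entrypoint can render service_conf.yaml at startup.
COPY docker/service_conf.yaml.template ./conf/service_conf.yaml.template
```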
A GPU is not necessary and does little to accelerate RAGFlow.
Elasticsearch (ES) was down.
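A quick way to confirm this (a sketch, assuming the default deployment where Elasticsearch listens on localhost:9200):

```bash
# Returns the cluster status (green/yellow/red); a refused connection means ES is down.
curl http://localhost:9200/_cluster/health?pretty
```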
The `Categorize` component is used to control the computation flow. A `Generate` component could achieve your goal of outputting the category of the user's input.
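For example, a `Generate` prompt along these lines would work (a sketch; the category names are placeholders for your own):

```
You are a classifier. Read the user's input and output exactly one of the
following category names, with no other text:
- question
- complaint
- other
```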
Not supported yet.
Uploading multiple files will accelerate the parsing procedure if you start multiple task executors. For chunking methods like `Book` and `Law`, splitting a complete text file into pieces will lose contextual information that spans the split points.
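If it helps, here is a minimal sketch of starting several task executors outside Docker, assuming the default repo layout where `rag/svr/task_executor.py` is the executor entry point and accepts a numeric worker id (both are assumptions and may differ across versions):

```bash
# Launch four task executors in parallel; each id becomes a separate consumer.
for i in 0 1 2 3; do
  python rag/svr/task_executor.py "$i" &
done
wait
```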