Llama-2-Open-Source-LLM-CPU-Inference
config customization
Hi! Thanks - awesome job.
I have a question: why does changing the config (bigger chunks, higher vector count) lead to broken output? For example:

VECTOR_COUNT: 3
CHUNK_SIZE: 600
CHUNK_OVERLAP: 50

gives me illogical output.
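For context, here is a minimal sketch of how these three parameters usually interact in a RAG pipeline like this repo's. It assumes a simple fixed-window character splitter (the actual splitter in the repo may behave slightly differently): CHUNK_SIZE and CHUNK_OVERLAP control how documents are cut, and VECTOR_COUNT is how many retrieved chunks get stuffed into the prompt, so raising all three grows the prompt quickly.

```python
def split_text(text: str, chunk_size: int = 600, chunk_overlap: int = 50) -> list[str]:
    """Naive fixed-window splitter (an assumption, not the repo's exact code):
    each chunk starts chunk_size - chunk_overlap characters after the previous
    one, so consecutive chunks share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 1200-character toy document with varying content so the overlap is visible.
doc = "".join(str(i % 10) for i in range(1200))
chunks = split_text(doc, chunk_size=600, chunk_overlap=50)

# With the config above, up to VECTOR_COUNT * CHUNK_SIZE = 3 * 600 = 1800
# characters of retrieved context can land in the prompt, on top of the
# question and prompt template.
vector_count = 3
max_context_chars = vector_count * 600
```

The rough arithmetic in the comment is the usual suspect for garbled output: if the combined retrieved context plus the prompt template exceeds the model's context window (or the token budget configured for it), the model silently truncates or degrades. That is a hypothesis to check against your context-length settings, not a confirmed diagnosis.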