chenyuz3
I noticed that no matter which model is used for vault QA embedding, the default context length/chunk size is 2048 tokens, which may reduce retrieval performance...
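To illustrate the concern, here is a minimal sketch (not the plugin's actual implementation; `chunk_text`, the character-based sizing, and the overlap value are all assumptions) showing how a smaller chunk size yields more, finer-grained pieces for retrieval:

```python
def chunk_text(text: str, chunk_size: int = 2048, overlap: int = 200) -> list[str]:
    """Split text into overlapping fixed-size chunks (characters used
    here as a rough stand-in for tokens)."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

doc = "word " * 2000  # a 10,000-character stand-in for a vault note
coarse = chunk_text(doc, chunk_size=2048)
fine = chunk_text(doc, chunk_size=512)
print(len(coarse), len(fine))  # fewer large chunks vs. more small chunks
```

With 2048-"token" chunks, each retrieved unit mixes many topics together, which can dilute the embedding; smaller chunks let the retriever surface the specific passage that matches a query.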
question
Or could you simply let us set the URL for the API (for OpenAI-compatible APIs)?
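A sketch of why a user-configurable base URL is enough: OpenAI-compatible servers expose the same endpoint paths, so only the base changes. The helper name and the example local-server URL below are illustrative assumptions, not part of any existing settings:

```python
from urllib.parse import urljoin

def embeddings_url(base_url: str) -> str:
    """Compose the embeddings endpoint from a user-configurable base URL."""
    return urljoin(base_url.rstrip("/") + "/", "embeddings")

# Official endpoint and a hypothetical local OpenAI-compatible server
# resolve to the same path under different bases:
print(embeddings_url("https://api.openai.com/v1"))
print(embeddings_url("http://localhost:11434/v1"))
```

The same idea applies to client libraries: for example, the official `openai` Python client accepts a `base_url` argument at construction, so pointing it at a compatible server needs no other code changes.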