alceausu
I have the same issue, no matter the integration (Local AI or Generic OpenAI). The vllm server replies with: `ERROR serving_chat.py:60] Error in applying chat template from request: Conversation roles...`
It seems these are two different issues: one related to connectivity, the other to request format. On the format side, anything-llm can reach vllm, but vllm throws... A sketch of a request that satisfies the role-alternation rule is below.
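For reference, this is a minimal sketch of a chat request whose messages strictly alternate user/assistant roles, which is what templates that raise the "Conversation roles..." error enforce. The base URL and model name are assumptions; substitute whatever your vllm server is actually serving.

```python
import requests

BASE_URL = "http://localhost:8000/v1"    # assumed vllm OpenAI-compatible endpoint
MODEL = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical model name

payload = {
    "model": MODEL,
    "messages": [
        # Roles must alternate user/assistant (a system message, if any, comes
        # first). Two consecutive "user" messages, or a conversation starting
        # with "assistant", is what typically trips the chat template error.
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "Summarize this document for me."},
    ],
}

resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```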
I had the same error. To fix it, I added "ssm:GetParametersByPath" to the policy. Possible explanation: in the Parameter Store I had to fully qualify the parameter name, something like...
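A minimal sketch of what that lookup looks like once the action is allowed, assuming the parameters live under a hypothetical `/my-app/prod/` prefix (the prefix and region are placeholders, not from the original report). The caller's IAM policy needs `ssm:GetParametersByPath` on that path.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is an assumption

# Use the fully qualified path, e.g. "/my-app/prod/DB_PASSWORD" rather than
# just "DB_PASSWORD"; GetParametersByPath walks everything under the prefix.
resp = ssm.get_parameters_by_path(
    Path="/my-app/prod/",
    Recursive=True,
    WithDecryption=True,
)

for param in resp["Parameters"]:
    print(param["Name"], "=", param["Value"])
```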