llm-guard
API Deployment: Dynamic versus config-based
From what I have seen, the API can only consume requests that are then run through the scanners configured at deployment time. Is there any way to make this dynamic, i.e. accept a scanner config with each request and execute the request against it?

From what I have seen in the code, this is currently not possible. Would you be open to a pull request for this?
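To make the idea concrete, here is a minimal sketch of what such a dynamic endpoint could look like. The route name, payload schema, and scanner registry are all hypothetical; only `scan_prompt` and the scanner classes come from llm-guard, and the exact constructor parameters should be checked against the version in use:

```python
# Hypothetical sketch: an endpoint that builds scanners from a
# per-request config instead of a static config file.
from fastapi import FastAPI
from pydantic import BaseModel

from llm_guard import scan_prompt
from llm_guard.input_scanners import PromptInjection, Toxicity

app = FastAPI()

# Illustrative registry mapping config names to scanner classes.
SCANNER_REGISTRY = {
    "PromptInjection": PromptInjection,
    "Toxicity": Toxicity,
}

class ScannerSpec(BaseModel):
    type: str
    params: dict = {}

class AnalyzeRequest(BaseModel):
    prompt: str
    scanners: list[ScannerSpec]

@app.post("/analyze/prompt")  # hypothetical route
def analyze(req: AnalyzeRequest):
    # Instantiating scanners per request reloads models every time --
    # this is exactly the performance concern discussed below.
    scanners = [SCANNER_REGISTRY[s.type](**s.params) for s in req.scanners]
    sanitized, valid, scores = scan_prompt(scanners, req.prompt)
    return {"sanitized_prompt": sanitized, "valid": valid, "scores": scores}
```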
Hi @pedroallenrevez, can you clarify in more detail? If you are referring to dynamically loading models based on the parameters provided in queries, that could introduce performance problems. I have already given this issue some thought; it might be manageable with the local-model option, but, as you said, it would likely require a design change.
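One way the performance concern might be softened, assuming scanner construction is what loads the model weights: cache scanner instances keyed by their config, so each unique config pays the load cost only once. The registry and the cache-key scheme below are assumptions, not part of llm-guard:

```python
# Sketch: memoize scanner instances by (type, params) so repeated
# requests with the same config reuse already-loaded models.
import json
from functools import lru_cache

from llm_guard.input_scanners import PromptInjection, Toxicity

SCANNER_REGISTRY = {
    "PromptInjection": PromptInjection,
    "Toxicity": Toxicity,
}

@lru_cache(maxsize=32)
def get_scanner(scanner_type: str, params_json: str):
    # Params are serialized to sorted JSON so the cache key is hashable
    # and stable across dict orderings.
    params = json.loads(params_json)
    return SCANNER_REGISTRY[scanner_type](**params)

def build_scanners(specs: list[dict]):
    return [
        get_scanner(spec["type"], json.dumps(spec.get("params", {}), sort_keys=True))
        for spec in specs
    ]
```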
I'd definitely be interested in something like this. I am starting work on a project that will have multiple agents, each of which could benefit from different scanners or different thresholds on those scanners.

An option to pass in the configuration to use, to issue multiple auth tokens each associated with a different config file, or to configure different routes each bound to a different config file would all be useful (see the sketch below).
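For the route-per-config variant, a sketch might look like the following. The agent names, routes, and scanner sets are invented for illustration; each route's scanners are built once at startup, so no models are loaded per request:

```python
# Sketch of "different routes -> different configs": each agent gets
# its own route bound to a scanner set instantiated at startup.
from fastapi import FastAPI
from pydantic import BaseModel

from llm_guard import scan_prompt
from llm_guard.input_scanners import PromptInjection, Toxicity

app = FastAPI()

# Hypothetical per-agent scanner sets; thresholds are examples only.
AGENT_SCANNERS = {
    "support-bot": [Toxicity(threshold=0.5)],
    "code-agent": [PromptInjection(threshold=0.9), Toxicity(threshold=0.8)],
}

class PromptRequest(BaseModel):
    prompt: str

@app.post("/agents/{agent_name}/analyze")  # hypothetical route
def analyze(agent_name: str, req: PromptRequest):
    scanners = AGENT_SCANNERS[agent_name]
    sanitized, valid, scores = scan_prompt(scanners, req.prompt)
    return {"sanitized_prompt": sanitized, "valid": valid, "scores": scores}
```

The same shape would work with auth tokens instead of routes: resolve the token to a config name in a dependency, then look up the corresponding scanner set.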