Matteo
@pierlj this can be closed, right?
Hi @arm-diaz, you should be able to use Bedrock without problems. Check [how to wrap your model on this page](https://docs.giskard.ai/en/latest/open_source/scan/scan_llm/index.html#step-1-wrap-your-model). If you are using Bedrock with langchain, check the...
EDIT: I misunderstood the request. The question was about running the _LLM-assisted_ detector replacing GPT-4 with a Bedrock model.
> This was released in https://github.com/Giskard-AI/giskard/releases/tag/v2.10.0 ✅ Not yet
I haven't tested it at all, but something like this should work (for Claude 2):

```python
class ClaudeBedrockClient(LLMClient):
    def __init__(self, bedrock_client):
        self._client = bedrock_client

    def complete(
        self,
        messages: Sequence[ChatMessage],
        temperature: float...
```
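To make the idea above concrete, here is a minimal, untested sketch of the Bedrock call itself via boto3's `invoke_model`. The `ChatMessage` dataclass below is a stand-in for the one referenced in the snippet (the real giskard class may differ), and the helper names are hypothetical; the prompt format and `max_tokens_to_sample` field follow the Claude 2 text-completions API on Bedrock.

```python
import json
from dataclasses import dataclass
from typing import Sequence

@dataclass
class ChatMessage:
    # Stand-in for the ChatMessage type used by the LLMClient interface above.
    role: str
    content: str

def format_claude_prompt(messages: Sequence[ChatMessage]) -> str:
    """Flatten a chat history into the Human/Assistant prompt format
    that Claude 2 text completions expect."""
    parts = []
    for msg in messages:
        tag = "Assistant" if msg.role == "assistant" else "Human"
        parts.append(f"\n\n{tag}: {msg.content}")
    parts.append("\n\nAssistant:")
    return "".join(parts)

def complete_with_bedrock(bedrock_runtime, messages, temperature=0.0, max_tokens=512):
    """Call Claude 2 through a boto3 bedrock-runtime client.

    bedrock_runtime is expected to be boto3.client("bedrock-runtime").
    """
    body = json.dumps({
        "prompt": format_claude_prompt(messages),
        "temperature": temperature,
        "max_tokens_to_sample": max_tokens,
    })
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-v2",
        body=body,
    )
    payload = json.loads(response["body"].read())
    return ChatMessage(role="assistant", content=payload["completion"])
```

The network call obviously needs valid AWS credentials and Bedrock model access; the prompt-formatting part can be checked locally on its own.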
@Hartorn this seems to work great with vanilla `giskard`, but not when we use the optional group `llm`
Hi @ChatBear! > Also, I didn't find any info related to the taxonomy, can you provide a link for that? I don't know which taxonomy is appropriate for categorical...
@pierlj looks good, can you add a test on the ragas metrics to make sure they are calculated correctly?
@bmalezieux you might need something different, but for reference the AVID integration could be helpful (it also generates JSON reports from the scan, following the [AVID schema](https://docs.avidml.org/database/framework)).
~~will add some tests~~ done!