
Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command...

Results: 390 promptfoo issues

Context: https://discordapp.com/channels/1153767083381903389/1204780043436560394/1241137132450615376

`ProviderResponse` has an `output` field that is [documented](https://www.promptfoo.dev/docs/configuration/reference/#providerresponse) to be either a string or an object. If it's a string, then the Web UI shows a nice diff between the...
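To make the two documented shapes concrete, here is a minimal sketch of a custom provider returning each form, assuming promptfoo's Python provider convention of a `call_api(prompt, options, context)` function that returns a dict with an `output` key (names follow the linked reference; the example values are invented):

```python
# Sketch of the two ProviderResponse `output` shapes the issue refers to.
# Assumes promptfoo's documented Python provider hook: call_api(...) -> dict.

def call_api_string(prompt, options, context):
    # String output: the Web UI can render a nice diff across providers.
    return {"output": "The capital of France is Paris."}

def call_api_object(prompt, options, context):
    # Object output: structured data, e.g. a parsed tool call or JSON answer.
    return {"output": {"answer": "Paris", "confidence": 0.97}}
```

The issue is about how the Web UI treats the second (object) case differently from the first when rendering diffs.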

enhancement

Hello, no Amazon Bedrock models can be used as embedding providers for the similarity assertion. I have tried all of them, with these results:

```sh
Provider bedrock:cohere.embed-english-v3 is not a...
```
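For context, this is a sketch of the kind of config the reporter is presumably attempting, assuming promptfoo's documented `defaultTest.options.provider.embedding` override for grading providers (the test values are invented):

```yaml
# Hypothetical config: overriding the embedding provider used by the
# `similar` assertion with a Bedrock model — the setup the error rejects.
defaultTest:
  options:
    provider:
      embedding:
        id: bedrock:cohere.embed-english-v3
tests:
  - vars:
      question: What is promptfoo?
    assert:
      - type: similar
        value: promptfoo is an LLM evaluation tool
        threshold: 0.8
```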

In the current implementation of the HTML report (view/shared), the instruction tokens and completion tokens are not differentiated. For better clarity and analysis, it is important to show instruction tokens...

# What

This is a draft for adding support for threaded tests that can be used with e.g. OpenAI chats or assistants. I created it as a very early draft...

Hi, thanks for the great package. I am self hosting on Cloud Run and attempting to hook it up to the self-hosted Langfuse too. I am using the new bucket...

I am using a Python script as my prompt, and I was wondering how I could get the final LLM response from my provider and the assertion response. I am...
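One way to get at both the final LLM response and the assertion result is a Python assertion, sketched here under the assumption of promptfoo's documented `get_assert(output, context)` hook, where `output` is the provider's final response string (the keyword and scoring logic are invented for illustration):

```python
# Sketch of a promptfoo Python assertion. Assumes the documented
# get_assert(output, context) hook: `output` is the final LLM response,
# and the returned dict becomes the assertion result shown in results.

def get_assert(output, context):
    # `context` carries test vars and the prompt (names assumed from docs).
    keyword = "paris"
    passed = keyword in output.lower()
    return {
        "pass": passed,
        "score": 1.0 if passed else 0.0,
        "reason": f"looked for {keyword!r} in the provider output",
    }
```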

I'm trying to evaluate a prompt that uses function calls. However, the function schema depends on one of the variables in the test output. It looks like function calls currently...
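What this issue is asking for can be illustrated generically: building an OpenAI-style function definition whose schema varies with a per-test variable. The helper below is hypothetical (not a promptfoo API), just a sketch of the dynamic construction involved:

```python
def build_function_schema(allowed_values):
    # Hypothetical helper: constructs an OpenAI-style function definition
    # whose enum is driven by a per-test-case variable.
    return {
        "name": "select_option",
        "description": "Pick one of the allowed options.",
        "parameters": {
            "type": "object",
            "properties": {
                "choice": {"type": "string", "enum": list(allowed_values)},
            },
            "required": ["choice"],
        },
    }
```

Static config can't express this, since the schema must be regenerated for each test case's variables.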

This fix is a suggestion that addresses what may be a personal inconvenience of mine. If you have a better solution, I'd love to hear it, and I hope you'll consider this positively....