Adds a new **AI & LLM Testing** section to the Software category. As LLM-powered applications become more prevalent, testing frameworks specific to AI are increasingly important. This PR adds: -...
## Summary Added [promptfoo](https://github.com/promptfoo/promptfoo) to the LLM observability tools section. promptfoo provides evaluation, tracing, and red teaming for LLM applications. It captures traces across LLM calls, monitors for prompt injection...
Adds [promptfoo](https://github.com/promptfoo/promptfoo) to the LLM Evaluation section. **promptfoo** is an open-source LLM testing and evaluation framework: - **Evaluation**: Compare prompts, models, and RAG systems with customizable assertions - **Red teaming**:...
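For reviewers unfamiliar with the tool, here is a minimal sketch of an evaluation config, based on promptfoo's documented `promptfooconfig.yaml` format; the prompts, model IDs, and expected values below are illustrative placeholders, not part of this PR:

```yaml
# promptfooconfig.yaml -- illustrative example; prompts, models, and
# expected values are placeholders.
description: Compare two summarization prompts across two models
prompts:
  - "Summarize in one sentence: {{article}}"
  - "You are a concise editor. Summarize: {{article}}"
providers:
  - openai:gpt-4o-mini
  - openai:gpt-4o
tests:
  - vars:
      article: "promptfoo is an open-source framework for testing LLM applications."
    assert:
      # Deterministic check: output must mention the tool name
      - type: icontains
        value: promptfoo
      # Model-graded check: an LLM judges the output against a rubric
      - type: llm-rubric
        value: The summary is a single sentence and factually accurate.
```

Running `npx promptfoo@latest eval` executes every prompt/model/test combination and reports assertion results; `npx promptfoo@latest view` opens a local web UI for comparing outputs side by side.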