
LLM Prompt Injection Detector

29 rebuff issues

It is possible to evade the model check (making it always output a score of 0.0) by appending a special suffix to a prompt. The prompt injections are then not detected by...

To support more online/local models and vector stores, it would be more convenient to pass in LangChain components, so that we don't need to handle initialization of various models and...
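The proposal above could be sketched as a constructor that accepts caller-supplied components rather than building them internally. This is a hypothetical interface, not rebuff's actual API; the class and parameter names are assumptions for illustration:

```python
class RebuffDetector:
    """Hypothetical sketch: the caller constructs and passes in
    LangChain-style components, so rebuff does not need to know how
    to initialize each model or vector store itself."""

    def __init__(self, llm, embeddings, vector_store):
        # Any LangChain-compatible LLM, embedding model, and vector
        # store can be injected here without changes to rebuff.
        self.llm = llm
        self.embeddings = embeddings
        self.vector_store = vector_store
```

Dependency injection of this kind would let users swap in local models or alternative vector stores (e.g. Chroma) without rebuff handling each backend's setup.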

In v0.1.1, `pinecone_environment` was added as an arg for rebuff, which was later passed to the `pinecone.init` function. When using the latest Pinecone version, this breaks with the following error:...

This PR adds Chroma DB support for Python SDK.

As encountered in https://github.com/protectai/rebuff/issues/68, it's possible for the language model not to return a numerical value. With the Python SDK, this causes an exception, but with the JS SDK, this...
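One way to handle a non-numeric model response is to fail closed instead of raising. A minimal sketch, assuming the model is asked for a score in [0, 1]; the helper name and fail-closed default are assumptions, not rebuff's actual behavior:

```python
def parse_model_score(raw: str, default: float = 1.0) -> float:
    """Parse the model's injection-likelihood response defensively.

    Non-numeric output (e.g. "I cannot answer that") is treated as
    suspicious and mapped to `default` (fail closed) rather than
    raising an exception; numeric output is clamped to [0, 1].
    """
    try:
        score = float(raw.strip())
    except ValueError:
        return default
    return min(max(score, 0.0), 1.0)
```

Failing closed means a misbehaving model degrades to a false positive rather than an unhandled exception or a silently skipped check.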

okay-to-test

I realized that in #90, I left the `javascript-sdk` in a state where it doesn't compile. My reasoning for not including `javascript-sdk/src/api.ts` in that PR was that the server API...

okay-to-test

We should have API docs for the Python and TypeScript SDKs (after #52), a getting started guide that walks users through setting up their infrastructure (after #10), examples of how...

documentation
help wanted
good first issue

According to the script `detect_pi_vectorbase.py` in `python-sdk/rebuff/`, the below code snippet assumes a higher similarity score implies prompt injection, whereas it seems it should be the opposite, i.e. a lower similarity...
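The direction of the comparison hinges on whether the vector store returns a similarity (higher = closer to a known attack prompt) or a distance (lower = closer). A minimal sketch with a hypothetical helper and an assumed cosine-similarity threshold, not rebuff's actual code:

```python
def is_injection_by_similarity(similarity: float,
                               threshold: float = 0.9) -> bool:
    """Flag an input as a likely injection if it is sufficiently
    similar to a known attack prompt in the vector store.

    Assumes `similarity` is a cosine similarity (higher = closer).
    If the store returns a *distance* metric instead, the comparison
    must be inverted (flag when distance <= threshold).
    """
    return similarity >= threshold
```

Mixing up the two conventions silently inverts the detector: benign prompts get flagged and known attacks pass through, which is likely the bug being reported here.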

This PR updates the server code to match the changes made in https://github.com/protectai/rebuff/pull/90. This PR includes the commit(s) from that PR along with a commit from https://github.com/protectai/rebuff/pull/66 which was needed...

okay-to-test