
Adding dedicated contentFilter function

benhynes opened this issue 2 years ago · 2 comments

It was required by OpenAI for a project I've been working on, so I just wanted to simplify usage a little bit. Now there's a contentFilter function that can be called as:

client.contentFilter(prompt: prompt, completion: { rating in
    switch rating {
    case 0:
        print("Safe")
    case 1:
        print("Sensitive")
        // The text may be discussing a sensitive topic: something political or religious, or a protected class such as race or nationality.
    case 2:
        print("Unsafe")
        // The text contains profane, prejudiced, or hateful language, may be NSFW, or portrays certain groups or people in a harmful manner.
    default:
        print("Unexpected result")
    }
})

It effectively wraps a completion request rather than implementing a separate request type, so I think the error handling supplied by the completions function's Result<[Completion], Swift.Error> should be sufficient. Within contentFilter this is handled with guard case .success(let completions) = result else { fatalError("\(result)") }, but I'm interested in input on whether this is an acceptable way of dealing with it.
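For reference, a minimal sketch of what this wrapping might look like. The Completion struct and the completions(prompt:completion:) method here are simplified stand-ins for the library's real types, and the canned success response exists only to make the sketch self-contained:

```swift
// Stand-in for the library's completion type: only the field we need.
struct Completion {
    let text: String
}

struct Client {
    // Stand-in for the real network-backed completions method;
    // returns a canned rating of "0" so the sketch runs on its own.
    func completions(prompt: String,
                     completion: @escaping (Result<[Completion], Swift.Error>) -> Void) {
        completion(.success([Completion(text: "0")]))
    }

    // Wraps completions and maps the returned text ("0"/"1"/"2") to an Int rating,
    // using the guard-based handling discussed above.
    func contentFilter(prompt: String, completion: @escaping (Int) -> Void) {
        completions(prompt: prompt) { result in
            guard case .success(let completions) = result else { fatalError("\(result)") }
            let text = completions.first?.text ?? ""
            completion(Int(text) ?? -1)
        }
    }
}

let client = Client()
client.contentFilter(prompt: "Hello") { rating in
    print(rating) // prints 0 with the canned response above
}
```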

Just a note that I rebased my commits to modify a commit name for clarity, in case there were any questions regarding why there were 4 commits made at the same time.

benhynes · Nov 02 '21 18:11

Hey @mattt! I've had some time to work on this throughout the day and I ended up creating a new file OpenAI > Supporting Types > Safety.swift to include the Safety enum plus all of the relevant extensions. I've also updated the documentation to match the changes.

I think .safe == 0, .sensitive == 1, .unsafe == 2, .failure == 3 makes sense. With the values comparable, we could write something like if rating >= .sensitive { return }, which would also cover .failure, instead of assigning it -1 and having to handle another case.
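A minimal sketch of that enum, assuming the raw values above and a Comparable conformance based on them (the Safety name follows the file mentioned earlier; the implementation details are my assumption):

```swift
// Hypothetical Safety enum: raw values ordered so that comparisons
// like `rating >= .sensitive` naturally cover .failure as well.
enum Safety: Int, Comparable {
    case safe = 0
    case sensitive = 1
    case unsafe = 2
    case failure = 3

    static func < (lhs: Safety, rhs: Safety) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

let rating: Safety = .unsafe
if rating >= .sensitive {
    // One check handles .sensitive, .unsafe, and .failure.
    print("Flagged")
}
```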

I also took a look at the unit tests, but I'm having trouble figuring out what a content filter request would look like in JSON to add to the MockOpenAIURLProtocol. Anecdotal testing indicates it's working as expected, though.
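Since the content filter is just a wrapped completions call, one guess at a mock payload is a standard completions response whose first choice's text is the rating digit. The field values below (id, model name, timestamp) are placeholders, not taken from a real response:

```swift
// Hypothetical mock payload for MockOpenAIURLProtocol: a completions
// response where choices[0].text carries the rating ("0", "1", or "2").
let mockContentFilterJSON = """
{
    "id": "cmpl-mock",
    "object": "text_completion",
    "created": 1635897600,
    "model": "content-filter-alpha",
    "choices": [
        { "text": "2", "index": 0, "logprobs": null, "finish_reason": "length" }
    ]
}
"""
```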

benhynes · Nov 03 '21 01:11

Hey @mattt, this all looks great. Just had a moment to review the changes you've made and it all looks good to me. I had thought about using Result<[Completion], Swift.Error> but wasn't 100% sure on how to go about implementing it properly, so getting to review your implementation has been educational for me. Thanks!

It seems the docs for the content filter may still be in progress; there's some Python sample code there now on how to handle logprobs. Great catch on the updated docs, and I'm happy to give this a shot at some point.

benhynes · Nov 16 '21 19:11