How can I use the Moderation API from OpenAI with the Vercel AI SDK? There was nothing about this in the Vercel AI docs
Feature Description
I am asking users to enter some text and submit it, and I am using the Vercel AI SDK to generate a response from ChatGPT for that text. But I also want to check whether the text contains any profanity. How can I achieve this?
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const checkProfanity = async (text: string) => {
  try {
    // What to do about this? The provider has no such method -> openai.moderation.create
    const response = await openai.moderation.create({
      input: text,
    });
    const { results } = response.data;
    return results[0].flagged;
  } catch (error) {
    console.error('Error checking profanity:', error);
    return false;
  }
};

const isProfane = await checkProfanity(text);
if (isProfane) {
  throw new Error('The text contains inappropriate content. Please modify your text and try again.');
}

const aiResponse = await generateText({
  model: openai('gpt-3.5-turbo'),
  prompt: `my prompt comes here....`,
});

Is there any way of achieving this functionality?
Use Case
No response
Additional context
No response
OpenAI moderations are currently not supported by the AI SDK.
Would be awesome to have it implemented
+1 on moderations being added
+1
+1
+1
🤜
+1
+1
Still would love to see this implemented!
Duplicates #2521