truss-examples
I need a webhook calling feature
This example code is for running SDXL from this page: https://www.baseten.co/library/stable-diffusion-xl/
const axios = require('axios');

const BASETEN_API_KEY = process.env.BASETEN_API_KEY;
const modelId = ''; // Replace with your model id

const data = {
  prompt: 'a little boy looking through a large magical portal, the boy sees a futuristic human civilization in that portal, extremely detailed, trending on artstation, 8k',
};

// Call the model endpoint
axios.post(`https://model-${modelId}.api.baseten.co/production/predict`, data, {
  headers: {
    Authorization: `Api-Key ${BASETEN_API_KEY}`,
  },
})
  .then((response) => {
    const output = response.data.data;
    // Process the output as needed
    console.log('Output:', output);
  })
  .catch((error) => {
    console.error('Error:', error.message || error);
  });
I found that this is a long-running request that only works on long-running servers. I am using Vercel, and its serverless functions have a short timeout, so I need a way to pass a webhook route to the request somehow.
I need Baseten to notify my webhook route when the prediction finishes, because I cannot keep polling for the result from short-lived functions. On my side, the webhook route would just be a small serverless function, roughly like the sketch below (the /api/baseten-webhook route name and the payload shape are placeholders, not anything Baseten documents).
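// pages/api/baseten-webhook.js -- hypothetical Vercel API route
module.exports = async (req, res) => {
  if (req.method !== 'POST') {
    res.status(405).end();
    return;
  }
  // Payload shape is an assumption; adjust to whatever Baseten actually sends.
  const prediction = req.body;
  console.log('Prediction finished:', prediction);
  // ...store the result or notify the client here
  res.status(200).json({ received: true });
};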
This is an example of the webhook feature from runpod: https://docs.runpod.io/serverless/endpoints/send-requests#--webhook
When can I get this feature, and if not, what is the workaround?
Hi @offchan42, sorry for the late response here.
We are working on this feature now, and will have it done by the end of April.
Hello @squidarth, is this webhook feature available now? I also need it for Whisper to translate lengthy videos and to be notified at my web endpoint whenever a translation completes.
Hey @usman61 thanks for reaching out! We've just launched async inference on Baseten for beta testing and we'd love for you to try it out. Check out our docs on async inference here https://docs.baseten.co/invoke/async and the docs on the webhook integration here https://docs.baseten.co/invoke/async-secure. Feel free to reach out via our in-app chat or [email protected] with questions and feedback!
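For reference, adapting the axios example above to the async endpoint would look roughly like the sketch below; please double-check the exact field names (model_input, webhook_endpoint) and the response format against the async docs linked above.
const axios = require('axios');

const BASETEN_API_KEY = process.env.BASETEN_API_KEY;
const modelId = ''; // Replace with your model id

axios.post(`https://model-${modelId}.api.baseten.co/production/async_predict`, {
  model_input: {
    prompt: 'a little boy looking through a large magical portal, extremely detailed, 8k',
  },
  // The webhook route that should be called when the prediction finishes
  webhook_endpoint: 'https://your-app.vercel.app/api/baseten-webhook',
}, {
  headers: {
    Authorization: `Api-Key ${BASETEN_API_KEY}`,
  },
})
  .then((response) => {
    // The async endpoint returns right away; the result is delivered to the webhook later.
    console.log('Async request accepted:', response.data);
  })
  .catch((error) => {
    console.error('Error:', error.message || error);
  });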
Thanks for the very useful resources @spal1.
One minor issue in the documentation needs to be fixed: please update the Node.js code snippet for the Async Inference API endpoint from predict to async_predict.
@squidarth, could you please ask the relevant person to update this documentation?
Given below are the documentation link and the reference image.
https://docs.baseten.co/api-reference/production-async-predict