How to estimate the cost of a prediction?
To calculate the cost of a prediction, three values are needed:
- Prediction time (returned as part of GET prediction)
- Hardware on which the model was run
- Price per second of running that hardware (values listed here)
Given these, we can approximate the cost as time × price. However, the GET prediction API does not return which hardware the prediction ran on, even though the website shows this information. Would it be possible to return the hardware as part of the GET prediction response? If not, is there any other way to calculate the inference cost?
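In the meantime, here is a minimal sketch of the kind of workaround I have in mind: read `metrics.predict_time` from the GET prediction response and multiply it by a hard-coded per-second price, with the hardware passed in manually since the API doesn't return it. The hardware names and prices below are placeholders (the real values are on the pricing page and may change), and the auth header format is assumed from the docs.

```python
import os
import requests

# Placeholder per-second prices keyed by an assumed hardware name.
# These numbers are illustrative only; use the values from the pricing page.
PRICE_PER_SECOND = {
    "cpu": 0.000100,
    "nvidia-t4": 0.000225,
    "nvidia-a40": 0.000575,
}

def estimate_prediction_cost(prediction_id: str, hardware: str) -> float:
    """Estimate cost as predict_time * price/sec for the given hardware.

    The hardware has to be supplied manually because, as noted above,
    the GET prediction response does not include it.
    """
    resp = requests.get(
        f"https://api.replicate.com/v1/predictions/{prediction_id}",
        headers={"Authorization": f"Token {os.environ['REPLICATE_API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    prediction = resp.json()

    # metrics.predict_time is the inference time in seconds for a
    # completed prediction (assumption: the field is present once the
    # prediction has finished).
    predict_time = prediction["metrics"]["predict_time"]
    return predict_time * PRICE_PER_SECOND[hardware]

# Example: rough cost of a prediction you know ran on an A40.
# print(estimate_prediction_cost("abc123", "nvidia-a40"))
```

This still leaves the original problem: you have to know the hardware out of band (e.g. from the model page), which is exactly why having it in the API response would help.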
It's very strange that there is no API to find out the cost. Maybe they're trying to discourage API resale?
So there really is no way to get the prediction cost? I've migrated to the stream URL and discovered that there's no way to get the cost, only the token counts and the prediction time.