Mattt

266 comments

Hi, @Varun2101. Thanks for sharing this feedback. To your second point, you can get more control over the behavior of a model on Replicate by creating a [deployment](https://replicate.com/docs/deployments). I don't...
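For reference, here's a minimal sketch of calling a deployment from the Python client once you've created one; the deployment name and input below are placeholders:

```python
import replicate

# Placeholder name: substitute a deployment you created at
# https://replicate.com/deployments
deployment = replicate.deployments.get("your-username/your-deployment")

# Run a prediction against the deployment and wait for it to finish.
prediction = deployment.predictions.create(
    input={"prompt": "an example prompt"}
)
prediction.wait()
print(prediction.output)
```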

Hi @RichardNeill. Thanks for opening this issue. I'm happy to report that I just merged https://github.com/replicate/replicate-python/pull/263, which should help make API errors more understandable and actionable.

```python
import replicate
from ...
```
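As a rough sketch of what handling an API error looks like with that change (the model identifier here is a placeholder, and the release notes describe exactly what the error now carries):

```python
import replicate
from replicate.exceptions import ReplicateError

try:
    output = replicate.run(
        "owner/model:version",  # placeholder model identifier
        input={"prompt": "an example prompt"},
    )
except ReplicateError as e:
    # With #263 merged, the exception message includes the detail
    # returned by the API, which makes failures easier to act on.
    print(f"Replicate API error: {e}")
```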

Hi @jpiabrantes. A `ModelError` is an error returned by the model, so there's nothing to be done from the client. Could you share a link to the model — or...

The latest release of the Python client library ([0.29.0](https://github.com/replicate/replicate-python/releases/tag/0.29.0)) adds a `prediction` field to `ModelError` to help debug and respond to failures. If you're still seeing these issues intermittently, you...
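A small sketch of what that enables; the model identifier and input are placeholders:

```python
import replicate
from replicate.exceptions import ModelError

try:
    output = replicate.run(
        "owner/model:version",  # placeholder model identifier
        input={"prompt": "an example prompt"},
    )
except ModelError as e:
    # As of 0.29.0, the failed prediction is attached to the exception,
    # so its id, status, and logs can be inspected or reported.
    print(e.prediction.id)
    print(e.prediction.status)
    print(e.prediction.logs)
```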

Hi @jdkanu. Thank you for reporting this. Looking at our telemetry, it does seem like predictions on GPUs in certain regions are failing more often due to read timeouts. We're...

Hi @jdkanu. To clarify, the error you're seeing is a problem with the model rather than the Python client itself. Looking at the schema for [meta/llama-2-7b](https://replicate.com/meta/llama-2-7b), `stop_sequences` is documented as:...
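For illustration only, a sketch assuming `stop_sequences` is passed as a single comma-separated string rather than a list; verify that against the schema on https://replicate.com/meta/llama-2-7b before relying on it. The prompt and stop tokens here are made up:

```python
import replicate

# Assumption: stop_sequences is a comma-separated string, not a list.
# The values below are examples only.
output = replicate.run(
    "meta/llama-2-7b",
    input={
        "prompt": "Write a haiku about the ocean.",
        "stop_sequences": "<end>,###",
    },
)
print("".join(output))
```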

Hi @UmarRamzan. I hear you — large models can take a while to set up from a cold boot. We do what we can to optimize network storage and caches, but...

Hi @kartikwar. Sorry to hear this isn't working as expected. Can you please go to your replicate.com dashboard, locate your training record, and share its ID?

Hi @kartikwar. As of [0.26.1](https://github.com/replicate/replicate-python/releases/tag/0.26.1), `destination` should no longer be `None` for successful trainings.
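For example, a quick way to check that on an existing training (the ID is a placeholder):

```python
import replicate

# Placeholder ID: use the training ID shown in your Replicate dashboard.
training = replicate.trainings.get("your-training-id")
print(training.status)
print(training.destination)  # set for successful trainings as of 0.26.1
```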

Hi @charliemday. No, Replicate doesn't currently implement a batch processing API like OpenAI's. It's something we're considering, though. Can you share more about your intended use case?