feedback
[how-to-run-inference-cloud-run-gpu-vllm]:
- I had to edit the image URL from the default when deploying.
- Max instances must be set to 3 in order to satisfy the GPU requirements.
Thank you for the feedback! I fixed the max instances and a few other things.
What issue did you run into with the image URL, and what error did you see?
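For reference, the updated deploy step looks roughly like the sketch below. The service name, image URL, and region are placeholders you'd swap for your own; the relevant parts are the GPU flags together with --max-instances, which Cloud Run expects to be capped when a GPU is requested.

```sh
# Sketch of a Cloud Run GPU deployment for a vLLM container.
# Assumes an NVIDIA L4 GPU in us-central1; service name and image URL are placeholders.
gcloud run deploy vllm-service \
  --image us-docker.pkg.dev/PROJECT_ID/REPO/vllm-image \
  --region us-central1 \
  --gpu 1 \
  --gpu-type nvidia-l4 \
  --max-instances 3 \
  --no-cpu-throttling \
  --cpu 4 \
  --memory 16Gi
```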