Feature request: allow passing the full OpenAPI config to the serverless runtime image as an environment variable
I'd like to request support for passing the full value of the ~~OpenAPI~~ Service Management config as an environment variable, rather than only setting the path to the config as an environment variable.
Example:

```shell
gcloud run deploy ... \
  --image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
  --set-env-vars=ENDPOINTS_SERVICE_CONFIG="{ ... }"
```
This has several advantages, most notably:
- The step of building your own image is eliminated.
- Updating the configuration becomes much faster because you only need to change an environment variable.
- Overall simplification.
I also don't think you'd have to get rid of the old configuration methods; this could just be added as another option, similar to how both `ENDPOINTS_SERVICE_NAME` and `ENDPOINTS_SERVICE_PATH` are still available.
The first difficulty is that the CLI `gcloud run deploy` is owned by the Google Cloud Run team. It is a general-purpose CLI, not just for Cloud Endpoints. It would be very hard to justify a feature just for Cloud Endpoints.
What we could do is provide a shell script that achieves this with the following steps:
- Deploy ESPv2 to a Cloud Run service to get its host name.
- Replace the "host" field with the Cloud Run host name in the OpenAPI spec.
- Run the CLI `gcloud endpoints services deploy` to deploy the service config.
- Build a docker image with the service config.
- Run `gcloud run deploy` with the new docker image.
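A rough sketch of what such a script could look like — the service name, project, region, and file names are placeholders, and the exact flags should be checked against the gcloud documentation:

```shell
#!/bin/sh
# Sketch of the five steps above; all names and values are illustrative.
set -e

# 1. Deploy ESPv2 to Cloud Run to learn its host name.
gcloud run deploy my-espv2-service \
  --image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
  --allow-unauthenticated --region=us-central1
HOST="$(gcloud run services describe my-espv2-service \
  --region=us-central1 --format='value(status.url)' | sed 's|https://||')"

# 2. Put the Cloud Run host name into the OpenAPI spec's "host" field.
sed "s|^host:.*|host: ${HOST}|" openapi.yaml > openapi-deploy.yaml

# 3. Push the spec to ServiceManagement to produce the processed config.
gcloud endpoints services deploy openapi-deploy.yaml

# 4. Build a docker image with the service config baked in.
docker build -t "gcr.io/my-project/espv2-custom" .

# 5. Redeploy Cloud Run with the new image.
gcloud run deploy my-espv2-service \
  --image="gcr.io/my-project/espv2-custom" --region=us-central1
```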
> It would be very hard to justify a feature just for Cloud Endpoints
I'm not sure I understand.
Isn't `--set-env-vars` a general-purpose argument for setting any environment variable? By my understanding, it should already be possible to set this environment variable using `gcloud run deploy` without making any changes to that tool; the docker image then only needs to read it.
I didn't mean this as a change specific to `gcloud run deploy` either; it's just an example of one way you would set the environment variable, so I'm a little lost.
I see your point. In that case, all five tasks I listed above would have to be done when each Cloud Run docker instance starts, which would be very bad for cold start. Cloud Run is a scale-to-zero service: if there is no traffic, all instances are killed. When a new request comes in, a new instance is started (a cold start), and all five tasks would need to be done again.
I see, I think I misunderstood the purpose of `ENDPOINTS_SERVICE_PATH` and gave a bad example, but I believe the idea of passing in the config via an environment variable will still work.
Right now the sequence for building and starting the image is as follows:
- Build a customized ESPv2 docker image with the `ENDPOINTS_SERVICE_PATH` environment variable built in
- Deploy the customized docker image, e.g. via `gcloud run deploy`
- Cold start
- Read the `ENDPOINTS_SERVICE_PATH` environment variable that was built into the docker image at build time
- Start the proxy
The sequence I'd like to propose is as follows:
- Do not build a custom image
- Deploy the standard docker image, e.g. via `gcloud run deploy`, with the full config JSON passed to the instance as an environment variable `ENDPOINTS_SERVICE_CONFIG`. It contains the exact same contents that would have been in the file at `ENDPOINTS_SERVICE_PATH` with the other approach.
- Cold start
- Read the `ENDPOINTS_SERVICE_CONFIG` environment variable
- Write the config to disk if needed
- Start the proxy
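The cold-start part of the proposed sequence could be sketched like this — the variable name, sample value, and file path are illustrative, not the real ESPv2 startup code:

```shell
#!/bin/sh
# Hypothetical startup step: materialize the full service config from an
# environment variable onto local disk before starting the proxy.
# The value here stands in for what would be set at deploy time via
# --set-env-vars; the variable and path names are assumptions.
ENDPOINTS_SERVICE_CONFIG='{"name":"example.endpoints.demo.cloud.goog","id":"2024-01-01r0"}'
CONFIG_FILE="/tmp/service_config.json"

# Write the config to disk so the proxy can read it as a local file.
printf '%s' "${ENDPOINTS_SERVICE_CONFIG}" > "${CONFIG_FILE}"

# The proxy would then be started pointing at CONFIG_FILE.
```

This is cheap compared to the five deploy-time tasks: it is a single in-memory copy to disk, with no network calls on the cold-start path.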
So there are some actions that may need to happen on start, but nothing nearly as bad as making an API request.
There is an important step your proposal is missing: ESPv2 cannot work on the OpenAPI JSON directly. The spec needs to be pushed to the Google ServiceManagement service via the CLI `gcloud endpoints services deploy`, and the processed config then fetched from it. ESPv2 only works with the config processed by the Google ServiceManagement service. We also should ensure that:
- this is not performed on every cold start
- the processed service config is built into the docker image as a local file
> ESPv2 cannot work on the OpenAPI JSON directly
That was my original misunderstanding. However, what I'm saying now is that you could pass the config processed by Google ServiceManagement as the environment variable, not the OpenAPI config. The principle is the same; I just misunderstood the contents of the config. Apologies for the confusion.
> the processed service config is built into the docker image as a local file
Are you saying that reading from an environment variable is that much slower than reading from disk? I just want to be sure I understand why. If it's that bad, then I suppose this approach won't work.
I see. You want to pass the whole service config (processed by Google ServiceManagement) as an environment variable to the ESPv2 docker image. It could be very big; I'm not sure an environment variable can hold it. If gRPC transcoding is required, the config includes the proto descriptor, which could be 5 to 10 MB.
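To illustrate the size concern with a rough back-of-the-envelope check (random bytes stand in for a real proto descriptor): base64-encoding a 5 MB descriptor for embedding in a JSON config grows it by about a third, so the environment variable would need to carry several megabytes of text. For comparison, Linux caps a single environment string at roughly 128 KiB at `execve` time, far below that.

```shell
# Illustrative size check: a 5 MB binary descriptor base64-encodes to
# roughly 6.7 MB of text, which is what the env var would have to hold.
DESCRIPTOR_B64="$(head -c 5000000 /dev/urandom | base64 | tr -d '\n')"
printf '%s\n' "${#DESCRIPTOR_B64}"
```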