cloud-functions-go
Use docker_file to build node_modules and acquire additional libraries
This is a longer-term FR:
At the moment, the build and test steps require gcc, make, and nodejs. However, those are only needed if you want to deploy with functions.zip or run the test node server.
The suggestion is two-part:
Allow users to run the cloud function alone locally using the native toolchain, without dealing with node and the execer.
The advantage of this is that you don't need node, make, gcc, or anything else, and can elect to acquire them during deployment by other means.
By other means I mean building node_modules and the execer entirely in docker and then "copying" them to your workstation just prior to creating functions.zip.
This would require the runtime binary to both accept sockets as arguments and be able to listen on its own, e.g.:
http.ListenAndServe(":8080", nil)
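A minimal sketch of that dual mode is below; the -fd flag and the single inherited file descriptor are illustrative assumptions, not the shim's actual contract.

package main

import (
	"flag"
	"log"
	"net"
	"net/http"
	"os"
)

func main() {
	// Hypothetical flag: an already-open listener handed down as a file
	// descriptor (e.g. by the node/execer shim). If absent, the binary
	// listens on its own port for standalone local development.
	fd := flag.Int("fd", -1, "inherited listener file descriptor (optional)")
	flag.Parse()

	// Handlers are assumed to be registered on http.DefaultServeMux elsewhere.

	if *fd >= 0 {
		// Serve on the socket passed in by the wrapping process.
		l, err := net.FileListener(os.NewFile(uintptr(*fd), "listener"))
		if err != nil {
			log.Fatal(err)
		}
		log.Fatal(http.Serve(l, nil))
	}

	// No socket passed: listen directly, no node/make/gcc needed.
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Run it with no flags for standalone local testing; the wrapping process would pass the listener's descriptor when it wants to hand the socket off.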
The following Makefile and Dockerfiles are an example of this 'just in time' library/module support:
Makefile
all: node_modules lib bin_from_local
	zip -FS -r $(OUT) bin/ lib/ node_modules index.js package.json -x *build*

bin_from_local:
	<use go to compile ./main into bin/>

node_modules:
	docker build -t docker_tmp -f dockerfiles/Dockerfile_node_modules .
	docker cp `docker create docker_tmp`:/user_code/node_modules .
	docker rmi -f docker_tmp

lib:
	docker build -t docker_tmp -f dockerfiles/Dockerfile_lib .
	docker cp `docker create docker_tmp`:/user_code/lib .
	docker rmi -f docker_tmp
To compile node_modules:
Dockerfile_node_modules
FROM debian:jessie
ADD . /user_code
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl \
        gcc \
        npm \
        build-essential \
    && rm -rf /var/lib/apt/lists/*
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
    apt-get update && apt-get install -y nodejs
WORKDIR /user_code
RUN npm install --save local_modules/execer
To acquire additional shared object (.so) files and copy them to lib/:
Dockerfile_lib
FROM debian:jessie
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        libc6 \
        libcurl3 \
        libgcc1 \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /user_code/lib && \
    for i in `dpkg -L libc6 libcurl3 libgcc1 | egrep "^/usr/lib/x86_64-linux-gnu/.*\.so\."`; do cp $i /user_code/lib; done
I want to keep things simple in the simple case. I want to ensure that local development remains easy and has minimal dependencies.
I agree that a mode that packaged up the execer library without compiling it would be helpful. You are still going to need to build the Go binary, but compiling that is fast and cross-platform. Maybe there is an npm flag to just copy the code without compiling it.
I would be willing to add a Vagrantfile with an optional docker provider. Would that help? That could help enable Windows development as well.
@iangudger I do think the local dev story and deployment will be pretty easy with the technique outlined above:
All that you'd really need is docker and the native language's toolchain (in this case, just Go) and nothing else. Otherwise, you'd need node, make, and gcc too.
Note, you can do Windows development here too (i.e., docker does the heavy lifting and just copies the files from the temp container to your local system: the execer node_module, any additional LD_LIBRARY_PATH libs, etc.).
I'm fine with deferring this until later though - this is just extra stuff that should simplify dev/deployment. In the meantime, I'll post a question to the Google Container Builder team to see how it can help with GCF deployments (i.e., a feature request that does these steps for you and 'injects' the needed files into the GCF runtime, vs. me creating them and uploading them inside functions.zip).
@ssttevee
@DazWilkin
We may be able to use this sample to simplify the build steps by offloading much of it to Google Container Builder + Cloud Source Repositories (I think).
I spent some time refactoring the .NET variation of GCF with multi-stage builds.
It simplified the install/setup instructions considerably.
(Basically, use multi-stage builds to compile execer, install the required .NET shared objects, and finally 'copy them out' of the container to your laptop.)
I can make a PR for a similar flow for this if you think it'd help here too.
@salrashid123 Feel free to make a PR. I am not personally a big fan of Docker, but maybe other people are.
Close -- I'm able to use Cloud Builder to deploy from the repo but the Container Builder for Go is causing me problems...
https://github.com/GoogleCloudPlatform/cloud-builders/issues/208
OK, it's working.
export PROJECT=[[YOUR-PROJECT]]
export BILLING=[[YOUR-BILLING]]
gcloud projects create $PROJECT
gcloud beta billing projects link $PROJECT --billing-account=$BILLING
gcloud services enable cloudfunctions.googleapis.com --project=$PROJECT
gcloud services enable cloudbuild.googleapis.com --project=$PROJECT
# Permit Container Builder to deploy Cloud Functions
NUM=$(gcloud projects describe $PROJECT \
  --format='value(projectNumber)')
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/cloudfunctions.developer
Add to the cloned directory (1) .gitignore:
build
/examples
/nodego
*.go
*.md
*.sh
function.zip
Makefile
Vagrantfile
Then (2) cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/go:debian"
  args: ["build", "-tags", "netgo", "-tags", "node", "main.go"]
  env: [
    "GOARCH=amd64",
    "GOOS=linux",
    "GOPATH=."
  ]
- name: "gcr.io/cloud-builders/npm"
  args: ["install", "--ignore-scripts", "-save", "local_modules/execer"]
- name: "gcr.io/cloud-builders/gcloud"
  args: [
    "beta",
    "functions",
    "deploy", "helloworld",
    "--entry-point", "helloworld",
    "--source", "./",
    "--trigger-http",
    "--project", "${PROJECT_ID}"
  ]
Then create a build trigger from (your fork of the) GitHub repo:
Pushing the addition of the .gitignore and cloudbuild.yaml:
git push
will trigger the Container Builder build:
- Compile the Go binary
- Run npm install
- Run gcloud beta functions deploy (from local)
and, of course:
curl --request GET https://us-central1-[[YOUR-PROJECT]].cloudfunctions.net/helloworld
Hello, I'm native Go!
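For reference, a handler producing that response could be as small as the sketch below; it uses plain net/http and leaves out the repo's execer/nodego wiring, so treat it as an approximation of the helloworld entry point rather than the actual example code.

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// "helloworld" entry point: answer the HTTP trigger.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello, I'm native Go!")
	})
	http.ListenAndServe(":8080", nil)
}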