Proposal: support multi-arch functions
Current Behaviour
Right now, if I want a function that runs on both amd64 and ppc64le, I have to do the following:
- Create a new function:

```sh
$ faas-cli new function --lang golang-http
```
- Change function.yml to have two functions for the two architectures, plus a `function` entry that will point at the combined manifest image:
```yml
version: 1.0
provider:
  name: openfaas
  gateway: http://gateway:31112
functions:
  function-amd64:
    lang: golang-http
    handler: ./function
    image: id/function:latest-amd64
  function-ppc64le:
    lang: golang-http-ppc64le
    handler: ./function
    image: id/function:latest-ppc64le
  function:
    image: id/function:latest
```
- Build the two functions:

```sh
$ faas-cli build -f ./function.yml
```
- Push both images to the Docker registry (see the sketch after these steps)
- Create a manifest list:

```sh
$ docker manifest create id/function:latest id/function:latest-amd64 id/function:latest-ppc64le
```
- Push the manifest:

```sh
$ docker manifest push id/function:latest
```
- Deploy the function:

```sh
$ faas-cli deploy -f ./function.yml --filter function
```
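The push step above has no command attached; with this stack file it would be something like the following (assuming `--filter` works on `push` the same way it does on `deploy`):

```sh
$ faas-cli push -f ./function.yml --filter function-amd64
$ faas-cli push -f ./function.yml --filter function-ppc64le
```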
Expected Behaviour
Something like:

```sh
$ faas-cli up -f ./function.yml -a amd64,ppc64le
```
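Until such a flag exists, the behaviour could be approximated with a small wrapper script. This is only a sketch: the `function-<arch>` naming and per-arch image tags mirror the stack file above, and the `-a` expansion logic is entirely hypothetical.

```sh
#!/bin/sh
# Emulate the proposed -a flag: build and push one image per
# architecture, then assemble a manifest list and deploy the
# combined entry. docker manifest is an experimental CLI feature
# on Docker 18.06 (export DOCKER_CLI_EXPERIMENTAL=enabled).
set -e

IMAGE="id/function:latest"
MANIFESTS=""

for arch in amd64 ppc64le; do
    faas-cli build -f ./function.yml --filter "function-$arch"
    faas-cli push -f ./function.yml --filter "function-$arch"
    MANIFESTS="$MANIFESTS $IMAGE-$arch"
done

docker manifest create "$IMAGE" $MANIFESTS
docker manifest push "$IMAGE"
faas-cli deploy -f ./function.yml --filter function
```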
Context
I just want to build functions on my Mac, test them on the Mac and on Intel hardware, and deploy them into production anywhere.
Your Environment
- FaaS-CLI version (full output from `faas-cli version`): 0.8.21
- Docker version (full output from `docker version`): 18.06.0-ce
- Are you using Docker Swarm (FaaS-swarm) or Kubernetes (FaaS-netes)? Kubernetes
- Operating System and version (e.g. Linux, Windows, MacOS): MacOS, Linux
Thanks for your suggestion.
How do you propose to build ppc64le on your local computer? What happens if someone is using kaniko or the `faas-cli build --shrinkwrap` option, as used in OpenFaaS Cloud?
It is a real example: I built it on my Mac before posting and deployed the same function on two clusters, one on x86 and one on ppc64le.
golang-http/Dockerfile:

```dockerfile
FROM openfaas/of-watchdog:0.5.3 as watchdog
FROM golang:1.10.4-alpine3.8 as build
...
FROM alpine:3.8
```
golang-http-ppc64le/Dockerfile:

```dockerfile
FROM powerlinux/of-watchdog:latest-dev-ppc64le as watchdog
FROM ppc64le/golang:1.10.4-alpine3.8 as build
...
FROM ppc64le/alpine:3.8
```
That is the most important difference between the two.
kaniko/shrinkwrap I still have to check, but I think in the "normal" world you don't need such multi-arch functions. If you know that your whole cloud runs on x86 hardware, you don't need them; just build with the default architecture and everything works fine.
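For reference, a shrinkwrapped context can be handed to a builder like kaniko without a local Docker daemon. A rough sketch, where the mount paths and executor tag are assumptions, and where kaniko would still build for whatever architecture the node itself runs:

```sh
# Write the build context to ./build/function-amd64/ without building it.
faas-cli build -f ./function.yml --shrinkwrap --filter function-amd64

# Hand the context to kaniko; registry credentials are mounted where
# kaniko expects them. kaniko executes RUN steps natively, so this
# produces an image for the architecture of the machine it runs on.
docker run --rm \
  -v "$PWD/build/function-amd64:/workspace" \
  -v "$HOME/.docker/config.json:/kaniko/.docker/config.json:ro" \
  gcr.io/kaniko-project/executor:latest \
  --context dir:///workspace \
  --dockerfile /workspace/Dockerfile \
  --destination id/function:latest-amd64
```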
It is more of a feature for people who have a lot of different hardware for different purposes. Sometimes you test at home on a Raspberry Pi, then run it somewhere in the cloud to show it to a customer, and in the end you deploy it on ppc64le in production. This could be an easy way for a programmer to achieve that. Of course it means that you have a template for each architecture and that your toolchain is able to cross-build programs (I like Go! :) )
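For the Go case, cross-building needs nothing beyond the standard toolchain; a minimal example (output path hypothetical):

```sh
# Build the handler for linux/ppc64le from a Mac; CGO is disabled so
# no C cross-compiler is required.
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le go build -o handler ./function
```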
@alexellis do we consider this complete now?