
Serverless / Container-as-a-Service

Open tanandy opened this issue 3 years ago • 12 comments

As a user, I want to be able to use serverless containers in the public cloud. I want to consume these features through standard APIs / the OVHcloud Customer Panel. Then I will be able to deploy highly scalable containerized applications on a fully managed serverless platform.

tanandy avatar Apr 13 '21 08:04 tanandy

@tanandy Managed Databases are indeed on the short-term roadmap; do not hesitate to subscribe and vote on https://github.com/ovh/public-cloud-roadmap/issues/25 and share your priorities in the comments.

Concerning serverless functions and serverless containers, we are still evaluating options. We don't see a clear, stable, and fully reversible technology that commands consensus at this stage, but I am interested in you sharing more details about your needs in terms of features/API and billing model.

mhurtrel avatar Apr 13 '21 08:04 mhurtrel

OK. For billing of all serverless products, we expect a pay-as-you-go model for the resources we actually use (e.g. GB used, traffic, ...).

For example, we expect managed serverless databases to offer larger disk space that scales automatically (e.g. up to 128 TB), so we don't have to worry much about disk-space limits...

For functions, it's basically being able to run a serverless architecture or just a basic task...

For serverless containers, we need to be able to run containers easily without having to manage them much (CaaS, PaaS).

tanandy avatar Apr 13 '21 08:04 tanandy

For the technology, I think you could give https://www.openfaas.com/ a try
(https://github.com/openfaas/faas); some French OVH competitors have converged on it.

tanandy avatar Apr 13 '21 11:04 tanandy

@mhurtrel what about Knative?

tanandy avatar Dec 17 '21 14:12 tanandy

@mhurtrel I agree with @tanandy; Knative was recently transferred to the CNCF: https://www.cncf.io/blog/2022/03/02/knative-accepted-as-a-cncf-incubating-project/

Many developers are now familiar with Kubernetes, so I hope OVH will support Knative soon. I like working with OVH since it's a French company.

But currently, my company plans to move out of OVH to GCP and its Knative support on top of Cloud Run.

(Note: my company is subject to highly variable traffic, so we need a serverless context to avoid future cloud-billing surprises and scalability limitations.)

So I hope the knative option will be checked ☺️
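For context, scale-to-zero is the default behaviour of Knative Serving and can be tuned per Service via autoscaling annotations. A minimal sketch (the service name, image, and scale bounds below are illustrative placeholders, not anything OVHcloud has announced):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # placeholder service name
spec:
  template:
    metadata:
      annotations:
        # minScale "0" allows scale-to-zero when there is no traffic
        autoscaling.knative.dev/minScale: "0"
        # cap the burst size for highly variable traffic
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest   # placeholder image
          ports:
            - containerPort: 8080
```

Requests are only routed to a revision once its container is listening on the declared port, which is what makes cold-start behaviour manageable.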

Moumouls avatar Mar 19 '22 22:03 Moumouls

@Moumouls @tanandy Indeed, we are currently observing some alignment on a couple of standards for these products, and will consider offering serverless capabilities in the mid term. I am adding this ticket to the backlog, open for comments.

Note that Knative will be offered in our Hosted Private Cloud powered by Anthos in the short term, so the scalability model (monthly commitment on bare-metal nodes, which you can dynamically provision to a user cluster where you deploy Knative Serving and Eventing) may not fit your use case with highly variable traffic @Moumouls

I am very interested in feedback on your use cases for a container-as-a-service product like this:

  • order of magnitude of scalability needed (scale to 0? acceptable cold-start timings?)
  • localisation?
  • would stateless, with high-speed connectivity to object storage, be sufficient, or would management of persistent volumes be a requirement?
  • usage of competing technologies/products and the potential limits you hit

mhurtrel avatar Mar 20 '22 19:03 mhurtrel

@tanandy I edited the issue content and title to focus on CaaS/FaaS, excluding serverless databases, which are quite a different challenge/design. Do not hesitate to open a dedicated issue for that with detailed requirements; it would certainly interest my colleague @baaastijn.

mhurtrel avatar Mar 20 '22 19:03 mhurtrel

Hi @mhurtrel

order of magnitude of scalability needed (scale to 0? acceptable cold-start timings?)

Scale to 0, sure. About cold starts: in CaaS, I suppose the main goal on your side will be to optimize image caching/availability in the region. A true first cold start of a new image version doesn't matter much. But once the image is already in use on your servers, we can expect nearly instant container start (perhaps by optimizing container/request scheduling). Another point about this: in the CaaS use case, and with Knative, traffic is only routed to the container once the container is listening on the target port. So the cold start (if we don't scale to 0) only delays scaling reactivity. And developers have many techniques to speed up server listening, for example lazy module import in Node.js.

Note: also, to avoid many starts and stops, a container should only be stopped after maybe 15 min without any requests. I do not expect to be charged for a container's idle time; or, as on GCP, it could be a pricing option.

localisation ?

If the container registry is global, multi-region support would be nice. But I think GCP also provides a region-specific service URL, since that lets developers put their favorite load-balancing service (like Cloudflare) on top of the serverless provider.

would stateless, with high-speed connectivity to object storage, be sufficient, or would management of persistent volumes be a requirement?

From my point of view, a serverless app should not rely on persistent volumes (volume attachment can take a lot of time). GCP, for example, does not support this feature (for obvious reasons). I'm not sure I understand the high-speed connectivity to object storage point. But yes, the serverless service needs to be deployed in many datacenters, to be close to other cloud providers' datacenters (AWS, DO, GCP). For example, OVH is not heavily integrated with some 3rd-party services (MongoDB Atlas and their new Serverless feature, for example), so yes, connectivity/localisation is important.

usage of competing technologies/products and the potential limits you hit

Today the most complete CaaS seems to me to be GCP's Cloud Run.

Pro:

  • CPU billing based on per-request CPU usage; an idle CPU is charged much less.
  • Knative interface on top of Cloud Run's custom interface
  • Generally available (GA), so my containers can run in the same datacenter as, for example, the Mongo Serverless service
  • IAM/identity support, to accept requests only from specific sources (like CF)

Cons:

  • Knative interface not supported for cron jobs
  • Pricing is really hard to predict if you do not already have stats on your infra usage/requests/bandwidth

I have not yet checked how secrets are managed.

Moumouls avatar Mar 21 '22 08:03 Moumouls

@Moumouls Thanks a lot for this prompt and detailed feedback!

mhurtrel avatar Mar 21 '22 09:03 mhurtrel

I will split this into separate issues, since FaaS is different from CaaS. I will try to add details there when I have time.

tanandy avatar Mar 21 '22 11:03 tanandy

Basic requirements:

  • Pay as you go
  • Automatically handles deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring
  • Scale to zero
  • Cold starts: yes (to be defined)

tanandy avatar Mar 22 '22 16:03 tanandy

I used to run DevOps / continuous-deployment transformation teams and absolutely agree with @tanandy's requirements. I would, however, say that a huge market differentiator would be some kind of prepaid credit, where you get fair warning and a shutoff (with short-term mitigation steps) rather than a huge bill. It's a fear that hangs over the heads of smaller/independent businesses, which have no strict controls available beyond billing alerts.

aehlke avatar Sep 21 '22 01:09 aehlke

Support for OpenStack Zun would be really nice, and could hopefully facilitate the use of virtual-kubelet in Kubernetes.

dave-b-code avatar Nov 04 '22 13:11 dave-b-code