Make Knative Serving run on the edge (k0s/MicroShift/...)
Describe the feature
I am curious to see what kinds of issues and limitations we will run into when we try to run Knative Serving on something like k0s or MicroShift.
By leveraging mink and creating a single binary for Knative + Kubernetes with k0s, we could achieve a very minimal footprint.
FYI, @csantanapr and I would like to have this experiment done by a GSoC contributor this summer.
Thanks for opening the issue @aliok 👍
A few other random thoughts:
- Which parts of Knative can we turn off or remove because they are not needed on an edge worker node?
- Worker node with control plane components vs. an edge worker node running only the essentials (i.e., containerd, kubelet, and the Knative user pod (container + queue-proxy))
- Replace the queue-proxy sidecar with a single queue-proxy agent (eBPF) on the node
- Establish a baseline benchmark and automate tests that measure the footprint with metrics
Assumptions for an edge device?
- No HA needed; it is acceptable to wait for things to be recreated
- Max scale limit always 1; requests can be queued, and eventually everything is processed by the single user pod
- Slow connection between the edge worker node and the control plane node. How does the slow connection affect traffic? Where do the ingress and activator live?
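The "max scale always 1" assumption maps directly onto Knative's per-revision autoscaling annotations. A minimal sketch of such a Service (the service name and image are hypothetical, and the annotation keys assume a reasonably recent Serving release):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: edge-hello                # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Never run more than one replica of the user pod on the edge node
        autoscaling.knative.dev/max-scale: "1"
        # Allow scale-to-zero between requests
        autoscaling.knative.dev/min-scale: "0"
    spec:
      containers:
        - image: ghcr.io/example/hello:latest  # hypothetical image
```

With max-scale pinned to 1, excess requests are buffered by the activator and queue-proxy until the single user pod can process them.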
- Which parts of Knative can we turn off or remove because they are not needed on an edge worker node? Worker node with control plane components vs. an edge worker node (i.e., containerd, kubelet, Knative user pod (container + queue-proxy))
Big +1 on this one. This is the most interesting thing for me.
- Replace the queue-proxy sidecar with a single queue-proxy agent (eBPF) on the node
Sounds great but maybe optional.
- Establish a baseline benchmark and automate tests that measure the footprint with metrics
What would be the baseline? But, for sure, I would be interested in seeing something like "on a Raspberry Pi XYZ device, the system was able to handle this and that".
Assumptions for an edge device?
- No HA needed; it is acceptable to wait for things to be recreated
+1
- Max scale limit always 1; requests can be queued, and eventually everything is processed by the single user pod
Queued where?
- Slow connection between the edge worker node and the control plane node. How does the slow connection affect traffic? Where do the ingress and activator live?
"Where do the ingress and activator live?" is a tough question :)
Well, I don't know many of the answers, and I see this as a research project where we can guide a contributor.
Yes, all my questions and comments are for the summer students to look into and research with us.
The intention was not to answer them today, but to answer them together during the summer internship.
Hi all. I am Excel, and I am interested in contributing to this project for GSoC '22. I have joined the community and have been looking into Knative too. I am thinking of attending the event that was posted in the community to get up to speed with Knative in general.
Is this alright? And are there any prerequisites I need to complete to increase my chances, or to write a good proposal? Thank you!
Hi Rahul. One way to increase your chances is to include in your proposal additional alternatives to investigate, and to provide more detail on them. For example, what about using Talos, a Kubernetes-native operating system? What if the operating system also included Knative capabilities? Would that make for a more efficient footprint?
Can I work on this issue?
This issue is stale because it has been open for 90 days with no
activity. It will automatically close after 30 more days of
inactivity. Reopen the issue with /reopen. Mark the issue as
fresh by adding the comment /remove-lifecycle stale.
/reopen
/remove-lifecycle stale
Is this project listing out of date for GSoC 2023? @csantanapr @aliok
/reopen
This issue is listed for GSoC 2023 here: https://github.com/cncf/mentoring/blob/main/summerofcode/2023.md#porting-knative-serving-to-microshift
@aliok: Reopened this issue.
In response to this:
/reopen
This issue is listed for GSoC 2023 here: https://github.com/cncf/mentoring/blob/main/summerofcode/2023.md#porting-knative-serving-to-microshift
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi,
I have experimented with and installed Knative Serving on k0s as part of my proposal for GSoC '23. I am adding a reference here in case someone wants to run Knative Serving on k0s with minimal resources: Knative Serving in k0s
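For anyone who wants to try a similar setup, the general shape of such an install looks roughly like the following. This is a sketch, not the exact procedure from the linked write-up; the pinned release versions and the choice of Kourier as the networking layer are assumptions:

```shell
# Install k0s as a single-node cluster (controller and worker in one process)
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --single
sudo k0s start

# Install the Knative Serving CRDs and core components
# (the version pinned here is an assumption; pick a current release)
sudo k0s kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.10.1/serving-crds.yaml
sudo k0s kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.10.1/serving-core.yaml

# Install Kourier as a lightweight networking layer and point Serving at it
sudo k0s kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.10.0/kourier.yaml
sudo k0s kubectl patch configmap/config-network -n knative-serving \
  --type merge -p '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```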
@naveenrajm7 This is an awesome investigation and example.
I would love to have your write-up as a blog post on our website, knative.dev, in time for KubeCon EU on April 18th.
Would you be willing to create a PR to add a blog post? https://github.com/knative/docs
I think further optimizations can be done, but what you have accomplished is a good investigation 🚀
Hi @csantanapr
Great to hear this from you. Here is the PR for blog post https://github.com/knative/docs/pull/5514
Yes, in this work only resource reduction was done; no real optimization was performed. I would like to explore optimizing Knative for the edge as part of GSoC and would appreciate your pointers/guidance on my GSoC proposal.
Thanks!
/triage accepted
Hi @aliok, this didn't make it into GSoC '23. Would it be possible for this to go into LFX Mentorship Term 02 - 2023 (June - August)?
We are working towards adding ideas to LFX Mentorship as well.
@naveenrajm7 I created https://github.com/cncf/mentoring/pull/955
Hi @aliok, this proposal looks interesting. I would like to apply for this in the LFX Mentorship. It would be really helpful if you could tell me the prerequisites for this.
Hi @ReToCode @skonto,
I wanted to express my interest in this mentorship program and inform you that I have already applied through the LFX console. I have a solid background in computer science and have gained experience developing event-triggered functions using Cloud Run, Cloud Functions, and Lambda.
I am particularly excited to learn more about Go and to practice Knative on minimal Kubernetes resources. To kick-start my journey, I plan to follow the document titled "Knative Serving in k0s". Additionally, I am eager to explore the possibilities of running Knative in edge environments.
Thank you for considering my application. I look forward to the opportunity to learn and contribute.
@aliok Is there anyone still working on this issue?
@naveenrajm7 did finish the work on this and did a write-up here. @naveenrajm7 do you still plan to write a blog-post about this?
Hi, I did write a blog post on this project, which was featured in the CNCF LFX graduation post; the original post is here. As you pointed out, the technical report is in my GitHub repo.
Ok, I meant on the Knative Blog as well :) But the existing one should be sufficient, so I'm going to close this issue.