
How to stream queries to Pages/Components?

Open dihmeetree opened this issue 1 year ago • 31 comments

Currently, the Steering Committee members are the voting members and also the ones with service desk access in https://github.com/cncf/foundation/blob/main/project-maintainers.csv.

A few relevant SIGs have service desk access only (Contributor Experience, Infra, Release).

Following https://github.com/cncf/foundation/pull/223/files, voting changed from fractional votes per project to one maintainer, one vote.

The Steering Committee has discussed adding SIG Leads, that is, Chairs and Tech Leads, to the maintainers list in a voting but non-service-desk role.

We think this will reasonably improve the representation of the project; all other subgroups should have representation via Steering and their SIG Leads.

For the service desk specifically, we already have sufficient representation across the SIGs that are responsible for providing services to the project and Steering.

This issue will track voting on these changes because relevant policy docs currently live across multiple repos in multiple locations.

/assign @kubernetes/steering-committee

dihmeetree avatar Feb 06 '24 17:02 dihmeetree

/kind failing-test

pacoxu avatar Jan 24 '24 09:01 pacoxu

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: pacoxu. Once this PR has been reviewed and has the lgtm label, please ask for approval from saschagrunert. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

  • Approvers can indicate their approval by writing /approve in a comment
  • Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot avatar Jan 24 '24 09:01 k8s-ci-robot

https://github.com/kubernetes/release/blob/d5a2bc91d8336b3962768676d03d57ca7f74078a/images/releng/k8s-ci-builder/Makefile#L20

The CI is using the default value from that Makefile line.

pacoxu avatar Jan 24 '24 09:01 pacoxu

root@65317feff64c:/# systemd --version
systemd 252 (252.19-1~deb12u1)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
root@65317feff64c:/# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"

I added the last commit to comment it out.

https://github.com/docker/cli/issues/4807#issuecomment-1903950217

In Debian bookworm, the default limit sounds like it is already high enough.

root@65317feff64c:/# sed -i 's/ulimit -Hn/# ulimit -Hn/g' /etc/init.d/docker
root@65317feff64c:/#   service docker start
Starting Docker: docker.

root@65317feff64c:/# cat /etc/init.d/docker | grep ulimit
		# ulimit -Hn 524288
root@65317feff64c:/# ulimit -Sn
1048576
root@65317feff64c:/# ulimit -Hn
1048576

1048576 > 524288, so the current hard limit already exceeds what the init script tries to set.
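Put together, a minimal sketch of this workaround (the guard and variable names below are illustrative additions, assuming Debian bookworm with the sysvinit-style /etc/init.d/docker script shown above; they are not part of the original session):

#!/bin/bash
# Only comment out the "ulimit -Hn" call in /etc/init.d/docker when the current
# hard nofile limit already meets or exceeds the 524288 the script would set,
# as is the case in this bookworm container (1048576 > 524288).
required=524288
current_hard=$(ulimit -Hn)

if [ "$current_hard" = "unlimited" ] || [ "$current_hard" -ge "$required" ]; then
    sed -i 's/ulimit -Hn/# ulimit -Hn/g' /etc/init.d/docker
    service docker start
else
    echo "hard nofile limit ($current_hard) is below $required; keep the ulimit call" >&2
fi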

pacoxu avatar Jan 24 '24 10:01 pacoxu

/hold for discussion

xmudrii avatar Jan 24 '24 11:01 xmudrii

Thank you for opening this PR! Generally this looks good; however, this change is not as simple as it might seem, and I'd like us to discuss it more in depth.

If we update k8s-ci-builder, we should/must also update k8s-cloud-builder. Otherwise, we'll be releasing binaries built with bullseye but testing binaries built with bookworm. While that might be fine, it poses some risk that we should take into consideration.

However, if we update k8s-cloud-builder, we'll be running into https://github.com/kubernetes/release/issues/3246. This might be acceptable for v1.30, but IMO it's not acceptable for earlier releases as it'll change what operating system versions we support.

As mentioned in the issue (https://github.com/kubernetes/kubernetes/issues/122939#issuecomment-1907945390), let's see how feasible it is to pin Docker to a lower version. That way we don't affect users while hopefully unblocking CI.
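For illustration only, pinning usually looks something like the sketch below on a Debian-based image (assumptions: the docker-ce/docker-ce-cli packages from Docker's apt repository, and a placeholder version string; the actual pin may be implemented differently in the release tooling):

# List the Docker versions available from the configured apt repositories.
apt-cache madison docker-ce
# Install a specific (placeholder) version and hold it so it is not upgraded.
apt-get install -y docker-ce=<pinned-version> docker-ce-cli=<pinned-version>
apt-mark hold docker-ce docker-ce-cli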

xmudrii avatar Jan 24 '24 11:01 xmudrii

However, if we update k8s-cloud-builder, we'll be running into #3246. This might be acceptable for v1.30, but IMO it's not acceptable for earlier releases as it'll change what operating system versions we support.

If so, I'd like to fix the issue with a lower Docker version for now.

Besides, we should think about the plan for changing the minimal kernel version. I have an open issue in https://github.com/kubernetes/kubernetes/issues/116799.

pacoxu avatar Jan 25 '24 06:01 pacoxu

@xmudrii thanks for your detailed explanation.

I opened https://github.com/kubernetes/release/pull/3430 to lock the docker version.

This may need more investigation for https://github.com/kubernetes/release/issues/3246.

pacoxu avatar Jan 25 '24 06:01 pacoxu

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jan 26 '24 18:01 k8s-ci-robot

@pacoxu Can we close this PR for now?

xmudrii avatar Jan 28 '24 20:01 xmudrii

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 27 '24 20:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 27 '24 20:05 k8s-triage-robot

/close

xmudrii avatar May 27 '24 21:05 xmudrii

@xmudrii: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar May 27 '24 21:05 k8s-ci-robot

@pacoxu Please feel free to reopen and rebase the PR if you think it's still needed!

xmudrii avatar May 27 '24 21:05 xmudrii