
patchesStrategicMerge does not work for a Service that exposes the same port number over multiple protocols.

Open yydzhou opened this issue 3 years ago • 2 comments

Describe the bug

base: service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
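
The base also needs its own kustomization.yaml referencing the Service; the report omits it, so this is a minimal assumed version:

```yaml
# base/kustomization.yaml (not shown in the report; a minimal assumption)
resources:
- service.yaml
```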

overlay: service-patch.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: rtmpk
    port: 1986
    protocol: UDP
    targetPort: 1986
  - name: rtmp
    port: 1935
    targetPort: 1935
  - name: rtmpq
    port: 1935
    protocol: UDP
    targetPort: 1935
  - name: https
    port: 443
    targetPort: 443
  - name: http3
    port: 443
    protocol: UDP
    targetPort: 443

kustomization.yaml

resources:
- ../../base
patchesStrategicMerge:
- service-patch.yaml

Files that can reproduce the issue

Expected output

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: rtmpk
    port: 1986
    protocol: UDP
    targetPort: 1986
  - name: rtmp
    port: 1935
    targetPort: 1935
  - name: rtmpq
    port: 1935
    protocol: UDP
    targetPort: 1935
  - name: https
    port: 443
    targetPort: 443
  - name: http3
    port: 443
    protocol: UDP
    targetPort: 443
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80

Actual output

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: rtmpk
    port: 1986
    protocol: UDP
    targetPort: 1986
  - name: rtmpq
    port: 1935
    protocol: UDP
    targetPort: 1935
  - name: http3
    port: 443
    protocol: UDP
    targetPort: 443
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80

Kustomize version {Version:kustomize/v4.5.7 GitCommit:56d82a8378dfc8dc3b3b1085e5a6e67b82966bd7 BuildDate:2022-08-02T16:28:01Z GoOs:darwin GoArch:amd64}

Platform

Additional context

yydzhou avatar Aug 09 '22 03:08 yydzhou

Using a similar port configuration in a Deployment merge test does not work either.
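
For reference, a hypothetical Deployment patch that exercises the same pattern (metadata and container names here are illustrative, not from the report). Container ports use `containerPort` as their strategic-merge key, so entries that differ only in `protocol` appear to collide the same way:

```yaml
# Illustrative Deployment overlay patch; names are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    spec:
      containers:
      - name: nginx
        ports:
        - containerPort: 443   # protocol defaults to TCP
        - containerPort: 443
          protocol: UDP        # same number, different protocol
```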

yydzhou avatar Aug 09 '22 03:08 yydzhou

/triage accepted

This reproduces. cc @natasha41575 -- I know you've done a lot of work with port merging in the past.

As a workaround, the result seems correct if you explicitly specify the TCP protocol on the ports that are relying on the default in the sample.
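
Concretely, that means stating `protocol: TCP` on every entry that omits it, likely because the strategic-merge key for `spec.ports` is `port`, so entries sharing a port number are only distinguished when `protocol` is set explicitly. A version of the sample patch with the workaround applied:

```yaml
# service-patch.yaml with protocol stated explicitly on every entry
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: rtmpk
    port: 1986
    protocol: UDP
    targetPort: 1986
  - name: rtmp
    port: 1935
    protocol: TCP   # was relying on the default
    targetPort: 1935
  - name: rtmpq
    port: 1935
    protocol: UDP
    targetPort: 1935
  - name: https
    port: 443
    protocol: TCP   # was relying on the default
    targetPort: 443
  - name: http3
    port: 443
    protocol: UDP
    targetPort: 443
```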

KnVerey avatar Aug 16 '22 23:08 KnVerey

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 14 '22 23:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Dec 14 '22 23:12 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jan 14 '23 00:01 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jan 14 '23 00:01 k8s-ci-robot

/assign

DiptoChakrabarty avatar Oct 18 '23 17:10 DiptoChakrabarty

Unassigning due to lack of activity

natasha41575 avatar Dec 05 '23 22:12 natasha41575