How to add a nodeSelector restriction to components?

sdurrheimer opened this issue 4 years ago · 9 comments

Hi there,

Before the recent refactor, I was able to add my nodeSelector restriction, as in the following example:

local s = 
  t.store + 
  t.store.withVolumeClaimTemplate + 
  t.store.withServiceMonitor + 
  commonConfig + {
    config+:: {
      name: 'thanos-store',
      replicas: 1,
    },
    statefulSet+: {
      spec+: {
        template+: {
          spec+: {
            nodeSelector+: {
              'k8s.scaleway.com/pool-name': 'kubernetes-infra',
            },
          },
        },
      },
    },
  };

With the new format, how can I achieve the same behavior?

I tried the following:

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
  statefulSet+: {
    spec+: {
      template+: {
        spec+: {
          nodeSelector+: {
            'k8s.scaleway.com/pool-name': 'kubernetes-infra',
          },
        },
      },
    },
  },
});

Thanks in advance,

sdurrheimer commented on Dec 02 '20

Okay, looks like the current way to do this is:

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
}) + {
  statefulSet+: {
    spec+: {
      template+: {
        spec+: {
          nodeSelector+: {
            'k8s.scaleway.com/pool-name': 'kubernetes-infra',
          },
        },
      },
    },
  },
};

but having a config field for that would be prettier.
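
Something like this is what I had in mind (purely hypothetical, no such field exists in the config today):

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
  // Hypothetical config field, not implemented in kube-thanos:
  nodeSelector: {
    'k8s.scaleway.com/pool-name': 'kubernetes-infra',
  },
});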

sdurrheimer commented on Dec 03 '20

I do agree with you; it would make resource selection, and even resource claims, DRY.

Blizter commented on Dec 10 '20

This is exactly the right way to do it. :+1: What you came up with is the way things are supposed to be extended by people on their own. Having the Kubernetes specifics in the config object isn't supposed to happen for the most part. Again, this is exactly what I would have had in mind for this too!
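
For example, you can keep the patch in a local and merge it into whichever component objects should land on that node pool (just a sketch of the idea):

local infraPool = {
  spec+: {
    template+: {
      spec+: {
        nodeSelector+: {
          'k8s.scaleway.com/pool-name': 'kubernetes-infra',
        },
      },
    },
  },
};

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
}) + {
  statefulSet+: infraPool,
};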

metalmatze commented on Jan 08 '21

And what about adding a nodeSelector to the store shards? How do we do that?

local strs = t.storeShards(commonConfig {
  shards: 3,
  replicas: 3,
  serviceMonitor: true,
  bucketCache: {
    type: 'memcached',
    config+: {
      // NOTICE: <MEMCACHED_SERVICE> is a placeholder used to generate examples.
      // List of memcached addresses, that will get resolved with the DNS service discovery provider.
      // For DNS service discovery reference https://thanos.io/service-discovery.md/#dns-service-discovery
      addresses: ['memcached.%s.svc.cluster.local:11211' % commonConfig.namespace],
    },
  },
  indexCache: {
    type: 'memcached',
    config+: {
      // NOTICE: <MEMCACHED_SERVICE> is a placeholder used to generate examples.
      // List of memcached addresses, that will get resolved with the DNS service discovery provider.
      // For DNS service discovery reference https://thanos.io/service-discovery.md/#dns-service-discovery
      addresses: ['memcached.%s.svc.cluster.local:11211' % commonConfig.namespace],
    },
  },
}) + {
  statefulSet+: {
    spec+: {
      template+: {
        spec+: {
          nodeSelector+: {
            'k8s.scaleway.com/pool-name': 'kubernetes-infra',
          },
        },
      },
    },
  },
};

This is not adding the nodeSelector.

betorvs commented on Feb 25 '21

@betorvs By looking here, we can see that the store shards need an additional shards {} enclosure.

You can try:

}) + {
  shards+: {
    statefulSet+: {
      spec+: {
        template+: {
          spec+: {
            nodeSelector+: {
              '<label>': '<label-value>',
            },
          },
        },
      },
    },
  },
};

And maybe even one more level, as it's looping over the number of shards.
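
Maybe something along these lines, assuming the generated shards are keyed shard0, shard1, and so on (untested):

}) + {
  shards+: {
    // One entry per generated shard.
    shard0+: {
      statefulSet+: {
        spec+: {
          template+: {
            spec+: {
              nodeSelector+: {
                '<label>': '<label-value>',
              },
            },
          },
        },
      },
    },
    // ...and the same for shard1, shard2, etc., up to the number of shards.
  },
};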

sdurrheimer commented on Feb 26 '21

@sdurrheimer thanks for the reply. But when I added it, I got:

RUNTIME ERROR: Field does not exist: service
        local.jsonnet:262:21-47 thunk from <object <anonymous>>
        local.jsonnet:259:52-59
        <std>:715:15-22 thunk <val> from <function <format_codes_arr>>
        <std>:722:27-30 thunk from <thunk <s> from <function <format_codes_arr>>>
        <std>:592:22-25 thunk from <function <format_code>>
        <std>:592:9-26  function <format_code>
        <std>:722:15-60 thunk <s> from <function <format_codes_arr>>
        <std>:727:24-25 thunk from <thunk <s_padded> from <function <format_codes_arr>>>
        <std>:480:30-33 thunk from <thunk from <function <pad_left>>>
        <std>:480:19-34 thunk from <function <pad_left>>
        ...
        vendor/github.com/thanos-io/kube-thanos/jsonnet/kube-thanos/kube-thanos-query.libsonnet:92:11-31
        Array element 11
        Field "args"
        Array element 0
        Field "containers"
        Field "spec"
        Field "template"
        Field "spec"
        Field "thanos-query-deployment"
        During manifestation

I found another way to add it. I added a new variable:

local test = {
  spec+: {
    template+: {
      spec+: {
        nodeSelector+: {
          'k8s.scaleway.com/pool-name': 'kubernetes-infra',
        },
      },
    },
  },
};

Then I changed the shards block to:

{
  ['store-' + shard + '-' + name]: strs.shards[shard][name] + test
  for shard in std.objectFields(strs.shards) 
  for name in std.objectFields(strs.shards[shard]) 
  if strs.shards[shard][name] != null
} +

It works. I'm not sure if this is the best solution; I'm pretty new to jsonnet.

betorvs commented on Feb 26 '21

No, it didn't work after all. It adds the nodeSelector inside the Services too.
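
Maybe the patch needs to be limited to the StatefulSets by checking the manifest kind, something like this (untested):

{
  ['store-' + shard + '-' + name]:
    strs.shards[shard][name] +
    // Only merge the nodeSelector patch into the StatefulSets, not the Services.
    (if strs.shards[shard][name].kind == 'StatefulSet' then test else {})
  for shard in std.objectFields(strs.shards)
  for name in std.objectFields(strs.shards[shard])
  if strs.shards[shard][name] != null
} +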

betorvs commented on Mar 02 '21

Curious about this as well. I'm unable to get a nodeSelector working with the sharded Thanos store. I'm not super familiar with jsonnet either. Following the example here https://github.com/thanos-io/kube-thanos/issues/181#issuecomment-786558196 gives me the error:

RUNTIME ERROR: field does not exist: service
        thanos.jsonnet:134:21-47        thunk <service>
        thanos.jsonnet:133:52-59        thunk <array_element>
        std.jsonnet:708:15-22   thunk <val>
        std.jsonnet:715:27-30   thunk <val>
        std.jsonnet:585:22-25   thunk <a>
        std.jsonnet:36:17
        std.jsonnet:36:8-19     thunk <a>
        std.jsonnet:36:8-31     function <anonymous>
        std.jsonnet:36:8-31     function <anonymous>
        std.jsonnet:585:9-26    function <format_code>
        ...
        std.jsonnet:237:7-23    function <anonymous>
        vendor/kube-thanos/kube-thanos-query.libsonnet:110:11-31        thunk <array_element>
        vendor/kube-thanos/kube-thanos-query.libsonnet:(100:9)-(142:10) object <c>
        vendor/kube-thanos/kube-thanos-query.libsonnet:188:26   thunk <array_element>
        vendor/kube-thanos/kube-thanos-query.libsonnet:188:25-28        object <anonymous>
        thanos.jsonnet:(139:18)-(145:8) object <anonymous>
        thanos.jsonnet:(138:12)-(146:6) object <anonymous>
        thanos.jsonnet:(137:16)-(147:4) object <anonymous>
        thanos.jsonnet:208:1-70 object <anonymous>
        During manifestation

dschaaff commented on Aug 24 '21

I figured this out, FYI:

local storeShardCount = 9;
local strs = t.storeShards(commonConfig.config {
  replicas: 2,
  shards: storeShardCount,
  serviceMonitor: true,
  resources: {
    requests: {
      cpu: '6000m',
      memory: '18G',
    },
    limits: {
      memory: '20G',
    },
  },
  bucketCache: {
    type: 'memcached',
    config+: {
      addresses: ['dnssrv+_memcache._tcp.thanos-store-memcached.monitoring.svc.cluster.local'],
      max_item_size: '16MiB',
      max_async_buffer_size: 100000,
      max_get_multi_batch_size: 1000,
      max_idle_connections: 750,
      timeout: '750ms',
    },
  },
  indexCache: {
    type: 'memcached',
    config+: {
      addresses: ['dnssrv+_memcache._tcp.thanos-store-memcached.monitoring.svc.cluster.local'],
      max_item_size: '16MiB',
      max_async_buffer_size: 100000,
      max_get_multi_batch_size: 1000,
      max_idle_connections: 750,
      timeout: '750ms',
    },
  },
}) + {
  shards+: {
    ['shard%d' % i]+: {
      statefulSet+: {
        spec+: {
          template+: {
            spec+: {
              nodeSelector: {
                'kubernetes.io/arch': 'amd64',
              },
            },
          },
        },
      },
    },
    for i in std.range(0, storeShardCount - 1)
  },
};
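
The shard manifests then get flattened into the final output the same way as earlier in this thread, roughly:

{
  ['store-' + shard + '-' + name]: strs.shards[shard][name]
  for shard in std.objectFields(strs.shards)
  for name in std.objectFields(strs.shards[shard])
  if strs.shards[shard][name] != null
}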

dschaaff commented on Aug 31 '22