YevhenLodovyi
I have the same situation: I updated to v0.27.2 and now I have fewer messages, but I still get them from time to time. Also, it is pull requests that were...
> Still think this behavior is a bug.

+1
I had a simple case: I wanted to import an `aws_cloudwatch_log_group` resource in my lambda module, but:

```
Import blocks are only allowed in the root module.
```

I call this...
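For anyone hitting the same error, here is a minimal sketch of the usual workaround (hypothetical names and paths, not the author's configuration): the `import` block is declared in the root module and points at the resource through its module address, while the resource itself stays declared inside the child module.

```hcl
# Root module (e.g. main.tf) — names and paths below are hypothetical.
# Terraform 1.5+ only allows import blocks here, but the `to` address can
# reach into a child module.
import {
  to = module.lambda.aws_cloudwatch_log_group.this
  id = "/aws/lambda/my-function" # existing log group name (hypothetical)
}

module "lambda" {
  source = "./modules/lambda" # hypothetical module path
  # ... module inputs ...
}
```

Running `terraform plan` should then show the log group as an import rather than a create.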
Here is the log (Karpenter recreates the node since it is unused):

```
{"level":"INFO","time":"2024-04-15T15:22:08.722Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-vqs8n","provider-id":"aws:///eu-west-1b/i-0f4c920baa69adb22","node":"ip-10-50-60-191.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919632Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:22:43.788Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-vhb92","provider-id":"aws:///eu-west-1b/i-0aedf7e199d02b99c","node":"ip-10-50-42-90.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919632Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:23:29.650Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-zl275","provider-id":"aws:///eu-west-1a/i-0691ddc4f2b328ee9","node":"ip-10-50-24-213.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919624Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:24:07.022Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-zg2mc","provider-id":"aws:///eu-west-1b/i-0f14add06a95b358e","node":"ip-10-50-62-117.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919632Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:24:49.475Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-524jt","provider-id":"aws:///eu-west-1b/i-03c18b6fac9ec4f24","node":"ip-10-50-55-252.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919632Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:25:24.434Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-t6wmw","provider-id":"aws:///eu-west-1c/i-05afe3059876ffbae","node":"ip-10-50-78-36.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919632Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:26:03.199Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-b726z","provider-id":"aws:///eu-west-1b/i-0afd37e0ea042f8bc","node":"ip-10-50-55-186.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919632Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:26:45.368Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-fqqdl","provider-id":"aws:///eu-west-1b/i-09f93e7d7cc430b0c","node":"ip-10-50-53-147.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919632Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:27:18.853Z","logger":"controller.nodeclaim.lifecycle","message":"initialized nodeclaim","commit":"17dd42b","nodeclaim":"ondemand-default-sflpq","provider-id":"aws:///eu-west-1b/i-088030c9309fb4fd1","node":"ip-10-50-57-122.eu-west-1.compute.internal","allocatable":{"cpu":"3920m","ephemeral-storage":"47233297124","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"6919624Ki","pods":"110","vpc.amazonaws.com/pod-eni":"18"}}
{"level":"INFO","time":"2024-04-15T15:27:58.014Z","logger":"controller.nodeclaim.lifecycle","message":"initialized...
```
@jigisha620 sure, here you are. It is quite a strange issue, to be honest; I have ~1k pods and only one pod configuration is affected... Also, I can reproduce the issue...
```
❯ k -n karpenter describe pod karpenter-5f4b874cc8-25wjd | grep VM_MEMORY_OVERHEAD_PERCENT
VM_MEMORY_OVERHEAD_PERCENT: 0.075
```
I use the default helm chart and do not have any customisation:

```yaml
karpenter:
  serviceAccount:
    create: false
    name: karpenter
  settings:
    featureGates:
      spotToSpotConsolidation: true
```
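As a side note on where that environment variable comes from, here is a sketch (not the author's setup) of pinning it explicitly through the chart, assuming the chart is deployed with the Terraform helm provider and exposes the knob as `settings.vmMemoryOverheadPercent`, which recent chart versions render into the controller pod as `VM_MEMORY_OVERHEAD_PERCENT` (0.075 being the default seen above). Version and namespace below are hypothetical.

```hcl
# Sketch only: Karpenter chart via the Terraform helm provider, with the
# memory-overhead setting made explicit instead of relying on the default.
resource "helm_release" "karpenter" {
  name       = "karpenter"
  namespace  = "karpenter"
  repository = "oci://public.ecr.aws/karpenter"
  chart      = "karpenter"
  version    = "0.36.0" # hypothetical, pin to whatever you actually run

  values = [
    yamlencode({
      serviceAccount = {
        create = false
        name   = "karpenter"
      }
      settings = {
        featureGates = {
          spotToSpotConsolidation = true
        }
        # Rendered as VM_MEMORY_OVERHEAD_PERCENT in the controller pod.
        vmMemoryOverheadPercent = 0.075
      }
    })
  ]
}
```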
> Were you able to resolve the problem that you were facing?

I changed the resources configuration for my app deployment to work around the issue. It is not a fix,...
Yes, as I mentioned, it is not easy to reproduce, and I have no idea why. I managed to reproduce the issue in two EKS clusters 3 times (I made ~10 attempts)...
@cyriltovena Hi, sorry for the trouble. I am using fluent-bit to send logs to Loki 3.0.0. I decided to try a non-released version of Loki (due to fixes in the loki-bloom feature), but noticed...