scaleway-k8s-node-coffee
[Reserved IP] WARNING no available reserved IPs for node
Hello,
After deploying the controller to a Kubernetes Kapsule cluster at Scaleway with
kubectl create -f https://raw.githubusercontent.com/Sh4d1/scaleway-k8s-node-coffee/main/deploy.yaml
I set the secret:
apiVersion: v1
kind: Secret
metadata:
  name: scaleway-k8s-node-coffee
  namespace: scaleway-k8s-node-coffee
type: Opaque
stringData:
  SCW_ACCESS_KEY: XXXXXXXXXXX
  SCW_SECRET_KEY: XXXXXXXXXXX
  SCW_DEFAULT_ZONE: nl-ams-1
  SCW_DEFAULT_REGION: nl-ams
and set the configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: scaleway-k8s-node-coffee
  namespace: scaleway-k8s-node-coffee
data:
  REVERSE_IP_DOMAIN: ""
  DATABASE_IDS: ""
  REDIS_IDS: ""
  RESERVED_IPS_POOL: "51.15.15.32"
  SECURITY_GROUP_IDS: ""
  RETRIES_NUMBER: "30"
I then deployed a generic Deployment pointing to a node pool (with autoscaling on) and got the warning below after the first node came up:
I0915 13:46:52.944681 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-c7c12842e8
W0915 13:46:53.241201 1 reserved_ip.go:60] no available reserved IPs for node scw-t2-k8s-dev-pool-test-reservedip-c7c12842e8
Checking the related code:
if err != nil {
	klog.Errorf("could not get a free IP for node %s: %v", nodeName, err)
	return err
}
if ip == nil {
	klog.Warningf("no available reserved IPs for node %s", nodeName)
	return nil
}
It seems that both ip and err are nil; they come from
ip, err := c.getFreeIP()
so getFreeIP() must be returning nil, nil:
func (c *NodeController) getFreeIP() (*instance.IP, error) {
	instanceAPI := instance.NewAPI(c.scwClient)
	ipsList, err := instanceAPI.ListIPs(&instance.ListIPsRequest{}, scw.WithAllPages())
	if err != nil {
		return nil, err
	}
	for _, ip := range ipsList.IPs {
		if ip.Server == nil && stringInSlice(ip.Address.String(), c.reservedIPs) {
			return ip, nil
		}
	}
	return nil, nil
}
Therefore the following condition always evaluates to false:
ip.Server == nil && stringInSlice(ip.Address.String(), c.reservedIPs)
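For illustration, here is a self-contained sketch of that filtering logic with mock data (the IP type and stringInSlice helper are simplified stand-ins for the real SDK types, not the controller's actual code). It shows that the loop returns nil when no listed IP is both unattached and present in the reserved pool — which is exactly what happens when ListIPs is scoped to a zone other than the one holding the reserved IPs:

```go
package main

import "fmt"

// Simplified stand-in for the SDK's instance.IP type.
type IP struct {
	Address string
	Server  *string // nil when the IP is not attached to a server
}

// stringInSlice mirrors the helper used by the controller.
func stringInSlice(s string, list []string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// getFreeIP mimics the controller's logic: return the first IP that is
// unattached AND listed in the reserved pool, or nil if none matches.
func getFreeIP(listed []IP, reservedPool []string) *IP {
	for i, ip := range listed {
		if ip.Server == nil && stringInSlice(ip.Address, reservedPool) {
			return &listed[i]
		}
	}
	return nil
}

func main() {
	pool := []string{"51.15.15.32"}

	// ListIPs scoped to the wrong zone returns IPs that are not in the
	// pool, so nothing matches and the "no available reserved IPs"
	// warning fires.
	wrongZone := []IP{{Address: "10.0.0.1", Server: nil}}
	fmt.Println(getFreeIP(wrongZone, pool) == nil) // true

	// With the right zone the reserved, unattached IP is found.
	rightZone := []IP{{Address: "51.15.15.32", Server: nil}}
	fmt.Println(getFreeIP(rightZone, pool).Address) // 51.15.15.32
}
```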
Could you please help me address this issue?
I really appreciate any help you can provide.
Best regards Michele
Hello :wave: Did you create a reserved IP in the Instance tab (and in the same zone)? Or in the Load Balancer tab?
Many thanks for your quick feedback! 👍 If you mean a reserved IP using flexible IPs: yes, I previously created two flexible IPs (in the same zone) in the Instance tab and then set them in the ConfigMap object. Now I have also tested the same thing in the Load Balancer tab, but unfortunately I got the same warning. :(
I see that node scw-t2-k8s-dev-pool-test-reservedip-c7c12842e8 is in the fr-par-1 zone, but your configuration looks to be nl-ams-1.
Yes, that's correct: the autoscaled nodes are in the fr-par-1 zone, but the actual ConfigMap configuration is
SCW_DEFAULT_ZONE: fr-par-1
SCW_DEFAULT_REGION: fr-par
I reported a different one just to avoid writing the real configuration here ; ). You got me 😆
Did I miss something?
To me it looks like the IP is not listed by the ListIPs call in the specified zone. Could you share the IP you are using that doesn't work?
Yep! I've reserved four IPs from Project Dashboard -> Compute -> Instances -> Flexible IPs
and updated the configmap.yaml in the namespace scaleway-k8s-node-coffee this way:
apiVersion: v1
kind: ConfigMap
metadata:
  name: scaleway-k8s-node-coffee
  namespace: scaleway-k8s-node-coffee
data:
  REVERSE_IP_DOMAIN: ""
  DATABASE_IDS: ""
  REDIS_IDS: ""
  RESERVED_IPS_POOL: "x.x.133.98,x.x.215.26,x.x.231.113,x.x.129.173"
  SECURITY_GROUP_IDS: ""
  RETRIES_NUMBER: "30"
After that I deployed a generic Deployment with four replicas pointing to a node pool (with autoscaling on),
but only node scw-t2-k8s-dev-pool-test-reservedip-4d5af0b438 got a reserved IP:
I0922 17:11:50.332423 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-0b91af7910
I0922 17:11:50.332487 1 reserved_ip.go:28] node scw-t2-k8s-dev-pool-test-reservedip-0b91af7910 was deleted, ignoring
I0922 17:11:52.199645 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-0da1024966
I0922 17:11:52.199701 1 reserved_ip.go:28] node scw-t2-k8s-dev-pool-test-reservedip-0da1024966 was deleted, ignoring
I0922 17:11:54.059480 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-5c98ebd5ae
I0922 17:11:54.059546 1 reserved_ip.go:28] node scw-t2-k8s-dev-pool-test-reservedip-5c98ebd5ae was deleted, ignoring
I0922 17:11:56.055618 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-f366f833a3
I0922 17:11:56.055666 1 reserved_ip.go:28] node scw-t2-k8s-dev-pool-test-reservedip-f366f833a3 was deleted, ignoring
I0922 17:14:03.477149 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-4d5af0b438
I0922 17:14:06.621712 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-4d5af0b438
W0922 17:14:06.839756 1 reserved_ip.go:44] node scw-t2-k8s-dev-pool-test-reservedip-4d5af0b438 already have a public IP
I0922 17:14:36.540580 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-f9fee9dded
W0922 17:14:36.806522 1 reserved_ip.go:60] no available reserved IPs for node scw-t2-k8s-dev-pool-test-reservedip-f9fee9dded
I0922 17:15:27.514276 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-81d2696f13
W0922 17:15:27.839708 1 reserved_ip.go:60] no available reserved IPs for node scw-t2-k8s-dev-pool-test-reservedip-81d2696f13
I0922 17:15:55.143447 1 reserved_ip.go:18] adding a reserved IP on node scw-t2-k8s-dev-pool-test-reservedip-ae012ae8b7
W0922 17:15:55.568523 1 reserved_ip.go:60] no available reserved IPs for node scw-t2-k8s-dev-pool-test-reservedip-ae012ae8b7
Do you see them with the CLI scw instance ip list zone=<your-zone>? And are they free, i.e. not attached to a server?
Yes, before deploying the pods all the IPs are free; after the deployment just one IP has been attached to the new server:
curl -H 'X-Auth-Token: xxxxxxxxxxxxxxx' -H 'Content-Type: application/json' https://api.scaleway.com/instance/v1/zones/fr-par-1/ips | jq |egrep 'add|serv'
"address": "x.x.x.39",
"server": null,
"address": "x.x.x.26",
"server": null,
"address": "x.x.x.98",
"server": {
"name": "scw-t2-k8s-dev-pool-test-reservedip-ca4f97eae3"
"address": "x.x.x.102",
"server": null,
It's strange because it's always the same IP: x.x.x.98, is this IP special for some reason?
The reserved IPs were in the same zone as the nodes. Do you have any suggestions?
It's weird. Could you modify the code to add a log when listing the IPs, and run it to see the output of the ListIPs call?
Thanks @Sh4d1, could you please point me to the line of code to modify? Could it be the following: https://github.com/Sh4d1/scaleway-k8s-node-coffee/blob/main/pkg/controllers/utils.go#L30 ?
Should we print ipsList ?
func (c *NodeController) getFreeIP() (*instance.IP, error) {
	instanceAPI := instance.NewAPI(c.scwClient)
	ipsList, err := instanceAPI.ListIPs(&instance.ListIPsRequest{}, scw.WithAllPages())
Yep, printing ipsList will help! You can loop over the entries and print each object with the %+v formatting verb.