
k8s_scale: possibility to scale in parallel

Open · xSuiteHaroonButt opened this issue 3 years ago · 3 comments

SUMMARY

Scaling over multiple items could be done in parallel.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s_scale

ADDITIONAL INFORMATION

This feature would help when there is a large number of Deployments/StatefulSets (or, in this case, k8s_info results) that can or need to be scaled up in parallel because of dependencies/health checks, in conjunction with wait: yes to confirm a properly running state. Currently scaling is done serially by default, which means we need to be aware of dependencies and loop in a particular order to ensure health checks are met and running states can be confirmed. This doesn't work well in automated cases, where we gather the resource list automatically with k8s_info.

Currently the only option to scale up an application with multiple dependencies is the wait: no flag, which doesn't allow for a check of ready_replicas.

This could also be resolved if support for async were added.


    - name: get demo deployments
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: demo
        label_selectors:
          - app=demo
      register: demo_deployments

    - name: scale demo application down
      kubernetes.core.k8s_scale:
        parallel: yes        # proposed option
        kind: Deployment
        namespace: demo
        wait: yes
        replicas: 0
        label_selectors:
          - app=demo

    - name: scale demo application up
      kubernetes.core.k8s_scale:
        parallel: yes        # proposed option
        kind: "{{ item.kind }}"
        namespace: demo
        wait: yes
        replicas: "{{ item.spec.replicas }}"
        name: "{{ item.metadata.name }}"
      loop: "{{ demo_deployments.resources }}"

These are the timing results with the current (serial) behavior for a demo application with 38 deployments, scaled from 1 to 0 and back from 0 to 1:

scale demo down  --------------- 276.16s
scale demo up ------------------- 95.74s
Gathering Facts ------------------ 2.40s
get demo deployments ------------- 2.23s

Scaling the deployments by hand on the cluster from 1 to 0, waiting for everything to reach 0, and going back to 1 actually takes only about 16 seconds:

$ time kubectl scale deployment -n demo -l app=demo --replicas=0 ; date
real    0m6.015s
user    0m0.219s
sys     0m0.056s
Tue 22 Mar 2022 11:22:00 AM UTC

$ time kubectl scale deployment -n demo -l app=demo --replicas=1 ; date
real    0m6.072s
user    0m0.247s
sys     0m0.073s
Tue 22 Mar 2022 11:22:16 AM UTC

Scaling with the command module instead:

check if demo is scaled down ------------------------------------ 25.59s
scaling up using command --------------------------------------- 20.58s
scaling down using command ----------------------------------- 6.54s
Gathering Facts --------------------------------------------------- 3.40s
get current replicas demo ---------------------------------------- 2.02s
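
Since kubectl scale returns immediately, the scale-down still has to be confirmed separately. The check could look roughly like this (untested sketch; it assumes status.readyReplicas is removed from the Deployment status once a Deployment reaches 0 replicas):

    - name: check if demo is scaled down
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: demo
        label_selectors:
          - app=demo
      register: scaled_state
      until: >-
        scaled_state.resources
        | selectattr('status.readyReplicas', 'defined')
        | list | length == 0
      retries: 30
      delay: 2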

Thanks for your consideration.

xSuiteHaroonButt · Mar 22 '22 11:03

Hi @haroonb,

we have exactly the same problem: scaling down our application is very slow due to a similarly large number of Deployments/StatefulSets. Did you stick with k8s_scale, or did you resort to kubectl scale? Or did you even find a better workaround in the meantime?

macster84 · Jul 07 '23 09:07

Hello @macster84, I don't use the scale module to scale my application down. I use this combination until the module can scale in parallel:

    - name: get current replicas demo
      kubernetes.core.k8s_info:
        kind: Deployment
        namespace: demo
        label_selectors:
          - app=demo
      register: demo_deployments

    - name: scaling demo down using command
      ansible.builtin.command: kubectl scale deployment -n demo --all --replicas=0

    - name: scaling up using command
      no_log: yes   # suppress the per-item output in the log
      ansible.builtin.command: >-
        kubectl scale {{ item.kind }} -n demo {{ item.metadata.name }}
        --replicas={{ item.spec.replicas }}
      loop: "{{ demo_deployments.resources }}"

xSuiteHaroonButt · Jul 10 '23 06:07

Thanks a lot for sharing @haroonb . Very helpful indeed.

macster84 · Jul 19 '23 07:07