service-capacity-modeling
Right now most models are split into two parts: 1. Try to determine the resources you need for a desire, using math on the desire (CPU, RAM, Disk, Network, etc...
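For illustration, a minimal sketch of that first step, doing "math on the desire" to produce rough resource needs (the `Desire` fields, function name, and sizing constants here are hypothetical, not the library's actual API):

```python
from dataclasses import dataclass

@dataclass
class Desire:
    # Illustrative fields only; the real project describes desires in far more detail.
    reads_per_second: float
    writes_per_second: float
    item_size_bytes: float

def estimate_resources(desire: Desire) -> dict:
    """Toy 'math on the desire': derive rough CPU and network needs."""
    total_ops = desire.reads_per_second + desire.writes_per_second
    return {
        "cpu_cores": total_ops / 10_000,  # assume ~10k ops/sec per core
        "network_bytes_per_sec": total_ops * desire.item_size_bytes,
    }
```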
Just adding a unit test for the current SLOs we are offering for CRDB and the necessary working set.
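A self-contained sketch of what such a test could assert (the `estimate_working_set` helper and its latency-ratio heuristic are stand-ins invented for illustration, not the repo's verified API):

```python
def estimate_working_set(read_slo_ms: float, drive_latency_ms: float) -> float:
    """Hypothetical stand-in: the tighter the read SLO relative to the drive's
    latency, the larger the fraction of data that must live on fast storage."""
    return min(1.0, drive_latency_ms / read_slo_ms)

def test_crdb_slo_working_set():
    # e.g. a 10 ms read SLO against a 2 ms drive -> 20% of data must stay hot
    assert estimate_working_set(read_slo_ms=10.0, drive_latency_ms=2.0) == 0.2
```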
In our current logic (https://github.com/Netflix-Skunkworks/service-capacity-modeling/blob/main/service_capacity_modeling/models/org/netflix/key_value.py#L85), we scale the C* cluster by a factor of `1 - estimated_kv_cache_hit_rate`, where `estimated_kv_cache_hit_rate` is configurable (default 0.8). Per a previous convo with @jolynch and...
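A minimal sketch of that scaling factor (the function name and `base_cluster_size` parameter are illustrative; the real logic lives at the linked line in key_value.py):

```python
def scale_for_cache_hit_rate(base_cluster_size: float,
                             estimated_kv_cache_hit_rate: float = 0.8) -> float:
    """Size the backing C* cluster for the traffic that misses the KV cache.

    With the default hit rate of 0.8, only 20% of reads reach Cassandra,
    so the cluster is scaled to 1 - 0.8 = 0.2 of the uncached size.
    """
    return base_cluster_size * (1 - estimated_kv_cache_hit_rate)
```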
I'm working on summarizing the cost, CPU, and disk (local & attached) for both regional and zonal clusters. I want there to be more consistency in the way repetition is represented...
Right now we just make a recommendation like "12 m5d.2xlarge", but for software that can autoscale (stateless Java apps, Elasticsearch, etc.) it would be nice if we could return...
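One possible shape for such a result, as a sketch only (a hypothetical type, not the project's existing output format): return a band the autoscaler can move within rather than a single fixed count.

```python
from dataclasses import dataclass

@dataclass
class AutoscalingRecommendation:
    # Instead of a single "12 m5d.2xlarge", give the autoscaler bounds.
    instance_type: str
    min_count: int      # floor for safety / quorum
    desired_count: int  # steady-state sizing (what we recommend today)
    max_count: int      # ceiling for cost control

rec = AutoscalingRecommendation("m5d.2xlarge", min_count=6,
                                desired_count=12, max_count=24)
print(f"{rec.desired_count} x {rec.instance_type} "
      f"(autoscale {rec.min_count}-{rec.max_count})")
```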
Greetings, I was asked to add a new model to your capacity planner. Are there general directions or documentation on what it takes to add a new model? I see...
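In the absence of such docs, here is a purely illustrative sketch of the shape a new model might take, tying together the two-part split from the description above (the class name, method name, and sizing rules are assumptions, not the project's verified plugin API):

```python
class MyServiceCapacityModel:
    """Hypothetical model: user desires in, hardware recommendation out."""

    @staticmethod
    def capacity_plan(desires: dict) -> dict:
        # Step 1: math on the desire -> resource requirements.
        qps = desires.get("reads_per_second", 0) + desires.get("writes_per_second", 0)
        cpu_cores = max(2, int(qps / 5_000))  # toy sizing rule
        # Step 2 (assumed): map requirements onto concrete hardware.
        return {"instance_family": "m5d", "count": max(1, cpu_cores // 8)}
```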