containers-roadmap
[Fargate] [request]: Fargate sysctls support
Tell us about your request
Add systemControls support for Fargate.
Which service(s) is this request for? ECS, Fargate
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? I think sysctls can be useful in general. They cannot be applied within the OS itself due to lack of permissions, and Fargate doesn't allow privileged mode nor adding Linux capabilities.
I'd like to tune the net.ipv4.tcp_keepalive_time sysctl for my Fargate containers. The reason is a bit unusual, but here goes:
- I want to set the TCP keepalive time to 290 seconds. The Linux default is 7200 seconds.
- I'm behind a Cisco Meraki appliance that has a fixed 300-second NAT timeout for TCP connections (confirmed with Cisco support).
- The Fargate task runs HAProxy, which can only disable/enable TCP keepalives; it cannot tune the parameters. (Making a feature request shortly.)
- I'd like to have the server do the frequent TCP keepalives, instead of pushing that requirement onto every client.
Are you currently working around this issue? Two ways: Filing a request with HAProxy; and adding client-initiated frequent TCP keepalives where possible, which sadly doesn't cover all my cases.
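For context, on ECS with the EC2 launch type this kind of tuning is already expressible via systemControls in the container definition. A minimal sketch of the shape being requested for Fargate, built as a plain dict (the container name is hypothetical; the keepalive value is the 290 seconds from the request above):

```python
import json

# A containerDefinitions entry using systemControls, the namespace/value
# pairs accepted by the ECS task-definition API on the EC2 launch type.
container_definition = {
    "name": "haproxy",  # hypothetical container name
    "systemControls": [
        {"namespace": "net.ipv4.tcp_keepalive_time", "value": "290"},
    ],
}

print(json.dumps(container_definition, indent=2))
```

This is just the task-definition shape; the point of the request is that Fargate currently rejects or ignores it.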
Can we disable TCP keepalives in fargate tasks?
> Can we disable TCP keepalives in fargate tasks?
Probably not. Note that TCP sockets don't have keep-alive enabled by default. The application has to make an explicit call to setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, ...) to enable it (followed by additional setsockopt calls to modify the TCP keep-alive parameters).
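Per socket, that setsockopt sequence looks like the following Python sketch. TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific and override the kernel-wide net.ipv4.tcp_keepalive_* defaults for this one socket; the values are illustrative:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Enable keep-alive on this socket (it is off by default).
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Override the net.ipv4.tcp_keepalive_* defaults for this socket only.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 290)  # idle secs before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # secs between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before drop

print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE))
```

This only helps when you control the application's socket code; for off-the-shelf software like HAProxy the kernel-wide sysctl is the only lever, which is the point of this request.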
I'm curious what the use case is for explicitly disabling TCP keep-alives - what is the scenario?
I would also love to see Fargate support sysctl settings, albeit for a different use case.
+1 ❤️
Yes, this is a huge blocker for me, as I need to set net.core.somaxconn to a higher number along with a few other sysctl settings. ECS on EC2 allows you to do this, but Fargate doesn't...
++ 👍
Another vote from me. Capping connections at 128 means I likely have to abandon Fargate and go back to ECS on EC2, which does let us adjust sysctl settings.
We would like to set kernel.perf_event_paranoid to be able to collect CPU traces using perf.
Our use case relates to https://github.com/SonarSource/docker-sonarqube/issues/282 and the ability to set vm.max_map_count (the issue provides a workaround).
+1 We would really like to see this enabled in Fargate too!
We also need this for fargate tasks behind an AWS NAT gateway for requests that take longer than 5 minutes: https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-troubleshooting.html#nat-gateway-troubleshooting-timeout
+1 We would love to see this as we use Fargate at TableCheck Japan. Our specific requirement is to be able to set fs.inotify.max_user_watches to a value higher than 8192.
Would be very useful to set net.ipv4.ip_forward=1 and create a transparent proxy with Fargate!
Our asks are:
- net.ipv4.ip_local_port_range
- net.ipv4.ip_local_reserved_ports
- /proc/sys/fs/nr_open
- /proc/sys/fs/file-max
Our use cases are:
- /proc/sys/vm/overcommit_memory
- vm.max_map_count
- net.core.rmem_max
- net.core.wmem_max
- fs.file-max
Lack of support for the features above is a hard blocker for our team to use Fargate.
We need to set the net.core.somaxconn value higher than 128. Still not possible, right?
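You can at least inspect the value your tasks are actually getting from inside a running container; a quick read-only sketch using Linux procfs:

```python
# Read the kernel's effective listen-backlog cap. Inside a Fargate task
# this shows the default (128 on older kernels, 4096 since Linux 5.4),
# since the value can't be raised from within the task.
with open("/proc/sys/net/core/somaxconn") as f:
    somaxconn = int(f.read().strip())

print(somaxconn)
```

Note that listen(backlog) calls are silently clamped to this value, which is why application-level backlog settings don't help here.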
Configuring Keepalives is an RDS best practice for fast failover: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.BestPractices.html
We use Swoole, which recommends adjusting the following sysctls:
- net.ipv4.tcp_mem
- net.ipv4.tcp_wmem
- net.ipv4.tcp_rmem
- net.core.wmem_default
- net.core.rmem_default
- net.core.rmem_max
- net.core.wmem_max
- net.ipv4.tcp_syncookies
- net.ipv4.tcp_max_syn_backlog
- net.ipv4.tcp_synack_retries
- net.ipv4.tcp_syn_retries
- net.ipv4.tcp_fin_timeout
- net.ipv4.tcp_keepalive_time
- net.ipv4.tcp_tw_reuse
- net.ipv4.tcp_tw_recycle
- net.ipv4.ip_local_port_range
- net.ipv4.tcp_max_tw_buckets
- net.ipv4.route.max_size
I would like to set the ephemeral port range in Fargate
We have to change the TCP keepalive config: net.ipv4.tcp_keepalive_time, net.ipv4.tcp_keepalive_intvl, net.ipv4.tcp_keepalive_probes.
vm.max_map_count for Elasticsearch (#1452).
We need to be able to increase the following for logstash UDP inputs:
- net.core.rmem_default
- net.core.rmem_max
I really hope this feature is coming soon...
We need the following values to tune Shadowsocks.
net.ipv4.tcp_congestion_control
net.core.rmem_max
net.core.wmem_max
net.core.netdev_max_backlog
net.ipv4.tcp_fastopen
We need to set vm.max_map_count for Ruby Sidekiq running with 24 GB RAM in Fargate, to avoid the Ruby garbage collector crashing with a [BUG] while allocating or freeing heap pages.
We also have to change the TCP keepalive config: net.ipv4.tcp_keepalive_time, net.ipv4.tcp_keepalive_intvl, net.ipv4.tcp_keepalive_probes.
When can we expect this feature to be available?
Is anyone here?
Hi everyone, apologies for the quietness on this issue. This is something we're actively working on (we've updated the status to "Coming Soon" rather than "Proposed"). In the first release we'll make it possible to make changes to the following sysctl settings: net.*, fs.mqueue.*, kernel.msgmax, kernel.msgmnb, kernel.msgmni, kernel.sem, kernel.shmall, kernel.shmmax, kernel.shmmni, and kernel.shm_rmid_forced.
Hello,
Any approximate date for this?
Thank you, Javier Torres
@javier-torres - can't share dates here, but you should expect it to launch within the next few quarters.