OSError: [Errno 24] Too many open files
- ModBus TCP proxy version: 0.6.8
- Python version: 3.10.8
- Operating System: FedoraServer 36
Description
Here is my configuration:
# cat /etc/modbus-proxy.yaml
devices:
- modbus:
    url: 192.168.160.202:502
    timeout: 10
    connection_time: 1.0
  listen:
    bind: 0:5502
- modbus:
    url: 192.168.160.57:502
    timeout: 10
    connection_time: 0.0
  listen:
    bind: 0:5503
Device A sits behind 192.168.160.202:502; it is powered on and responds to requests. 192.168.160.57:502 is connected to https://github.com/arendst/Tasmota, which acts as a Modbus TCP to RTU converter. Two devices, B and C, are connected to the Tasmota serial port; B is powered on while C is intentionally powered off.
There are three Modbus-to-MQTT clients, M1, M2 and M3, which poll devices A, B and C in a loop every 5 seconds. The client that polls device C is expected to see a permanent connection error, but it is not expected to make the proxy leak resources (sockets).
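To make the failure mode concrete, here is a hedged sketch of one polling cycle such a client performs (this is not the real M1/M2/M3 code; the request payload and the `poll_once` helper are placeholders). The point is that the client always closes its own end of the connection, whether the poll succeeds or fails, so a FIN reaches the proxy on every cycle:

```python
import socket

def poll_once(host, port, request, timeout=10):
    """One polling cycle: connect, send, read, and ALWAYS close our end."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(request)
            return s.recv(260)   # a Modbus/TCP ADU is at most 260 bytes
    except OSError:
        return None              # device unreachable: expected for device C
```

In the reported setup this function would be called every 5 seconds per device; each call that targets device C ends in `OSError`, and the `with` block guarantees the client socket is closed either way.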
What I Did
# systemctl status modbus-proxy.service
● modbus-proxy.service - ModBus TCP proxy
Loaded: loaded (/usr/lib/systemd/system/modbus-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-11-27 00:56:13 CET; 2 weeks 4 days ago
Docs: https://github.com/tiagocoutinho/modbus-proxy
Main PID: 797 (modbus-proxy)
Tasks: 3 (limit: 9114)
Memory: 33.8M
CPU: 1w 13h 30min 7.744s
CGroup: /system.slice/modbus-proxy.service
└─ 797 /usr/bin/python3 -s /usr/bin/modbus-proxy --config-file /etc/modbus-proxy.yaml
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: File "/usr/lib64/python3.10/socket.py", line 293, in accept
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: OSError: [Errno 24] Too many open files
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: 2022-12-15 11:09:20,161 ERROR asyncio: socket.accept() out of system resource
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: socket: <asyncio.TransportSocket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('0.0.0.0', 5502)>
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: Traceback (most recent call last):
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: File "/usr/lib64/python3.10/asyncio/selector_events.py", line 159, in _accept_connection
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: File "/usr/lib64/python3.10/socket.py", line 293, in accept
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: OSError: [Errno 24] Too many open files
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: 2022-12-15 11:09:20,162 ERROR asyncio: socket.accept() out of system resource
Dec 15 11:09:20 localhost.localdomain modbus-proxy[797]: socket: <asyncio.TransportSocket fd=6, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('0.0.0.0', 5502)>
# lsof -n -i -P | grep 797 | grep CLOSE_WAIT | head -n 8
modbus-pr 797 root 8u IPv4 5846232 0t0 TCP 192.168.160.160:5503->192.168.160.160:48876 (CLOSE_WAIT)
modbus-pr 797 root 9u IPv4 5842665 0t0 TCP 192.168.160.160:5503->192.168.160.160:34496 (CLOSE_WAIT)
modbus-pr 797 root 10u IPv4 5854117 0t0 TCP 192.168.160.160:5503->192.168.160.160:48014 (CLOSE_WAIT)
modbus-pr 797 root 11u IPv4 5832205 0t0 TCP 192.168.160.160:5503->192.168.160.160:44668 (CLOSE_WAIT)
modbus-pr 797 root 12u IPv4 5838077 0t0 TCP 192.168.160.160:5503->192.168.160.160:54076 (CLOSE_WAIT)
modbus-pr 797 root 13u IPv4 5849528 0t0 TCP 192.168.160.160:5503->192.168.160.160:56412 (CLOSE_WAIT)
modbus-pr 797 root 14u IPv4 5850827 0t0 TCP 192.168.160.160:5503->192.168.160.160:38014 (CLOSE_WAIT)
modbus-pr 797 root 15u IPv4 5860437 0t0 TCP 192.168.160.160:5503->192.168.160.160:44496 (CLOSE_WAIT)
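The CLOSE_WAIT pile-up above can be reproduced with nothing but the standard library. The following sketch (plain sockets, not modbus-proxy code, and Linux-only because it counts entries in /proc/self/fd) shows the mechanism: once the peer closes its end, `recv()` returns `b''` on the held socket, but the file descriptor is only released when our side calls `close()` as well:

```python
import os
import socket

def count_fds():
    # Linux-specific: each entry in /proc/self/fd is an open descriptor.
    return len(os.listdir("/proc/self/fd"))

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # any free port
srv.listen(1)

cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
cli.close()                  # peer sends FIN -> conn is now in CLOSE_WAIT

before = count_fds()
eof = conn.recv(1024)        # b'': the only in-band cue that the peer is gone
after_recv = count_fds()     # unchanged: seeing EOF frees nothing by itself

conn.close()                 # the step a leaking proxy skips
after_close = count_fds()
print(eof, before == after_recv, after_close == after_recv - 1)
```

A process that keeps accepting connections but never performs that final `close()` accumulates exactly the CLOSE_WAIT descriptors visible in the lsof output, until it hits the open-files limit.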
Hi. Sounds like a typical Linux system issue, not specific to this project: https://easyengine.io/tutorials/linux/increase-open-files-limit/
> Issue not specific to this project
Actually, it's quite the opposite. Please have a look at the lsof dump for modbus-proxy above: the process keeps unclosed sockets in the CLOSE_WAIT state, which means the socket has received a FIN and the operating system is waiting for the process to close the connection locally. See: TCP socket states for CLOSE-WAIT.
> Sounds like typical...
Indeed, it sounds like a typical socket leak, where the suggestion to increase the maximum number of open files will only postpone the problem rather than fix it at the root cause.
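Without digging into the proxy's internals, the shape of a root-cause fix is still easy to sketch: in an asyncio server, the accepted client connection must be closed in a `finally` block even when connecting to the upstream device fails. Everything below (ports, the `handle_client` handler) is a hypothetical minimal example under that assumption, not modbus-proxy's actual code:

```python
import asyncio

async def handle_client(reader, writer):
    try:
        # Hypothetical unreachable upstream, like device C behind Tasmota;
        # connecting to a closed local port fails fast with ECONNREFUSED.
        await asyncio.open_connection("127.0.0.1", 1)
    except OSError:
        pass  # upstream down: log/report, but fall through to cleanup
    finally:
        writer.close()              # without this, the client's FIN leaves
        await writer.wait_closed()  # our socket stuck in CLOSE_WAIT

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    data = await reader.read()      # b'' once the handler closes its end
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return data

print(asyncio.run(main()))  # prints: b''
```

With the `finally` in place the client promptly observes EOF and the server-side descriptor is released, so polling a permanently dead device merely produces a connection error on every cycle instead of one leaked fd per cycle.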
I have the same issue, with only 2 clients :(