MTProxy
assert (!(p & 0xffff0000)); /*where p is pid*/
Hello, friends!
On my hosting, when the application starts, it hits
assert (!(p & 0xffff0000));
in
common/pid.c:32
If I comment out this line, the application launches, but all connect() system calls return EINPROGRESS and the process consumes ~70% of the CPU.
Could you clarify what this check, p & 0xffff0000, does, and which system permissions need to be checked? On another hosting everything works without issue.
And by the way, yes, the process PID is > 110000, if that matters; apparently it does.
```
iqdoctor@raccoon4x:~/MTProxy$ grep connect st | grep 43
connect(43, {sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("91.108.4.135")}, 16) = -1 EINPROGRESS (Operation now in progress)
connect(43, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("149.154.162.39")}, 16) = -1 EINPROGRESS (Operation now in progress)
connect(43, {sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("91.108.4.175")}, 16) = -1 EINPROGRESS (Operation now in progress)
```
Platform - Azure.
So, I've gone through the source code and realized that pid_t on most platforms is int == 4 bytes.
```c
struct process_id {
  unsigned ip;
  short port;
  unsigned short pid;
  int utime;
};
```
As you can see, PID.pid is an unsigned short == 2 bytes.
Doing assert (!(p & 0xffff0000)) ensures that the PID fits in those 2 bytes.
During handshakes, process_id is passed around, and it's used to guard against starting a handshake with one pid and finishing it with another.
However, I don't think this would cause much trouble for mtproxies, since they don't face such a huge load as Telegram's mtfronts.
Also, from what I see, mtproxy only forks as many times as the -M argument says, and that happens in pre_init.
So, the only problem I can see is if PIDs somehow change rapidly during those forks, so that the pid of one worker matches the pid of another worker when you only consider the last 2 bytes.
That would require 65536 processes to be spawned in between those forks, which I consider very unlikely.
So I think this check could be omitted, and perhaps the pids should instead be checked after all forks, so that for any x and y in pids, (x & 0xffff) != (y & 0xffff) where x != y.
A temporary solution is to add kernel.pid_max=65535 to /etc/sysctl.conf; that should be fine for most setups.
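For reference, applying that on a typical Linux box looks roughly like this (a sketch assuming root access and a sysctl-based distro):

```shell
# Cap PIDs at 65535 so every PID fits in the 16-bit process_id.pid field.
echo 'kernel.pid_max = 65535' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p                      # reload /etc/sysctl.conf
cat /proc/sys/kernel/pid_max        # verify the new limit
```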
In Ubuntu Focal, kernel.pid_max is raised to 4194304; this needs to be fixed ASAP.
> In Ubuntu Focal kernel.pid_max is raised to 4194304, this needs to be fixed ASAP.
Yes, I have noticed that mtproto-proxy fails to launch, but if you restart the OS it launches without problems. Apparently the app receives a high PID once the OS has been running for more than two days.
What puzzles me is why the service breaks, usually within 72 hours.
Are there any updates? Or has the project been abandoned?
> Are there any updates? Or has the project been abandoned?

Apparently, yes. I suggest just commenting out the line with the assertion.
It wouldn't be enough to just remove the assert; the internal npid_t structure should be changed as well.
One can check my simple fix for this issue; it works perfectly on my setup (Ubuntu 20.04.4 LTS in DO Cloud). https://github.com/TelegramMessenger/MTProxy/pull/486
Your fix makes mtproxy go crazy; without clients it consumes 70% of a core and actually doesn't work. // CentOS 8 Stream
Could you please provide some logs and timings, along with the system setup, so I can look into your problem? I don't see this behaviour on my setup, so it's hard to understand the reason for the craziness.
@heni btw, same for me: mtproxy is consuming a lot of CPU and it doesn't work at all. The proxy is unavailable, and mtproxy doesn't bind to the specified local port. No errors in the log.
> ... mtproxy is consuming much cpu ...
To reduce CPU time consumption, you can try my fork.
I've changed some logic around the epoll timeout, so that if you set the timeout to -1 and use multithreaded mode, epoll will never time out until an event is caught.