mhddos_proxy_releases
What about network adjustments?
Hello. I use this software on my home PC and need a bit more network flexibility. First, I'd like to be able to specify an interface or IP address to bind to, so I can separate its traffic. As a variant, or in addition, an option to set a specific DSCP value so the traffic can be classified in QoS management. And one more thing: what about some rate or performance limiting, ideally adjustable while the software is running? I also use it on a friend's home PC, and it causes extra load on his network's router.
Thanks for the ideas. We'll try to take a look into them, but I won't give a specific ETA.
--bind
option is coming in the next release allowing to specify local address(es) to use.
The other two are wontfix for now.
Bind somehow doesn't work for me. I tried specifying both a local address and an interface name, but neither works. Distress with a specified interface works correctly, and traceroute and similar tools with the same source address work as expected, but mhddos_proxy does not. I use a Debian buster-based distribution with NetworkManager, and I created a PPPoE interface specifically for this purpose. What am I doing wrong?
Hi, sorry for the slow response. There seem to be two issues.
- One of the methods (BYPASS) was not updated to support this feature, and we've been using it heavily lately. This will be fixed in the next release.
- There seems to be a certain nuance with setting a local address vs. an interface for the socket. What I observed is that packets with the correct source_ip were going out through the wrong interface because of the routing table configuration (they were going to the VPN interface, which openvpn had set as the default gateway). Unfortunately, Python makes it very hard to specify the socket's interface, but it does allow specifying local_addr, so we decided to keep it as is. If that's your case, you'll have to configure the routing table so that these packets (with the correct source_ip) go out through the correct interface.
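Editor's note: the address-vs-interface distinction above can be shown with a minimal standard-library sketch (this is not mhddos_proxy's actual code; the interface name in the comment is only illustrative). Binding a socket to a local address fixes the source IP of outgoing packets, but the kernel routing table still chooses the egress interface; pinning the interface itself is a separate, Linux-only socket option.

```python
import socket

# 1) local_addr-style binding: the source IP is fixed, but the egress
#    interface is still chosen by the kernel routing table.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))            # fixed source IP, ephemeral port
src_ip, src_port = s.getsockname()  # -> ("127.0.0.1", <ephemeral port>)

# 2) Binding to an INTERFACE is a different mechanism: SO_BINDTODEVICE
#    (Linux-only, exposed in Python 3.8+), which typically requires
#    elevated privileges. This is why relying on local_addr plus routing
#    rules is the simpler option from Python.
if hasattr(socket, "SO_BINDTODEVICE"):
    # s.setsockopt(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b"eno1v3660")
    pass

s.close()
```

The same local_addr idea applies to asyncio connections, which accept a `local_addr=(ip, port)` argument when opening a connection.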
Thanks for the reply. I guess it is caused by the BYPASS method, because I used traceroute and mtr with a specified source address and they work correctly with it, and I can see it. I'll wait for the next release; thanks for your attention.
In addition, what about a flag to apply the bind facility only to direct connections? I mean: bind direct connections to the selected addresses without affecting connections through proxies. That would really help with load balancing, and would make it possible to use even a VPN with weak bandwidth without losing overall performance through the proxies.
The issue is still present on v61. With bind and a bind-to-direct facility I could mix attacks through proxies going directly over my hosts' internet connections with direct attacks through one or several shared VPN hubs. For now I run distress separately for the direct attacks and mhddos_proxy for the proxies. Any updates on this issue?
--bind
should be fixed by now... As for the flag: the binding feature is already pretty niche, and we're just not sure another option for this specific use case is worth adding.
I just checked it again and nothing has changed. I use a host with several interfaces and several default routes. Distress works with several default routes in the main table but doesn't work with a separate routing table; mhddos_proxy doesn't work in either case, while mtr, traceroute and similar system tools work as expected in both. Here is my config. The interfaces I use:
eno1v3660: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.240.64.16 netmask 255.255.255.0 broadcast 10.240.64.255
inet6 (hidden global IP) prefixlen 64 scopeid 0x0<global>
inet6 fe80::d9e9:f8bd:8459:b98e prefixlen 64 scopeid 0x20<link>
ether (hidden MAC) txqueuelen 1000 (Ethernet)
RX packets 4718735235 bytes 566476462993 (527.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10276078097 bytes 10731149254230 (9.7 TiB)
TX errors 0 dropped 103238 overruns 0 carrier 0 collisions 0
eno1v3661: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.120.9 netmask 255.255.255.248 broadcast 172.16.120.15
inet6 fe80::670d:84e1:a756:d0bb prefixlen 64 scopeid 0x20<link>
ether (hidden MAC) txqueuelen 1000 (Ethernet)
RX packets 154 bytes 12176 (11.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6431 bytes 276695 (270.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Config with routes in different tables:
owner@8000-host-core:~$ ip route list
default via 10.240.64.64 dev eno1v3660 proto dhcp metric 400
10.240.64.0/24 dev eno1v3660 proto kernel scope link src 10.240.64.16 metric 400
169.254.0.0/16 dev eno1v3660 scope link metric 1000
owner@8000-host-core:~$ ip route list table 16
default via 172.16.120.10 dev eno1v3661 proto dhcp metric 20401
172.16.120.8/29 dev eno1v3661 proto kernel scope link src 172.16.120.9 metric 401
Config with several default routes in the main table:
owner@8000-host-core:~$ ip route list
default via 10.240.64.64 dev eno1v3660 proto dhcp metric 400
default via 172.16.120.10 dev eno1v3661 proto dhcp metric 1200
10.240.64.0/24 dev eno1v3660 proto kernel scope link src 10.240.64.16 metric 400
169.254.0.0/16 dev eno1v3660 scope link metric 1000
172.16.120.8/29 dev eno1v3661 proto kernel scope link src 172.16.120.9 metric 1200
owner@8000-host-core:~$ ip route list table 16
owner@8000-host-core:~$
Am I doing something wrong? Can you show how it works for you?
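Editor's note (a sketch, not from the thread): a non-main routing table such as table 16 above is only consulted when a policy rule points at it, so the first configuration would also need a source-based rule for the bound address. The interface and addresses below are taken from the listings above; the destination 1.1.1.1 is an arbitrary example.

```shell
# Policy routing sketch: make packets whose source address is 172.16.120.9
# consult routing table 16 (which holds the eno1v3661 default route).
ip rule add from 172.16.120.9 lookup 16

# Verify which route the kernel would pick for that source address;
# on the configuration above this should resolve via 172.16.120.10
# on dev eno1v3661 if the rule matches.
ip route get 1.1.1.1 from 172.16.120.9
```

Without such a rule, a socket bound to 172.16.120.9 still gets routed by the main table, which matches the wrong-interface symptom described earlier in the thread.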
Understood. I thought the problem was with BYPASS. I am not using this feature :D I'll try to take a look when I have time, but I can't promise much; it's low priority for us right now. Have you tried to debug the packets with Wireshark and see what the difference between mhddos and traceroute is at the IP level?
I just tested it again with traceroute (it asks for an interface to bind to, not an address) and with mhddos, and dumped the traffic at the gateway. I see a lot of packets with the TCP flags RST, ACK in the mhddos_proxy dump. How can I send these dumps to you privately?
You can send it to [email protected]
Wontfix