kubo
[META] Private Addresses In Public
You may have noticed that IPFS currently announces private addresses on the public DHT. While IPFS does support local discovery through mDNS, (a) mDNS isn't always perfectly reliable and (b) when trying to find a specific peer, we don't actually query mDNS.
However, announcing private addresses to the public network is far from an ideal solution. There have been many discussions about this issue (#1226, #1173, #4343, #5418, #5511) and some deeper discussions in https://github.com/libp2p/go-libp2p/issues/436.
Problems include:
- Unnecessary network activity. Local dials are usually pretty cheap but they're still a waste.
- Triggers port scanning warnings (https://github.com/ipfs/go-ipfs/issues/4343). This is actually a rather complicated issue that includes other issues like dialing too many nodes (should be fixed by incoming DHT improvements), accumulating too many observed addresses (should be fixed by signed peer routing records), and just the general "IPFS is a p2p application that needs to dial a lot of nodes".
However, we have good news!
- Libp2p is introducing signed peer routing records. This means we're getting atomic peer routing records that can't be spoofed, modified, extended, etc. This will land in 0.6.0.
- Once that lands, libp2p will add a record "scope" (https://github.com/libp2p/go-libp2p/issues/793).
- Go-ipfs 0.5.0 ~should (unless something changes) ship~ ships two DHTs. A public one and a private one: https://github.com/libp2p/go-libp2p/issues/803.
Once all three of those land, we should be able to restrict private addresses to the private DHT.
I hate to bump this, but it's quite a low-hanging fruit and is causing many headaches: it requires manual intervention on every installed node to stop it pushing private IPs to the DHT, and also to stop it trying to connect to private IPs received from the public DHT.
Is there a chance to get this on the roadmap for 0.9? :)
Bump. Lots of people and orgs are hosting IPFS on Hetzner.
Bump. Just got a warning from Hetzner...
This is happening when you are running with server profile? Is it a similar issue like https://github.com/ipfs/go-ipfs/issues/5418#issuecomment-423743616?
I thought I did, but they removed my warning after config patching it, so now I guess the issue was on my side - sorry for the unnecessary bump
Can you give more details on your setup? Are you using a dedicated server, or cloud?
I'm trying to install IPFS on a Hetzner cloud server; I got a warning on my first try.
I just got my server locked by hetzner due to this 😭
Hetzner's network team gets very unpleasant about these.
Maybe IPFS could check the first non-loopback interface to see if that address is RFC1918 and then make local peer discovery opt-in at that point?
Update: I got ejected from Hetzner again; the server profile does not fix this issue.
(Why? Because it doesn't ignore the CGNAT range, and apparently people run IPFS on nodes with Tailscale installed.)
I'm dropping IPFS from my stack for the time being.
@acuteaura IPFS does not rate limit connections at all, and this repo has not acknowledged deep-running issues like this for multiple years; you're SOL. I heavily recommend permanently dropping IPFS from your stack.
@ShadowJonathan do you know if there is any good alternative? I'm running Graph Node with a subgraph, so I'm looking for a different approach that doesn't involve running an IPFS node...
@sharp2448 go-ipfs v0.13 will have a ResourceManager so rate limiting will be possible pretty soon.
@Winterhuman Ok, sounds good. Thanks for info
I have a node with this configuration:

```json
"Swarm": {
  "AddrFilters": [
    "/ip4/10.0.0.0/ipcidr/8",
    "/ip4/100.64.0.0/ipcidr/10",
    "/ip4/169.254.0.0/ipcidr/16",
    "/ip4/172.16.0.0/ipcidr/12",
    "/ip4/192.0.0.0/ipcidr/24",
    "/ip4/192.0.2.0/ipcidr/24",
    "/ip4/192.168.0.0/ipcidr/16",
    "/ip4/198.18.0.0/ipcidr/15",
    "/ip4/198.51.100.0/ipcidr/24",
    "/ip4/203.0.113.0/ipcidr/24",
    "/ip4/240.0.0.0/ipcidr/4",
    "/ip6/100::/ipcidr/64",
    "/ip6/2001:2::/ipcidr/48",
    "/ip6/2001:db8::/ipcidr/32",
    "/ip6/fc00::/ipcidr/7",
    "/ip6/fe80::/ipcidr/10"
  ]
}
```

But the hosting provider Hetzner still complains about port scans of private networks. The ipfs config was initialized with `--profile=server`.
@varuzam this is a different issue, please comment on https://github.com/ipfs/kubo/issues/8585
I am not able to reproduce the issue you are seeing; if you can help do so, please comment on https://github.com/ipfs/kubo/issues/8585.
Same here: my server at Hetzner has been blocked because IPFS is sending from port 4001 to many private IP ranges I never set in the config... what are the settings to stop this chaos?
@ROBERT-MCDOWELL this is a different issue, please comment on https://github.com/ipfs/kubo/issues/8585
Bumping. After updating to the latest version (I had been running a pretty old version for a while) I am getting abuse reports from my dedicated server provider. I have applied the `server` profile using:

```shell
$ ipfs config profile apply server
```
This is my `Swarm.AddrFilters` configuration key:

```shell
$ cat /srv/data/live/ipfs/config | jq .Swarm.AddrFilters
[
  "/ip4/10.0.0.0/ipcidr/8",
  "/ip4/100.64.0.0/ipcidr/10",
  "/ip4/169.254.0.0/ipcidr/16",
  "/ip4/172.16.0.0/ipcidr/12",
  "/ip4/192.0.0.0/ipcidr/24",
  "/ip4/192.0.0.0/ipcidr/29",
  "/ip4/192.0.0.8/ipcidr/32",
  "/ip4/192.0.0.170/ipcidr/32",
  "/ip4/192.0.0.171/ipcidr/32",
  "/ip4/192.0.2.0/ipcidr/24",
  "/ip4/192.168.0.0/ipcidr/16",
  "/ip4/198.18.0.0/ipcidr/15",
  "/ip4/198.51.100.0/ipcidr/24",
  "/ip4/203.0.113.0/ipcidr/24",
  "/ip4/240.0.0.0/ipcidr/4",
  "/ip6/100::/ipcidr/64",
  "/ip6/2001:2::/ipcidr/48",
  "/ip6/2001:db8::/ipcidr/32",
  "/ip6/fc00::/ipcidr/7",
  "/ip6/fe80::/ipcidr/10"
]
```
The `ipfs` daemon seems to simply ignore these. I also have the `$IPFS_PROFILE` environment variable set to `server`. None of this works, even though at least the envvar used to be enough on its own.

I do not understand why it seems impossible to instruct the `ipfs` daemon to just ignore certain addresses or classes of addresses. Private DHT or no, simply blacklisting certain groups of IPs is a simple, effective solution that used to work well.
As it stands, it is now much, much harder to run the `ipfs` daemon for a service I had been running for years. This is very disappointing and a serious problem for anyone running the daemon, as evidenced by the plethora of issues linked in this very ticket.
Duplicate issue: https://github.com/ipfs/kubo/issues/8585