bug: failed to establish port mapping: goupnp: error performing SOAP HTTP request after LAN (upnp) router restart
Checklist
- [X] This is a bug report, not a question. Ask questions on discuss.ipfs.io.
- [X] I have searched on the issue tracker for my bug.
- [X] I am running the latest kubo version or have an issue updating.
Installation method
built from source
Version
Kubo version: 0.20.0-dev-1457b4fd4
Repo version: 13
System version: arm64/darwin
Golang version: go1.19
Config
{
"API": {
"HTTPHeaders": {}
},
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Announce": [],
"AppendAnnounce": [],
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001",
"/ip4/0.0.0.0/udp/4001/quic",
"/ip4/0.0.0.0/udp/4001/quic-v1",
"/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
"/ip6/::/udp/4001/quic",
"/ip6/::/udp/4001/quic-v1",
"/ip6/::/udp/4001/quic-v1/webtransport"
]
},
"AutoNAT": {},
"Bootstrap": [
"/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
"/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"
],
"DNS": {
"Resolvers": {}
},
"Datastore": {
"BloomFilterSize": 0,
"GCPeriod": "1h",
"HashOnRead": false,
"Spec": {
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
},
"StorageGCWatermark": 90,
"StorageMax": "10GB"
},
"Discovery": {
"MDNS": {
"Enabled": true
}
},
"Experimental": {
"AcceleratedDHTClient": false,
"FilestoreEnabled": false,
"GraphsyncEnabled": false,
"Libp2pStreamMounting": false,
"P2pHttpProxy": false,
"StrategicProviding": false,
"UrlstoreEnabled": false
},
"Gateway": {
"APICommands": [],
"HTTPHeaders": {
"Access-Control-Allow-Headers": [
"X-Requested-With",
"Range",
"User-Agent"
],
"Access-Control-Allow-Methods": [
"GET"
],
"Access-Control-Allow-Origin": [
"*"
]
},
"NoDNSLink": false,
"NoFetch": false,
"PathPrefixes": [],
"PublicGateways": null,
"RootRedirect": "",
"Writable": false
},
"Identity": {
"PeerID": "12D3KooWQF6Q3i1QkziJQ9mkNNcyFD8GPQz6R6oEvT75wgsVXm4v"
},
"Internal": {},
"Ipns": {
"RecordLifetime": "",
"RepublishPeriod": "",
"ResolveCacheSize": 128
},
"Migration": {
"DownloadSources": [],
"Keep": ""
},
"Mounts": {
"FuseAllowOther": false,
"IPFS": "/ipfs",
"IPNS": "/ipns"
},
"Peering": {
"Peers": null
},
"Pinning": {
"RemoteServices": {}
},
"Plugins": {
"Plugins": null
},
"Provider": {
"Strategy": ""
},
"Pubsub": {
"DisableSigning": false,
"Router": ""
},
"Reprovider": {},
"Routing": {
"Methods": null,
"Routers": null
},
"Swarm": {
"AddrFilters": null,
"ConnMgr": {},
"DisableBandwidthMetrics": false,
"DisableNatPortMap": false,
"RelayClient": {},
"RelayService": {},
"ResourceMgr": {},
"Transports": {
"Multiplexers": {},
"Network": {},
"Security": {}
}
}
}
Description
Ran my kubo daemon with GOLOG_LOG_LEVEL="warn,webtransport=debug,reuseport-transport=warn,net/identify=error" cmd/ipfs/ipfs daemon 2>&1 | tee kubo-log.txt because I was debugging other stuff. I had left it running since Friday, and my laptop slept over the weekend. When I opened it back up, these errors were being spammed:
2023-03-25T04:06:08.652-0700 WARN nat nat/nat.go:179 failed to establish port mapping: goupnp: error performing SOAP HTTP request: Post "http://192.168.1.1:36171/ctl/IPConn": dial tcp 192.168.1.1:36171: connect: connection refused
full log at https://bafybeiaxuasctnq3sgwwgtgdk6q2rlvuwe4s37bfzin7z7cnt6byesdpvq.ipfs.w3s.link/bafybeig6sodjrftg5eetjzlcvm7nx4qcwgwbe72n4ps6nuebdemj5aa62u
Repro?
Not yet. I'll add to this if it occurs again after a long sleep.
Killing the daemon with CTRL+C and restarting it does not show these errors.
Other Details
My public IP (a dynamic IP from my ISP) is different from what it was on Friday.
Clear bugs (to me)
- Spamming of port mapping attempts even though they have already failed hundreds of times. There are 4-6 requests PER MINUTE. This is too much; fix this. Exponential backoff and a circuit breaker are required (unless this log is emitted outside of those functions, which I doubt; if it is, add more log info). A rough sketch of what I mean follows this list.
- The logic that successfully establishes the port mapping during daemon start does not appear to be re-run after these failures, and it SHOULD be. If I can kill the kubo daemon and immediately start it again without seeing these errors, you should be able to handle this case without me.
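To illustrate, here is a minimal sketch of the retry behavior I'd expect; the names and structure are hypothetical and mine, not the actual go-libp2p/nat code. The idea is simply to back off exponentially between attempts instead of hammering the router several times a minute.

```go
package natsketch

import (
	"context"
	"time"
)

// mapWithBackoff is a hypothetical wrapper around a single port mapping
// attempt (tryMap). It doubles the wait after each failure, capped at
// maxDelay, so a dead or restarting router is not hit 4-6 times per minute.
func mapWithBackoff(ctx context.Context, tryMap func() error) {
	delay := 10 * time.Second
	const maxDelay = 10 * time.Minute
	for {
		if err := tryMap(); err == nil {
			return // mapping established
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(delay):
		}
		// Double the delay after each failure instead of retrying at a fixed rate.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```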
Related issues
Couldn't find any in this repo, but I found the following in lotus, which I don't think actually address the root issue:
- https://github.com/filecoin-project/lotus/issues/2571
- https://github.com/filecoin-project/lotus/issues/4083
- https://github.com/filecoin-project/lotus/pull/7204
Potentially related issues
- https://github.com/ipfs/kubo/issues/3320
Note that I related #3320 because this type of behavior could easily kill routers... but it did not kill mine.
How many other calls are we sending without exponential backoff and circuit breaking on failures?
Do we not have exponential backoff and/or circuit breaking? Why is nat/nat.go:179 called SO many times per minute? It looks like we have some throttling, https://github.com/ipfs/kubo/blob/88d431c8129aeef2ea5076336412cb54161342ce/core/node/libp2p/nat.go#L20, but that appears to be on an individual basis, and there are multiple consumers/deps trying to use AutoNatService.
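To spell out what I mean by circuit breaking (again, hypothetical names, not the existing kubo or go-libp2p code): after N consecutive failures the node should stop calling the router at all for a cool-down window, shared across callers, rather than throttling each consumer independently while several subsystems keep retrying.

```go
package natsketch

import (
	"errors"
	"sync"
	"time"
)

// breaker is a hypothetical shared circuit breaker for port mapping calls:
// after maxFailures consecutive errors it rejects further attempts until the
// cool-down has passed, regardless of which subsystem is asking.
type breaker struct {
	mu          sync.Mutex
	failures    int
	openUntil   time.Time
	maxFailures int
	coolDown    time.Duration
}

var errOpen = errors.New("port mapping circuit open; skipping attempt")

func (b *breaker) call(attempt func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return errOpen // circuit open: don't touch the router at all
	}
	b.mu.Unlock()

	err := attempt()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openUntil = time.Now().Add(b.coolDown)
		}
		return err
	}
	b.failures = 0 // success closes the circuit again
	return nil
}
```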
I've stumbled upon the same error while testing Kubo v0.35.0-rc1 in my LAN with GOLOG_LOG_LEVEL="error,autotls=info,nat=info":
2025-05-09T02:15:16.920Z INFO autotls.maintenance [..]
[...]
2025-05-09T02:17:27.986Z WARN nat nat/nat.go:244 failed to establish port mapping: goupnp: error performing SOAP HTTP request: Post "http://192.168.50.1:56583/ctl/IPConn": dial tcp 192.168.50.1:56583: connect: connection refused
2025-05-09T02:17:28.000Z WARN nat nat/nat.go:244 failed to establish port mapping: goupnp: error performing SOAP HTTP request: Post "http://192.168.50.1:56583/ctl/IPConn": dial tcp 192.168.50.1:56583: connect: connection refused
2025-05-09T13:40:48.039Z WARN nat nat/nat.go:244 failed to establish port mapping: goupnp: error performing SOAP HTTP request: Post "http://192.168.1.1:56583/ctl/IPConn": dial tcp 192.168.1.1:56583: connect: connection refused
2025-05-09T13:40:48.057Z WARN nat nat/nat.go:244 failed to establish port mapping: goupnp: error performing SOAP HTTP request: Post "http://192.168.1.1:56583/ctl/IPConn": dial tcp 192.168.1.1:56583: connect: connection refused
2025-05-09T13:41:08.071Z WARN nat nat/nat.go:244 failed to establish port mapping: goupnp: error performing SOAP HTTP request: Post "http://192.168.1.1:56583/ctl/IPConn": dial tcp 192.168.1.1:56583: connect: connection refused
[...] many log lines
FWIW I checked my LAN router logs, and it had entries about other ports expiring around that time, but not 56583 (it could be it expired earlier; I just don't have logs going that far back). We also had the router restarted a bit later:
May 9 00:20:16 miniupnpd[24912]: remove port mapping 61379 UDP because it has expired
May 9 00:21:23 miniupnpd[24912]: remove port mapping 41641 TCP because it has expired
May 9 00:21:23 miniupnpd[24912]: remove port mapping 41641 UDP because it has expired
[...]
May 9 04:17:17 miniupnpd[24912]: shutting down MiniUPnPd
[...]
May 9 04:17:17 miniupnpd[19227]: Listening for NAT-PMP/PCP traffic on port 5351
When restarting Kubo, the problem goes away – goupnp works as expected and a publicly dialable port is obtained again just fine.
How to reproduce
TLDR: Run with GOLOG_LOG_LEVEL="error,nat=info" and restart the LAN router that is responsible for port forwarding.
I confirmed that initial port forwarding works as expected; I also see it in the logs of miniupnpd running on my router:
May 9 04:17:17 miniupnpd[19227]: HTTP listening on port 49682
Then I restarted it (the router running miniupnpd).
The router model I have does not persist miniupnpd mappings across restarts, and as soon as I restarted it, it "forgot" all mappings; Kubo lost its public addr and started logging this error every ~20 seconds:
2025-05-09T14:09:19.965Z WARN nat nat/nat.go:244 failed to establish port mapping: goupnp: error performing SOAP HTTP request: Post "http://192.168.1.1:49682/ctl/IPConn": dial tcp 192.168.1.1:49682: connect: connection refused
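My reading of these logs (an assumption on my part; I have not traced the go-libp2p/nat code to confirm it): miniupnpd listens on a new random control port after every restart, and Kubo keeps POSTing to the control URL it discovered at startup, so every retry hits a dead endpoint. Conceptually the recovery would be to re-run SSDP discovery once the cached endpoint stops answering, something like the hypothetical sketch below (Gateway, discoverGateway, and addPortMapping are stand-ins for goupnp's discovered IGD client and its AddPortMapping SOAP call, not its real API):

```go
package natsketch

import (
	"context"
	"errors"
)

// Hypothetical stand-ins for the real goupnp discovery and SOAP calls.
type Gateway struct{ ControlURL string }

func discoverGateway(ctx context.Context) (*Gateway, error) { return nil, errors.New("stub") }

func addPortMapping(ctx context.Context, gw *Gateway, port int) error { return errors.New("stub") }

// ensureMapping falls back to a fresh SSDP discovery when the cached control
// URL (e.g. http://192.168.1.1:49682/ctl/IPConn) stops answering, instead of
// retrying the dead endpoint forever.
func ensureMapping(ctx context.Context, gw *Gateway, port int) (*Gateway, error) {
	if err := addPortMapping(ctx, gw, port); err == nil {
		return gw, nil
	}
	fresh, err := discoverGateway(ctx)
	if err != nil {
		return gw, err // router may still be rebooting; keep the old handle and retry later
	}
	return fresh, addPortMapping(ctx, fresh, port)
}
```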
Why this is bad
Losing the forwarded port and not being able to recover from that state is a problem.
When Kubo is in this state, it is no longer publicly dialable, likely leading to retrieval failures (if hole punching fails, etc.).
I checked ipfs id from a remote node, and my LAN node no longer has a public addr; it is behind a relay (dialable only via /p2p-circuit addresses).
I'm raising this to P1 – this is a bug that impacts every desktop node that depends on port mapping done by go-libp2p/nat|goupnp, and this failure to re-request a mapping is likely causing many self-hosted providers to run with degraded connectivity.
This is tracked in https://github.com/libp2p/go-libp2p/issues/3224
I was able to reproduce this with just go-libp2p (https://github.com/libp2p/go-libp2p/issues/3224#issuecomment-2866844723); we need to wait for that upstream issue to be resolved.
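For reference, a standalone repro against go-libp2p looks roughly like the sketch below (my own minimal version, not the exact code from the linked comment): start a host with the NAT port mapping option enabled, watch its advertised addresses, and restart the router; with the current behavior the UPnP-mapped public address never comes back until the process is restarted.

```go
package main

import (
	"fmt"
	"time"

	"github.com/libp2p/go-libp2p"
)

func main() {
	// NATPortMap enables the same UPnP/NAT-PMP mapping path Kubo uses when
	// Swarm.DisableNatPortMap is false.
	h, err := libp2p.New(
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/tcp/4001"),
		libp2p.NATPortMap(),
	)
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Print the advertised addresses every 30s; restart the LAN router while
	// this runs and check whether the public (mapped) address ever reappears.
	for {
		fmt.Println(time.Now().Format(time.RFC3339), h.Addrs())
		time.Sleep(30 * time.Second)
	}
}
```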
Fix is in https://github.com/libp2p/go-libp2p/pull/3367; hoping to include it in 0.38 if the stars align.