Kubo daemon memory keeps growing over time (heap dominated by go-merkledag ProtoNode.Copy)
Checklist
- [X] This is a bug report, not a question. Ask questions on discuss.ipfs.tech.
- [X] I have searched on the issue tracker for my bug.
- [X] I am running the latest kubo version or have an issue updating.
Installation method
built from source
Version
v0.17.0
Config
{
"API": {
"HTTPHeaders": {
"Access-Control-Allow-Credentials": [
"true"
],
"Access-Control-Allow-Methods": [
"PUT",
"POST",
"GET",
"DELETE",
"OPTIONS"
],
"Access-Control-Allow-Origin": [
"*"
]
}
},
"Addresses": {
"API": "/ip4/0.0.0.0/tcp/5001",
"Announce": [],
"AppendAnnounce": [],
"Gateway": "/ip4/0.0.0.0/tcp/18080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001",
"/ip4/0.0.0.0/udp/4001/quic",
"/ip6/::/udp/4001/quic"
]
},
"AutoNAT": {},
"Bootstrap": [
"/ip4/162.52.15.32/tcp/4001/ipfs/12D3KooWMsEzYYzHi7kNKjgZu95TbVfUQc5N8whi1jnZWr8Gooup"
],
"DNS": {
"Resolvers": {}
},
"Datastore": {
"BloomFilterSize": 0,
"GCPeriod": "1h",
"HashOnRead": false,
"Spec": {
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
},
"StorageGCWatermark": 90,
"StorageMax": "9000GB"
},
"Discovery": {
"MDNS": {
"Enabled": false
}
},
"Experimental": {
"AcceleratedDHTClient": false,
"FilestoreEnabled": false,
"GraphsyncEnabled": false,
"Libp2pStreamMounting": false,
"P2pHttpProxy": false,
"StrategicProviding": false,
"UrlstoreEnabled": false
},
"Gateway": {
"APICommands": [],
"HTTPHeaders": {
"Access-Control-Allow-Headers": [
"X-Requested-With",
"Range",
"User-Agent"
],
"Access-Control-Allow-Methods": [
"PUT",
"GET",
"POST"
],
"Access-Control-Allow-Origin": [
"*"
]
},
"NoDNSLink": false,
"NoFetch": false,
"PathPrefixes": [],
"PublicGateways": null,
"RootRedirect": "",
"Writable": false
},
"Identity": {
"PeerID": "12D3KooWMKTT1VNHU9ByC9GgzBiDQqqmX8iMz3rnN2oak3Rvik1i"
},
"Internal": {},
"Ipns": {
"RecordLifetime": "",
"RepublishPeriod": "",
"ResolveCacheSize": 128
},
"Migration": {
"DownloadSources": [],
"Keep": ""
},
"Mounts": {
"FuseAllowOther": false,
"IPFS": "/ipfs",
"IPNS": "/ipns"
},
"Peering": {
"Peers": null
},
"Pinning": {
"RemoteServices": {}
},
"Plugins": {
"Plugins": null
},
"Provider": {
"Strategy": ""
},
"Pubsub": {
"DisableSigning": false,
"Router": ""
},
"Reprovider": {
"Interval": "12h",
"Strategy": "all"
},
"Routing": {
"Methods": null,
"Routers": null,
"Type": "dht"
},
"Swarm": {
"AddrFilters": [],
"ConnMgr": {
"GracePeriod": "20s",
"HighWater": 900,
"LowWater": 600,
"Type": "basic"
},
"DisableBandwidthMetrics": false,
"DisableNatPortMap": true,
"RelayClient": {},
"RelayService": {},
"ResourceMgr": {},
"Transports": {
"Multiplexers": {},
"Network": {},
"Security": {}
}
}
}
Description
I found that my IPFS daemon's memory usage keeps growing after it has been running for a while, which looks like a memory leak, so I captured a heap profile and analyzed it with go tool pprof ipfs.heap. Have you encountered this situation before? Which operation is causing these allocations to accumulate, and how can I avoid them?
go tool pprof ipfs.heap
File: ipfs
Build ID: 957c4935f18c61c943d7fd70f5daa89cc3833e52
Type: inuse_space
Time: Oct 17, 2024 at 9:03am (UTC)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top 10
Showing nodes accounting for 742.45MB, 94.65% of 784.42MB total
Dropped 157 nodes (cum <= 3.92MB)
Showing top 10 nodes out of 123
      flat  flat%   sum%        cum   cum%
  340.32MB 43.39% 43.39%   340.32MB 43.39%  github.com/ipfs/go-merkledag.(*ProtoNode).Copy
  329.27MB 41.98% 85.36%   329.27MB 41.98%  google.golang.org/protobuf/encoding/protowire.AppendBytes (inline)
   19.02MB  2.42% 87.79%   355.29MB 45.29%  github.com/ipfs/go-merkledag.(*ProtoNode).marshalImmutable
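For reference, this is roughly how the heap snapshots can be collected over time so that growth between runs can be compared. It is a minimal sketch, not part of Kubo: it assumes the daemon exposes the standard Go net/http/pprof handlers under /debug/pprof/ on the API address configured above (port 5001), and the file names and the 30-minute interval are arbitrary choices for illustration.

```go
// heapsnap.go - periodically save heap profiles from a running Kubo daemon.
// Assumption: the daemon serves the standard Go pprof handlers on the API
// port (http://127.0.0.1:5001/debug/pprof/heap); adjust the URL if your API
// address differs from the config above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	const url = "http://127.0.0.1:5001/debug/pprof/heap"
	for i := 0; ; i++ {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Fprintln(os.Stderr, "fetch heap profile:", err)
			os.Exit(1)
		}

		// Hypothetical naming scheme; each snapshot gets an increasing index.
		name := fmt.Sprintf("ipfs-heap-%d.pb.gz", i)
		f, err := os.Create(name)
		if err != nil {
			fmt.Fprintln(os.Stderr, "create file:", err)
			os.Exit(1)
		}
		if _, err := io.Copy(f, resp.Body); err != nil {
			fmt.Fprintln(os.Stderr, "write profile:", err)
			os.Exit(1)
		}
		resp.Body.Close()
		f.Close()
		fmt.Println("saved", name)

		// Arbitrary interval between snapshots.
		time.Sleep(30 * time.Minute)
	}
}
```

Two snapshots taken this way can then be compared with `go tool pprof -base ipfs-heap-0.pb.gz ipfs-heap-1.pb.gz` to see which allocations keep growing between them.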