Investigate/fix resource-intensive goroutines, functions and SQL queries
I've been investigating the resource-intensive goroutines and functions. I think it's worth digging into these and, if possible, optimizing them.
Here is a quick snapshot from the profiler (pprof):
Showing top 20 nodes out of 57
flat flat% sum% cum cum%
14035.03MB 46.41% 46.41% 14035.03MB 46.41% reflect.unsafe_NewArray
5820.39MB 19.25% 65.65% 5820.39MB 19.25% github.com/jackc/pgtype.scanPlanString.Scan
3359.74MB 11.11% 76.76% 3359.74MB 11.11% main.(*ContentManager).pinContentOnShuttle
2507.11MB 8.29% 85.05% 2509.11MB 8.30% github.com/ipfs/go-cid.CidFromBytes
934.49MB 3.09% 88.14% 1146.58MB 3.79% github.com/libp2p/go-libp2p-kad-dht/fullrt.(*FullRT).bulkMessageSend
380.52MB 1.26% 89.40% 1527.10MB 5.05% github.com/libp2p/go-libp2p-kad-dht/fullrt.(*FullRT).ProvideMany
379.01MB 1.25% 90.65% 599.02MB 1.98% github.com/hashicorp/golang-lru/simplelru.(*LRU).Add
306MB 1.01% 91.66% 1833.10MB 6.06% github.com/ipfs/go-ipfs-provider/batched.(*BatchProvidingSystem).Run.func1
275.03MB 0.91% 92.57% 275.03MB 0.91% reflect.New
232.01MB 0.77% 93.34% 233.01MB 0.77% github.com/ipfs/go-ipfs-blockstore.cacheKey (inline)
220.01MB 0.73% 94.07% 220.01MB 0.73% container/list.(*List).insertValue (inline)
39.50MB 0.13% 94.20% 354.51MB 1.17% github.com/ipfs/go-ipfs-blockstore.(*arccache).cacheHave (inline)
32MB 0.11% 94.30% 282.51MB 0.93% github.com/ipfs/go-ipfs-blockstore.(*arccache).queryCache
22.50MB 0.074% 94.38% 379.08MB 1.25% github.com/libp2p/go-libp2p-kad-dht/internal/net.(*messageSenderImpl).SendMessage
18.50MB 0.061% 94.44% 443.09MB 1.47% github.com/libp2p/go-libp2p-kad-dht/fullrt.(*FullRT).ProvideMany.func1
10MB 0.033% 94.47% 5830.39MB 19.28% github.com/jackc/pgx/v4/stdlib.(*Rows).Next.func16
3MB 0.0099% 94.48% 223.01MB 0.74% encoding/json.Unmarshal
3MB 0.0099% 94.49% 850.53MB 2.81% github.com/ipfs/go-bitswap/internal/decision.(*blockstoreManager).worker
0.51MB 0.0017% 94.49% 551.07MB 1.82% main.(*ContentManager).reBuildStagingZones
0.50MB 0.0017% 94.50% 161.84MB 0.54% github.com/lucas-clemente/quic-go.(*session).run
(pprof)
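For reference, a heap snapshot like the one above can be captured by exposing Go's `net/http/pprof` handlers and pulling the heap endpoint with `go tool pprof`. This is only a minimal sketch of that setup; the listen address and port are assumptions, not Estuary's actual debug configuration.

```go
// Minimal sketch of exposing pprof endpoints for heap profiling.
// The listen address is an assumption; the real debug endpoint may differ.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
	// Serve the pprof endpoints on a side port so they don't interfere
	// with the application's own HTTP listeners.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... rest of the application ...
	select {}
}
```

With that in place, a snapshot such as the one above can be taken with `go tool pprof -inuse_space http://localhost:6060/debug/pprof/heap` and inspected with `top 20`.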
This single allocation in pinContentOnShuttle accounts for ~3GB:
2.96GB 2.96GB 309: op := &pinner.PinningOperation{
. . 310: ContId: cont.ID,
. . 311: UserId: cont.UserID,
. . 312: Obj: cont.Cid.CID,
. . 313: Name: cont.Name,
. . 314: Peers: peers,
. . 315: Started: cont.CreatedAt,
. . 316: Status: types.PinningStatusQueued,
. . 317: Replace: replaceID,
. . 318: Location: handle,
. . 319: MakeDeal: makeDeal,
. . 320: Meta: cont.PinMeta,
. . 321: }
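One direction worth exploring (my suggestion, not something already in the code): the top two entries (reflect.unsafe_NewArray and pgtype scanPlanString.Scan) together with this allocation suggest that a large number of content rows are loaded and turned into PinningOperation values in a single pass. Below is a rough sketch of bounding that by streaming the rows in batches with GORM's FindInBatches; the Content struct, queuePinningOperation helper, the WHERE clause, and the batch size are all placeholders, not Estuary's actual identifiers.

```go
// Hypothetical sketch: stream content rows in bounded batches instead of
// loading them all into memory before building pinning operations.
// Content, queuePinningOperation and the batch size are illustrative only.
package main

import (
	"gorm.io/gorm"
)

type Content struct {
	ID     uint
	UserID uint
	Name   string
	// ... other columns ...
}

func queuePinningOperation(c Content) {
	// placeholder for building and enqueueing a *pinner.PinningOperation
}

func enqueuePending(db *gorm.DB) error {
	var batch []Content
	// FindInBatches keeps at most batchSize rows (and the operations
	// derived from them) in memory at a time, bounding the allocation spike.
	const batchSize = 500
	return db.Where("pinning = ? AND active = ?", true, false).
		FindInBatches(&batch, batchSize, func(tx *gorm.DB, n int) error {
			for _, c := range batch {
				queuePinningOperation(c)
			}
			return nil
		}).Error
}
```

Whether this fits depends on how pinContentOnShuttle and reBuildStagingZones actually fetch their rows, so treat it as a starting point for the investigation rather than a fix.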
This is amazing @alvin-reyes! Definitely on board for doing this.