ebpf
cache size of structs when finding size of slice of structs
#1255
In the current Go binary.Size() implementation, the sizes of struct types are cached so that subsequent calls don't repeat the reflection-based size computation for the same type, but the cache is not used when sizing a slice of structs. As a result, there is an allocation every time the size of a slice of structs is computed. See https://github.com/golang/go/issues/2320.
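For illustration, here is a minimal sketch of the caching idea in Go. The helper and its name are hypothetical and not the actual patch; it memoizes the per-element size of a struct type and multiplies it by the slice length, so the reflection walk happens at most once per struct type:

```go
package sysenc

import (
	"encoding/binary"
	"reflect"
	"sync"
)

// structSizeCache memoizes the encoded size of struct types so that
// sizing a slice of structs does not re-walk the struct via reflection.
// Hypothetical helper for illustration only.
var structSizeCache sync.Map // map[reflect.Type]int

// dataSize returns the encoded size of v, caching the element size
// when v is a slice of structs.
func dataSize(v reflect.Value) int {
	t := v.Type()
	if t.Kind() == reflect.Slice && t.Elem().Kind() == reflect.Struct {
		elem := t.Elem()
		if size, ok := structSizeCache.Load(elem); ok {
			return size.(int) * v.Len()
		}
		// Compute the element size once via binary.Size on a zero value.
		size := binary.Size(reflect.New(elem).Elem().Interface())
		if size < 0 {
			return -1
		}
		structSizeCache.Store(elem, size)
		return size * v.Len()
	}
	return binary.Size(v.Interface())
}
```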
                                      │ unmarshal-new.txt │        unmarshal-old.txt        │
                                      │     allocs/op     │  allocs/op    vs base           │
Unmarshal/[]sysenc.explicitPad-8          0.000 ± ∞ ¹        1.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Unmarshal/[]sysenc.explicitPad#01-8       0.000 ± ∞ ¹        1.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Unmarshal/[]sysenc.explicitPad#02-8       0.000 ± ∞ ¹        1.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Unmarshal/[]sysenc.struc-8                0.000 ± ∞ ¹        2.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Unmarshal/[]sysenc.struc#01-8             1.000 ± ∞ ¹        5.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Unmarshal/[]sysenc.struc#02-8             1.000 ± ∞ ¹        5.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
                                      │ marshal-new.txt │         marshal-old.txt         │
                                      │    allocs/op    │  allocs/op    vs base           │
Marshal/[]sysenc.explicitPad-8            0.000 ± ∞ ¹      1.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Marshal/[]sysenc.explicitPad#01-8         0.000 ± ∞ ¹      1.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Marshal/[]sysenc.explicitPad#02-8         0.000 ± ∞ ¹      1.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Marshal/[]sysenc.struc-8                  0.000 ± ∞ ¹      2.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Marshal/[]sysenc.struc#01-8               2.000 ± ∞ ¹      6.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Marshal/[]sysenc.struc#02-8               2.000 ± ∞ ¹      6.000 ± ∞ ¹  ~ (p=1.000 n=1) ³
Hi @kwakubiney, do you have any statistics on practical memory and CPU time savings? The numbers you posted only show allocation counts, which doesn't say much on its own.
I'm not sure we want to carry a fork of the binary package. I can't see exactly what you've changed to achieve this improvement since it's all in one commit, but @lmb made similar changes upstream to speed up binary.Write, iirc. If it's a straightforward patch, we can help you get some eyes on the CL so it gets merged faster, and it should ship in the next minor Go release. IMO this is not critical enough to justify carrying a fork of the binary package (it would end up in importers' vendor/ directories for a while, etc.).
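As an aside, memory and CPU time savings can be measured with a Go benchmark run under -benchmem and compared with benchstat. A minimal sketch under assumed names (the sample struct and benchmark below are hypothetical, not the repository's own benchmarks):

```go
package sysenc

import (
	"bytes"
	"encoding/binary"
	"testing"
)

// sample is a hypothetical fixed-size struct used only for this sketch.
type sample struct {
	A uint32
	B uint64
	C [4]byte
}

// BenchmarkMarshalSliceOfStructs encodes a slice of structs repeatedly so
// that time, bytes, and allocations per operation can be compared.
func BenchmarkMarshalSliceOfStructs(b *testing.B) {
	data := make([]sample, 64)
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		var buf bytes.Buffer
		if err := binary.Write(&buf, binary.LittleEndian, data); err != nil {
			b.Fatal(err)
		}
	}
}
```

Running `go test -run '^$' -bench . -benchmem -count 10` before and after the change, then feeding the two outputs to `benchstat old.txt new.txt`, yields ns/op and B/op deltas alongside the allocs/op columns shown above.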
Appreciate the feedback! I've been thinking about it, and the concerns you and @florianl raised are valid; this isn't critical enough to justify moving away from the upstream encoding/binary package. I'll come up with more practical benchmarks soon. I'll also draft a CL and a proposal and share them with the team on Slack so input can come in before I post it on the Go issue tracker.
@kwakubiney That sounds great, thank you!
Issue filed upstream: https://github.com/golang/go/issues/66253
Edit: merged now.
Is it okay to close this now since the change has been merged upstream? @lmb
Yes, I think that is fine. Thank you for the upstream contribution!