docs/feature: Nested ReturnToVTPool()
Because ReturnToVTPool() doesn't recursively do the same for applicable field members, it's easy to make a nasty mistake:
message Foo {
  option (vtproto.mempool) = true;
  repeated Bar bars = 1;
  repeated Baz bazs = 2;
}

message Bar {
  option (vtproto.mempool) = true;
}

message Baz {
  option (vtproto.mempool) = true;
}
for _, mm := range foo.Bars {
    mm.ReturnToVTPool()
}
for _, mm := range foo.Bazs {
    mm.ReturnToVTPool()
}
// Whoops — this does ResetVT() on things that may have been picked up from the pool already.
foo.ReturnToVTPool()
Rearranging the order doesn't help, because the parent's slices get reset. The only ways to do this safely at the moment are:
- Copy the slices first, return the parent, then return the children (there's a sketch of this after the list).
- Do defer mm.ReturnToVTPool() inside the loops.
- Attach finalizers to the child messages that return them to the pool, release only the parent, and rely on best-effort release.
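A minimal sketch of the first workaround (my own code, not generated API), using the Foo/Bar/Baz messages above: copy the child pointers into fresh slices so the parent's reset can't reach them through its own backing arrays, return the parent, then return the children.

bars := append([]*Bar(nil), foo.Bars...) // fresh backing arrays, detached from foo
bazs := append([]*Baz(nil), foo.Bazs...)

foo.ReturnToVTPool() // safe: at this point we still exclusively own the children it resets

for _, mm := range bars {
    mm.ReturnToVTPool()
}
for _, mm := range bazs {
    mm.ReturnToVTPool()
}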
Leaving it up to you all whether this calls for code changes or just clearer documentation; I'm not sure if the current behavior is intentional, for the sake of explicitness when dealing with pooled objects. But having some way of releasing recursively would make this much less error-prone.
Ah, I see this is mentioned in #8 and #63.
@nockty: [We] need further information to tackle this issue. Namely, a benchmark showing that individually pooling objects can be more performant in certain scenarios.
Not sure if it helps, but in my use case I can't decode the entire object at once.
The child objects reside as bytes in a ton of BigTable cells. The parent is only initialized after the children have finished decoding in full. Pooling the children makes sense for us because these decodes have to be done separately as each result is read.
The best place to release everything is once the parent is no longer needed, which is why a recursive return would be ideal for us.
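Roughly, our decode path looks like the sketch below. This assumes the FromVTPool constructors and UnmarshalVT generated by the mempool and unmarshal features; decodeFoo and cells are just stand-ins for our real code and the bytes read out of BigTable.

func decodeFoo(cells [][]byte) (*Foo, error) {
    foo := FooFromVTPool()
    for _, cell := range cells {
        bar := BarFromVTPool()
        if err := bar.UnmarshalVT(cell); err != nil {
            bar.ReturnToVTPool()
            // The bars already appended (and foo itself) also need releasing here;
            // that's exactly the bookkeeping a recursive return would remove.
            return nil, err
        }
        foo.Bars = append(foo.Bars, bar)
    }
    return foo, nil
}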
The finalizers approach is IMO out of the question because they perform horribly in Go. I do agree this is a bit of a footgun in the pooling API, and clarifying the documentation may not be enough. I think the recursive release would be the safest API we can provide, but I get the feeling it will require quite a bit of code. :sweat:
To clarify, I wasn't suggesting those steps as part of the API. Just the only levers users currently have available. :)
Hi @vmg, I'm also interested in this topic. Could you please help me clear up two small doubts?
- Does the phrase "recursive release" mean generating code like this?
func (m *Foo) ResetVT() {
    for _, mm := range m.Bars {
        // mm.ResetVT()
        mm.ReturnToVTPool() // if mm is poolable, use ReturnToVTPool instead of ResetVT
    }
    f0 := m.Bars[:0]
    m.Reset()
    m.Bars = f0
}
- Will pooling for map types be supported in the future? I think it could bring benefits similar to reusing slices:
message Foo {
  option (vtproto.mempool) = true;
  map<int64, Bar> barsMap = 1;
}

message Bar {
  option (vtproto.mempool) = true;
}
func (m *Foo) ResetVT() {
    keysToDelete := []int64{}
    for key, value := range m.BarsMap {
        value.ReturnToVTPool() // only put the values in sync.Pool
        keysToDelete = append(keysToDelete, key)
    }
    for _, key := range keysToDelete {
        delete(m.BarsMap, key)
    }
    f0 := m.BarsMap
    m.Reset()
    m.BarsMap = f0
}
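Side note on the sketch above (my observation, not a proposal for specific generated code): Go allows deleting map entries while ranging over the map, so the keysToDelete slice isn't strictly needed, and keeping the map header across Reset() already lets its allocated buckets be reused on the next fill. A slightly simpler variant:

func (m *Foo) ResetVT() {
    for key, value := range m.BarsMap {
        value.ReturnToVTPool() // pool the values; the map itself is kept for reuse
        delete(m.BarsMap, key)
    }
    f0 := m.BarsMap
    m.Reset()
    m.BarsMap = f0
}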