gcpp
Array destructor
I've implemented the array destructor storage compaction, with support for merging and splitting, per the comment at deferred_heap.h:77.
Although I've implemented merging for the case where a new object or array is added immediately after another array (by incrementing that array's count instead of pushing another array_destructor), deferred_heap never allocates two blocks consecutively (except for the starting block in the gpage), so that case never actually occurs.
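To make the merge concrete, here is a minimal sketch of the count-bumping idea. The names and layout (`array_destructor`, `destructor_list`, `store`) are mine for illustration, not gcpp's actual types: registering an object that starts exactly where the previous run of the same type ends extends that run instead of appending a new record.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical record of a run of contiguous objects sharing a destructor.
struct array_destructor {
    const std::byte* start;         // first object of the run
    std::size_t      count;         // number of contiguous objects
    std::size_t      size;          // byte size of each object
    void (*destroy)(const void*);   // type-erased destructor call
};

struct destructor_list {
    std::vector<array_destructor> dtors;

    void store(const std::byte* p, std::size_t size,
               void (*destroy)(const void*)) {
        if (!dtors.empty()) {
            array_destructor& last = dtors.back();
            // Adjacent in memory and same type: merge by bumping count.
            if (last.destroy == destroy && last.size == size &&
                p == last.start + last.count * last.size) {
                ++last.count;
                return;
            }
        }
        dtors.push_back({p, 1, size, destroy});
    }
};
```

Splitting would be the inverse: when one object in the middle of a run is destroyed early, the record is cut into two runs around it.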
Thanks, I'll take a look. Note that it should be exercised for vector<T, deferred_allocator<T>>.
@hsutter, the registration of destructors is done in deferred_heap::construct and deferred_heap::construct_array, for single objects and arrays respectively. However, the deferred_allocator can only call the first one, since the allocator interface doesn't have a construct method for array construction.
So, right now, deferred_vector (or anything else that uses the deferred_allocator) cannot benefit from the array_destructor optimization. I think there are two options:
- Either we can represent the single object destructors as an `array_destructor` with count 1.
- Or, since `deferred_heap` can track whether an object at an address was allocated as part of an array allocation or a single object allocation, it can give a hint to `destructors::store` so that we can try to compact two single object destructors into an array destructor.
I think option 1 would be space- and time-inefficient, since we'd have many more array_destructors to search through during compaction, plus two redundant size_ts per single object.
@FatihBAKIR Actually it is beneficial: the intended use case is vector<T, deferred_allocator<T>>. There, the vector gets memory for N objects and then individually constructs them in place during push_backs (and similar operations), with no padding between the T objects. That's the case where this would be useful: we could simply bump the count for consecutive objects when the vector performs a push_back, which keeps a push_back loop at O(N) instead of the O(N^2) it becomes today by storing individual destructors.
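A toy model of that cost difference (my own sketch, not gcpp code): registering N contiguous in-place constructions one record per object leaves N records to manage, whereas bumping the count of the last record collapses them into a single run, so each registration during a push_back loop stays O(1).

```cpp
#include <cstddef>
#include <vector>

// A run of `count` contiguous objects starting at abstract address `first`.
struct run {
    std::size_t first;
    std::size_t count;
};

// Register each address; with merging, an address adjacent to the end of
// the last run extends it instead of starting a new record.
std::vector<run> register_all(const std::vector<std::size_t>& addrs,
                              bool merge) {
    std::vector<run> records;
    for (std::size_t a : addrs) {
        if (merge && !records.empty() &&
            a == records.back().first + records.back().count) {
            ++records.back().count;     // adjacent: extend the run
        } else {
            records.push_back({a, 1});  // start a new run
        }
    }
    return records;
}
```

For a push_back loop over contiguous slots, the merged scheme ends with one record regardless of N; the unmerged scheme ends with N records, and any pass that scans them per insertion becomes quadratic overall.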
Thanks again for this PR. For now I'm waiting for feedback and bug reports from actual use of the library, and deferring enhancements and optimizations until then but keeping them in the backlog.