Refactor Sorted Annotations
A problem involving a large dataset (~130,000 tracks), each track carrying a large amount of sub-data (multiple polygon annotations per track), was crashing the application during loading.
I narrowed the problem down to the `baseAnnotationStore` (the basis for both the `trackStore` and the `groupStore`), which has a computed property called `sorted` that was passing around a direct copy of each Track. The `sorted` property was used to display tracks in the track list, the event timeline, and several other areas, and each consumer might build yet another computed property on top of this list, requiring a `forEach` or `map` over all of the values.
Initially I changed the `sorted` computed property to skip sorting and track creation until `setEnableSorting` is called. For track lists with fewer than 20,000 tracks it is enabled by default; for larger lists it is enabled once all cameras are loaded and ready. The intermediate re-sorting that ran as each group of tracks was added was slowing down the loading process.
I then decided the remedy was to modify the `sorted` computed property to return only the data inside Tracks/Groups that is actually needed: the `id`, `begin`, `end`, `confidencePairs`, and a small function `getType` that returns the current type of the annotation, or `'unknown'`. I created a new type called `SortedAnnotation` that exposes only this data.
This prevents the bounds, geometry, attributes, features, and all of the other heavy data from being copied and passed around.
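A rough sketch of what that lightweight type and its construction might look like, assuming only the fields named above (the real `SortedAnnotation` lives in the dive source and may differ; `toSortedAnnotation` is an illustrative helper, not an actual function in the codebase):

```typescript
// A confidence pair is a (type name, confidence score) tuple.
type ConfidencePair = [string, number];

// Lightweight view of a Track/Group: only the fields the track list and
// event timeline actually need, per the text above.
interface SortedAnnotation {
  id: string;
  begin: number;
  end: number;
  confidencePairs: ConfidencePair[];
  getType: () => string;
}

// Hypothetical helper: project a full track down to a SortedAnnotation.
// Heavy fields (bounds, geometry, attributes, features) are deliberately
// not copied.
function toSortedAnnotation(track: {
  id: string;
  begin: number;
  end: number;
  confidencePairs: ConfidencePair[];
}): SortedAnnotation {
  const { id, begin, end, confidencePairs } = track;
  return {
    id,
    begin,
    end,
    confidencePairs,
    // Current type is the highest-confidence pair's name, or 'unknown'.
    getType: () => (confidencePairs.length ? confidencePairs[0][0] : 'unknown'),
  };
}
```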
`AnnotationWithContext` is another type that was based on `sorted`, so everything that used `AnnotationWithContext` needed to be modified to use `SortedAnnotation[]` instead.
The `BaseFilterControl` used two functions, `setType` and `removeTypes`, on the tracks themselves when they were being edited. I moved those functions into the root `cameraStore` and provided them to the `BaseFilterControl`. Note that I had to wrap those functions in Viewer.vue to maintain the proper `this` context; you can see in Viewer.vue that this is done for the `remove` function as well.
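The `this`-context issue can be illustrated in isolation. This is a minimal sketch with hypothetical names (`CameraStore`, `removedIds`), not the actual Viewer.vue code:

```typescript
// Extracting a class method detaches it from its instance, so `this` is
// undefined when the function is later invoked without a receiver. Wrapping
// the call in an arrow function keeps the store as the receiver.
class CameraStore {
  removedIds: string[] = [];

  remove(id: string) {
    // `this` must point at the store instance for this push to work.
    this.removedIds.push(id);
  }
}

const store = new CameraStore();

// Unsafe: `const detached = store.remove; detached('x')` would throw,
// because `this` is undefined inside the detached call.
// Safe: the wrapper preserves `store` as the receiver.
const wrappedRemove = (id: string) => store.remove(id);
```

Passing `wrappedRemove` down to a child component behaves correctly, while passing `store.remove` directly would not.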
`useEventChart` had a section in its computed value where it would gather `markers`, indications of whether a feature exists on a specific frame. It happened to grab markers for all of the track data, but markers are only used when a track is selected for editing. I refactored this to use `getTracksMerged` from the `cameraStore` on selected tracks only, which greatly reduces the amount of information being passed around in `useEventChart`.
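The shape of that refactor, sketched with simplified types (the `Feature`, `Track`, and `markersForSelection` names here are illustrative assumptions; the real code calls `cameraStore.getTracksMerged` inside a Vue computed):

```typescript
// Marker computation restricted to the selected tracks: only selected
// tracks pay the cost of walking their feature lists.
interface Feature {
  frame: number;
}

interface Track {
  id: string;
  features: Feature[];
}

function markersForSelection(
  selectedIds: string[],
  getTracksMerged: (id: string) => Track,
): Map<string, number[]> {
  const markers = new Map<string, number[]>();
  for (const id of selectedIds) {
    const track = getTracksMerged(id);
    // A marker per frame on which the track has a feature.
    markers.set(id, track.features.map((f) => f.frame));
  }
  return markers;
}
```

Before the refactor the equivalent loop ran over every track; with ~130,000 tracks that dominated the computed value's cost.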
I changed the commonly referenced `getType` to return the type string instead of the confidence pair itself. That's why in some locations I removed the trailing `[0]`, so calls went from `getType()[0]` to `getType()`.
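The call-site change can be shown side by side. Both function shapes below are illustrative reconstructions from the description, not copies of the real signatures:

```typescript
type ConfidencePair = [string, number];

// Old behavior (illustrative): return the whole top confidence pair, so
// call sites had to index into it with [0] to get the type name.
const getTypeOld = (pairs: ConfidencePair[]): ConfidencePair =>
  pairs.length ? pairs[0] : ['unknown', 0];

// New behavior: return just the type string, with the 'unknown' fallback
// baked in, so call sites drop the trailing [0].
const getTypeNew = (pairs: ConfidencePair[]): string =>
  pairs.length ? pairs[0][0] : 'unknown';

// Call sites:
//   old: const type = getTypeOld(pairs)[0];
//   new: const type = getTypeNew(pairs);
```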
Finally, the tests have been updated to reflect the new organization.