widget: annotation overlays
### Problem to solve
It is valuable to be able to annotate over video streams, imaging sonar, and various other environment visualisation widgets, without embedding those annotations into the source stream (which typically requires extra processing power and/or redundant functionality, and can corrupt the source data for later analysis).
### Desired approach
An annotation widget with an API for being sent a stream of points, lines, polygons, circles, ellipses, and text, defined with relative coordinates (e.g. 0-100% for widget width/height, or perhaps -1 to +1, oriented around the widget's center).
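To illustrate the two candidate coordinate conventions, here is a minimal sketch of mapping relative coordinates onto widget pixels. All names here (`RelativeCoordMode`, `toPixels`, the `+y`-upwards choice for the centred convention) are illustrative assumptions, not an existing Cockpit API:

```typescript
// Hypothetical sketch: converting relative annotation coordinates into
// widget pixel coordinates, for both conventions mentioned above.
type RelativeCoordMode = 'percent' | 'centered'

interface WidgetSize {
  width: number
  height: number
}

/** Map a relative (x, y) pair onto widget pixel coordinates. */
function toPixels(
  x: number,
  y: number,
  size: WidgetSize,
  mode: RelativeCoordMode
): { px: number; py: number } {
  if (mode === 'percent') {
    // 0-100% of widget width/height, origin at the top-left corner
    return { px: (x / 100) * size.width, py: (y / 100) * size.height }
  }
  // -1 to +1, origin at the widget centre, +y pointing upwards (assumed)
  return {
    px: ((x + 1) / 2) * size.width,
    py: ((1 - y) / 2) * size.height,
  }
}
```

Either convention keeps annotations resolution-independent, so the same message renders correctly whether the widget is fullscreen or a small tile.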
The annotation syntax should include object IDs, so that subsequent messages for an ID can delete, move, or redefine it, and so that multiple annotations can be stacked and managed independently in a straightforward manner. The widget itself needs an ID or name it can be referred to by (not necessarily unique, in case the same overlay should be used in multiple places), so an annotation source (e.g. another Cockpit widget, or a BlueOS Extension) knows where to send its annotations.
It makes sense for annotations to have a default but overridable colour, and we may wish to support non-text objects having an associated label, which could be automatically positioned and sized to be visible and minimally overlapping with other objects. Dedicated text annotations should support specifying a font size, and potentially common font adjustments like bold/italics and font family. Points and lines should ideally have specifiable widths, and shapes should have a fill opacity parameter (perhaps optional, defaulting to 0).
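Pulling the two paragraphs above together, a possible message schema and widget-side store might look like the following. Every type, field, and operation name here is an assumption for illustration, not a committed design:

```typescript
// Hypothetical schema for the ID-based annotation lifecycle described above:
// a "set" message creates or redefines an annotation, "delete" removes one,
// and "clear" empties the overlay.
interface AnnotationStyle {
  colour?: string       // default colour, overridable per annotation
  lineWidth?: number    // for points and lines
  fillOpacity?: number  // for shapes; assumed to default to 0 (outline only)
  fontSize?: number     // for text annotations
  bold?: boolean
  italic?: boolean
  fontFamily?: string
}

interface Annotation {
  id: string            // stable ID so later messages can move/redefine/delete it
  type: 'point' | 'line' | 'polygon' | 'circle' | 'ellipse' | 'text'
  points: [number, number][]  // relative coordinates, interpreted per type
  text?: string         // label for non-text objects, content for text ones
  style?: AnnotationStyle
}

type AnnotationMessage =
  | { widget: string; op: 'set'; annotation: Annotation }
  | { widget: string; op: 'delete'; id: string }
  | { widget: string; op: 'clear' }

class AnnotationStore {
  private annotations = new Map<string, Annotation>()

  handle(msg: AnnotationMessage): void {
    switch (msg.op) {
      case 'set':
        // Upsert: a repeated ID replaces the existing annotation
        this.annotations.set(msg.annotation.id, msg.annotation)
        break
      case 'delete':
        this.annotations.delete(msg.id)
        break
      case 'clear':
        this.annotations.clear()
        break
    }
  }

  list(): Annotation[] {
    return [...this.annotations.values()]
  }
}
```

Treating "set" as an upsert keeps the protocol stateless for senders: a source can re-send an annotation at any time to move or restyle it, without first checking whether it already exists.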
NOTE: It is somewhat natural to conceptually expand this idea to maps, but the implementation would need to be independent and would likely have minimal overlap, because it would need to be provided as a custom map layer (mentioned in #84), which the user could choose to enable on any given map widget.
Ideally, received annotations would also be recorded and (at user discretion) included in the generated subtitle files (as an extension of #450) that can be replayed alongside recorded videos. The SubStation Alpha syntax allows shape definitions, so this could be reasonably straightforward, and it avoids users needing to bake their annotations into their video files.
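As a rough sketch of the subtitle idea: ASS dialogue text can enter drawing mode with the `\p1` override tag, accept `m` (move) and `l` (line) commands in script-resolution pixels, and exit with `\p0`. The function below converts a relative-coordinate polygon into such a drawing string; the function name and the 1920x1080 play resolution are assumptions for illustration:

```typescript
// Hypothetical sketch: rendering a polygon annotation as a SubStation Alpha
// (ASS) drawing command, suitable for embedding in a Dialogue line's text.
function polygonToAssDrawing(
  points: [number, number][],   // relative 0-100% coordinates
  playResX = 1920,              // assumed ASS PlayResX
  playResY = 1080               // assumed ASS PlayResY
): string {
  // Scale relative coordinates to the script resolution
  const px = points.map(([x, y]) => [
    Math.round((x / 100) * playResX),
    Math.round((y / 100) * playResY),
  ])
  const [first, ...rest] = px
  const lines = rest.map(([x, y]) => `l ${x} ${y}`).join(' ')
  // \p1 enters drawing mode, \p0 exits it
  return `{\\p1}m ${first[0]} ${first[1]} ${lines}{\\p0}`
}
```

A recorder could emit one such Dialogue event per annotation lifetime, with start/end timestamps taken from the "set" and "delete" messages.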
We could also consider allowing snapshots for a video stream to optionally bake annotations into the captured images, to enable them to more easily be used in reports and academic papers and the like.
### Additional context
Video annotations have been discussed multiple times on the forum, including here and here, and more tangentially mentioned here and here.
This issue is vaguely related to #1525, but that issue is focused on turning normal widgets into overlays, whereas this one is focused on a mechanism for generic annotation overlays.
### Prerequisites
- [x] I have checked to make sure that a similar request has not already been filed or fixed.