Image Formatting Pipelines
Add support for image formatting pipelines to the Desktop version of DIVE.
The code that currently runs pipelines is located here: https://github.com/Kitware/dive/blob/main/client/platform/desktop/backend/native/viame.ts#L34
We need to modify the filter for the pipeline list to allow these other types of pipelines. The pipeline list is retrieved here: https://github.com/Kitware/dive/blob/main/client/platform/desktop/backend/native/common.ts#L345 This would probably have to be modified to provide a new category of pipelines for 'image modifications', or something similarly named.
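As a minimal sketch of that change (assuming hypothetical `Pipe`/`Pipelines` shapes and category names; the real structures and filtering code live around the linked line in common.ts), the new category could be derived from the .pipe file name:

```typescript
import path from 'path';

// Hypothetical shapes; not the actual DIVE/apispec types.
interface Pipe {
  name: string;
  pipe: string; // file name, e.g. 'filter_enhance.pipe'
  type: string; // category shown in the UI dropdown
}

type Pipelines = Record<string, { pipes: Pipe[]; description: string }>;

/** Derive a UI category from a .pipe file name. */
function categoryFor(pipeFile: string): string | null {
  const base = path.basename(pipeFile, '.pipe');
  if (base.startsWith('filter_')) return 'image filter'; // new category
  if (base.startsWith('transcode_')) return 'transcode'; // new category
  if (base.startsWith('detector_')) return 'detector';   // existing
  if (base.startsWith('tracker_')) return 'tracker';     // existing
  return null; // anything else stays hidden from the list
}

function addToPipelineList(pipelines: Pipelines, pipeFile: string) {
  const type = categoryFor(pipeFile);
  if (type === null) return;
  if (!(type in pipelines)) {
    pipelines[type] = { pipes: [], description: '' };
  }
  pipelines[type].pipes.push({
    name: path.basename(pipeFile, '.pipe').replace(/_/g, ' '),
    pipe: pipeFile,
    type,
  });
}
```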
The process of running these pipelines will no longer require annotations; it will take only an image sequence and perform modifications. The output logic may differ slightly from the existing pipelines, but for now just allow them to output files to the processing directory. Similar to current pipelines, you can open the output directory to see what is created. This is called the workdir in the job interface.
For the initial version, just worry about getting the processed images out. In the future we may want to modify it so that it creates a new dataset with the updated images and imports it into the system, but start with the basic way of getting output first.
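A rough sketch of that first-pass flow; the function name, its parameters, and the `kwiver runner` / `input:video_filename` invocation are assumptions modeled on how the existing detector pipelines are launched, not the actual viame.ts code:

```typescript
import { spawn } from 'child_process';
import fs from 'fs-extra';

async function runImageFilterPipeline(
  kwiverExecutable: string, // path to the kwiver binary (assumption about the install layout)
  pipelinePath: string,     // e.g. .../configs/pipelines/filter_enhance.pipe
  imageListFile: string,    // plain-text list of the dataset's images
  jobWorkDir: string,       // per-job processing directory ("workdir" in the job UI)
): Promise<string> {
  await fs.ensureDir(jobWorkDir);
  const args = [
    'runner', pipelinePath,
    // Inputs are the same as for detector/tracker pipelines; the image-list
    // reader is assumed to be configured through input:video_filename.
    '-s', `input:video_filename=${imageListFile}`,
  ];
  return new Promise((resolve, reject) => {
    // The VIAME environment setup that DIVE normally performs before invoking
    // kwiver is omitted here. cwd is set to the workdir so that the pipeline's
    // default (relative) output paths land in that directory.
    const job = spawn(kwiverExecutable, args, { cwd: jobWorkDir });
    job.on('exit', (code) => (
      code === 0 ? resolve(jobWorkDir) : reject(new Error(`kwiver exited with code ${code}`))
    ));
  });
}
```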
@mattdawkins Can we get a list of image formatting pipelines you would like to see in DIVE Desktop? Any examples would be helpful as well.
All pipelines which filter images or videos and produce new images follow the filter_*.pipe naming format in the VIAME pipelines folder (https://github.com/VIAME/VIAME/tree/main/configs/pipelines); two of importance are filter_enhance and filter_draw_dets.
There are also transcode pipelines, given by transcode_*.pipe; these produce video files as output instead of images, but they are also relevant and related here.
@mattdawkins - can you provide an example of CLI commands for running some of the pipelines you want to be implemented?
Inputs:
- Same as detectors or any other pipes for both videos and image inputs
- Similar to some of the utilities pipelines, a subset of pipelines may also take input detections in the same format as those (e.g. draw detections on images)
Outputs for filter_*:
-s image_writer:file_name_template=frame%06d.png // <-- string for image names; will be overwritten by the actual names if the input is an image list, so I recommend just not editing this
-s image_writer:file_name_prefix=/link/to/some/folder/ // <-- prefix which can be a folder path, a folder + string to prepend to all filenames, or just a prefix string; I recommend setting this to the output folder
Outputs for transcode_*:
-s video_writer:video_filename=/full/path/to/video.mp4 // <-- recommend setting this to include the folder and filename; the filename could just be the dataset name or some default string in the writer folder, these aren't as important
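Based on the settings above, a hedged sketch of how DIVE could pick the right writer override per pipeline type; `outputOverrides`, `workDir`, and `datasetName` are hypothetical names, and the returned arguments would be appended to whatever kwiver command DIVE already builds:

```typescript
import path from 'path';

function outputOverrides(pipeFile: string, workDir: string, datasetName: string): string[] {
  const base = path.basename(pipeFile, '.pipe');
  if (base.startsWith('filter_')) {
    // Leave image_writer:file_name_template at its default (it is replaced by the
    // source image names when the input is an image list); only set the prefix so
    // the processed frames are written into the job folder.
    return ['-s', `image_writer:file_name_prefix=${workDir}${path.sep}`];
  }
  if (base.startsWith('transcode_')) {
    // Full path including the filename; the dataset name is a reasonable default.
    return ['-s', `video_writer:video_filename=${path.join(workDir, `${datasetName}.mp4`)}`];
  }
  return [];
}
```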
Looks like #1518 already allow-listed the filter_* and transcode_* pipelines for selection. I assume work still needs to be done here to save the output of these pipelines to the correct job folder? @BryonLewis
Yeah, I don't remember writing anything to support those pipelines. Ideally they would create a new dataset. This may bring up options we would need, like the name of the new dataset.
Are there already pipelines that require additional input from a user (other than just selecting the pipeline/dataset)? Or is this a completely new idea?
There is the ability to provide some inputs to training, e.g. another inputlist.txt file or fine-tuned models.