Support the cutoff training regime
How about aiding users with cutoff training, in addition to finetuning and training from scratch?
We already have START_MODEL and NET_SPEC, so IIUC we would minimally only need one additional variable, say KEEP_LAYERS, corresponding to the --append_index parameter. If it is undefined, everything is kept (i.e. START_MODEL or NET_SPEC is reused as-is, which amounts to finetuning or training from scratch). If it is defined, the training rule would pass --append_index $(KEEP_LAYERS), keeping only that many layers of START_MODEL and then appending the new layers from NET_SPEC.
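A minimal sketch of what this could look like in the Makefile — note that KEEP_LAYERS is the proposed (not yet existing) variable, and the training rule is abridged; the actual lstmtraining invocation in tesstrain has more arguments:

```makefile
# Hypothetical sketch: KEEP_LAYERS is the proposed new variable.
# START_MODEL and NET_SPEC already exist in tesstrain.

# Only pass --append_index when KEEP_LAYERS is defined; otherwise the
# invocation is unchanged (plain finetuning or from-scratch training).
ifdef KEEP_LAYERS
APPEND_INDEX := --append_index $(KEEP_LAYERS)
endif

# Abridged training rule for illustration only:
#	lstmtraining \
#	  --continue_from $(START_MODEL) \
#	  $(APPEND_INDEX) \
#	  --net_spec "$(NET_SPEC)" \
#	  ...
```

With this shape, existing invocations keep working unchanged, and cutoff training becomes e.g. `make training KEEP_LAYERS=5 NET_SPEC='...'`.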
Of course, additional facilities could be offered too, such as extracting a traineddata's net spec, or preconfigured KEEP_LAYERS/NET_SPEC settings for typical scenarios.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.