
Training error in multi-animal top-down-model

pinjuu opened this issue 1 year ago • 9 comments

Bug description

When I try to train the model, this error occurs:

  File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1129, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1621, in predict_function  *
        return step_function(self, iterator)
    File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1611, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1604, in run_step  **
        outputs = model.predict_step(data)
    File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1572, in predict_step
        return self(x, training=False)
    File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None

    TypeError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).

    in user code:

        File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 4102, in call  *
            crop_output = self.centroid_crop(example)
        File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler  **
            raise e.with_traceback(filtered_tb) from None

        TypeError: Exception encountered when calling layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth).

        in user code:

            File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 772, in call  *
                crops = sleap.nn.peak_finding.crop_bboxes(full_imgs, bboxes, crop_sample_inds)
            File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\sleap\nn\peak_finding.py", line 173, in crop_bboxes  *
                image_height = tf.shape(images)[1]

            TypeError: Failed to convert elements of tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64)) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.


        Call arguments received:
          • example_gt={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}


    Call arguments received:
      • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
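
For context: the failing call is image_height = tf.shape(images)[1] in sleap.nn.peak_finding.crop_bboxes, which assumes a dense [batch, height, width, channels] tensor. The TypeError is what tf.shape raises on TF 2.7 when the batched frames arrive as a tf.RaggedTensor instead, i.e. when not every frame in the batch has the same size. A minimal sketch of that failure mode, with toy sizes rather than SLEAP code:

import tensorflow as tf

# Two "frames" whose heights differ; stacking them yields a RaggedTensor
# rather than a dense [batch, height, width, channels] tensor.
a = tf.zeros([4, 6, 1], dtype=tf.uint8)
b = tf.zeros([5, 6, 1], dtype=tf.uint8)
batch = tf.ragged.stack([a, b])  # height dimension is ragged

dense = batch.to_tensor()  # padding explicitly would make the batch dense

# On TF 2.7, tf.shape tries to densify the ragged batch and raises:
# TypeError: Failed to convert elements of tf.RaggedTensor(...) to Tensor.
image_height = tf.shape(batch)[1]

The error therefore suggests that the frames reaching this layer were not all resized or padded to one common size before batching.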

Expected behaviour

Successfully train the model.

Actual behaviour

Training does not complete due to the error

Your personal set up

SLEAP v1.3.3

pinjuu · Oct 23 '24 14:10

Hi @pinjuu,

I will just need some more information from you.

How did you install SLEAP?

Please provide the command you are using to get this error.

Thanks!

Elizabeth

eberrigan · Oct 24 '24 02:10

I installed SLEAP using the conda package.

Training configs:
{
  "_pipeline": "multi-animal top-down-id",
  "_ensure_channels": "",
  "outputs.run_name_prefix": "LBNcohort1_SIBody231024",
  "outputs.runs_folder": "C:/Users/spike/Desktop/sleap/SLEAP Projects\\models",
  "outputs.tags": "",
  "outputs.checkpointing.best_model": true,
  "outputs.checkpointing.latest_model": false,
  "outputs.checkpointing.final_model": false,
  "outputs.tensorboard.write_logs": false,
  "_save_viz": true,
  "_predict_frames": "suggested frames (1539 total frames)",
  "model.heads.centroid.sigma": 2.75,
  "model.heads.multi_class_topdown.confmaps.anchor_part": null,
  "model.heads.multi_class_topdown.confmaps.sigma": 5.0,
  "model.heads.centroid.anchor_part": null,
  "model.heads.centered_instance.anchor_part": null,
  "data.instance_cropping.center_on_part": null
}
{
  "data": {
    "labels": {
      "training_labels": "C:/Users/spike/Desktop/sleap/SLEAP Projects/SI_Cohort1_body.slp",
      "validation_labels": null,
      "validation_fraction": 0.1,
      "test_labels": null,
      "split_by_inds": false,
      "training_inds": [
        613,
        555,
        277,
        243,
        476,
        176,
        386,
        524,
        506,
        351,
        553,
        219,
        462,
        436,
        70,
        425,
        547,
        265,
        504,
        138,
        264,
        153,
        597,
        191,
        438,
        434,
        416,
        155,
        24,
        647,
        37,
        580,
        530,
        402,
        193,
        391,
        593,
        286,
        376,
        375,
        497,
        454,
        563,
        325,
        141,
        624,
        632,
        43,
        47,
        465,
        261,
        457,
        560,
        110,
        441,
        579,
        214,
        196,
        312,
        105,
        229,
        446,
        385,
        466,
        189,
        573,
        633,
        337,
        308,
        165,
        182,
        213,
        612,
        634,
        269,
        297,
        89,
        498,
        73,
        594,
        107,
        522,
        493,
        329,
        100,
        326,
        185,
        34,
        589,
        523,
        420,
        353,
        111,
        152,
        513,
        311,
        417,
        543,
        114,
        574,
        617,
        419,
        267,
        203,
        564,
        590,
        568,
        144,
        382,
        246,
        290,
        575,
        600,
        480,
        35,
        266,
        461,
        54,
        208,
        215,
        147,
        81,
        183,
        303,
        448,
        501,
        640,
        588,
        406,
        171,
        562,
        96,
        10,
        260,
        108,
        190,
        328,
        474,
        603,
        528,
        399,
        550,
        137,
        82,
        366,
        488,
        160,
        378,
        32,
        230,
        510,
        552,
        120,
        17,
        322,
        502,
        161,
        313,
        398,
        646,
        63,
        332,
        595,
        551,
        320,
        451,
        278,
        516,
        75,
        534,
        44,
        350,
        41,
        292,
        607,
        452,
        86,
        217,
        405,
        50,
        103,
        291,
        489,
        42,
        336,
        317,
        340,
        578,
        80,
        245,
        442,
        599,
        372,
        360,
        540,
        380,
        115,
        459,
        126,
        26,
        358,
        389,
        252,
        604,
        381,
        427,
        301,
        85,
        495,
        307,
        636,
        61,
        247,
        468,
        0,
        439,
        512,
        431,
        587,
        242,
        486,
        565,
        5,
        538,
        169,
        496,
        157,
        638,
        293,
        403,
        135,
        197,
        251,
        521,
        377,
        621,
        228,
        629,
        148,
        637,
        94,
        503,
        455,
        585,
        542,
        124,
        117,
        11,
        428,
        271,
        287,
        131,
        156,
        78,
        544,
        341,
        45,
        401,
        72,
        56,
        482,
        66,
        370,
        361,
        300,
        275,
        440,
        306,
        248,
        626,
        392,
        235,
        469,
        334,
        608,
        253,
        475,
        122,
        525,
        145,
        30,
        413,
        234,
        159,
        545,
        333,
        59,
        279,
        412,
        635,
        280,
        233,
        184,
        396,
        374,
        28,
        324,
        91,
        487,
        226,
        150,
        511,
        58,
        40,
        255,
        395,
        345,
        133,
        109,
        500,
        338,
        355,
        281,
        388,
        354,
        598,
        136,
        289,
        616,
        139,
        357,
        384,
        299,
        285,
        426,
        463,
        433,
        201,
        223,
        12,
        532,
        140,
        514,
        163,
        102,
        218,
        211,
        92,
        620,
        49,
        499,
        227,
        195,
        21,
        481,
        359,
        539,
        9,
        186,
        373,
        128,
        142,
        3,
        270,
        421,
        554,
        52,
        134,
        435,
        397,
        576,
        212,
        164,
        648,
        273,
        470,
        304,
        238,
        173,
        149,
        494,
        118,
        364,
        172,
        288,
        478,
        394,
        318,
        437,
        168,
        549,
        236,
        33,
        400,
        210,
        335,
        611,
        644,
        60,
        453,
        445,
        609,
        529,
        87,
        298,
        343,
        422,
        84,
        483,
        64,
        414,
        321,
        69,
        309,
        315,
        154,
        200,
        449,
        348,
        586,
        123,
        369,
        561,
        390,
        2,
        231,
        256,
        302,
        491,
        569,
        1,
        363,
        257,
        55,
        249,
        619,
        254,
        19,
        232,
        98,
        410,
        119,
        127,
        650,
        519,
        46,
        533,
        198,
        23,
        331,
        258,
        591,
        566,
        371,
        216,
        367,
        125,
        472,
        379,
        162,
        222,
        346,
        53,
        368,
        505,
        27,
        464,
        408,
        113,
        51,
        4,
        430,
        627,
        13,
        387,
        583,
        22,
        146,
        596,
        31,
        316,
        263,
        404,
        365,
        18,
        225,
        537,
        88,
        179,
        330,
        7,
        68,
        170,
        415,
        546,
        132,
        268,
        194,
        79,
        25,
        456,
        526,
        129,
        202,
        175,
        178,
        106,
        305,
        205,
        14,
        548,
        282,
        557,
        71,
        606,
        116,
        577,
        90,
        29,
        67,
        536,
        262,
        344,
        460,
        424,
        477,
        610,
        559,
        166,
        250,
        167,
        535,
        485,
        582,
        187,
        630,
        641,
        206,
        584,
        181,
        121,
        342,
        104,
        622,
        484,
        432,
        48,
        93,
        8,
        615,
        339,
        407,
        444,
        447,
        272,
        319,
        347,
        276,
        507,
        239,
        174,
        531,
        520,
        158,
        349,
        411,
        643,
        284,
        74,
        443,
        418,
        101,
        221,
        57,
        259,
        143,
        450,
        65,
        623,
        556,
        509,
        237,
        207,
        625,
        605,
        83,
        572,
        515,
        130,
        356,
        490,
        6,
        645,
        151,
        492,
        112
      ],
      "validation_inds": [
        296,
        62,
        558,
        180,
        508,
        244,
        649,
        383,
        628,
        473,
        479,
        467,
        15,
        527,
        631,
        517,
        95,
        352,
        97,
        541,
        220,
        362,
        16,
        294,
        274,
        639,
        458,
        471,
        518,
        423,
        177,
        295,
        567,
        310,
        76,
        77,
        283,
        571,
        592,
        602,
        614,
        209,
        20,
        99,
        323,
        570,
        199,
        39,
        240,
        642,
        327,
        241,
        314,
        224,
        204,
        188,
        192,
        36,
        393,
        581,
        38,
        601,
        429,
        618,
        409
      ],
      "test_inds": null,
      "search_path_hints": [
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        "",
        ""
      ],
      "skeletons": []
    },
    "preprocessing": {
      "ensure_rgb": false,
      "ensure_grayscale": false,
      "imagenet_mode": null,
      "input_scaling": 0.5,
      "pad_to_stride": 16,
      "resize_and_pad_to_target": true,
      "target_height": 1024,
      "target_width": 1280
    },
    "instance_cropping": {
      "center_on_part": null,
      "crop_size": null,
      "crop_size_detection_padding": 16
    }
  },
  "model": {
    "backbone": {
      "leap": null,
      "unet": {
        "stem_stride": null,
        "max_stride": 16,
        "output_stride": 2,
        "filters": 16,
        "filters_rate": 2.0,
        "middle_block": true,
        "up_interpolate": true,
        "stacks": 1
      },
      "hourglass": null,
      "resnet": null,
      "pretrained_encoder": null
    },
    "heads": {
      "single_instance": null,
      "centroid": {
        "anchor_part": null,
        "sigma": 2.75,
        "output_stride": 2,
        "loss_weight": 1.0,
        "offset_refinement": false
      },
      "centered_instance": null,
      "multi_instance": null,
      "multi_class_bottomup": null,
      "multi_class_topdown": null
    },
    "base_checkpoint": null
  },
  "optimization": {
    "preload_data": true,
    "augmentation_config": {
      "rotate": true,
      "rotation_min_angle": -15.0,
      "rotation_max_angle": 15.0,
      "translate": false,
      "translate_min": -5,
      "translate_max": 5,
      "scale": false,
      "scale_min": 0.9,
      "scale_max": 1.1,
      "uniform_noise": false,
      "uniform_noise_min_val": 0.0,
      "uniform_noise_max_val": 10.0,
      "gaussian_noise": false,
      "gaussian_noise_mean": 5.0,
      "gaussian_noise_stddev": 1.0,
      "contrast": false,
      "contrast_min_gamma": 0.5,
      "contrast_max_gamma": 2.0,
      "brightness": false,
      "brightness_min_val": 0.0,
      "brightness_max_val": 10.0,
      "random_crop": false,
      "random_crop_height": 256,
      "random_crop_width": 256,
      "random_flip": false,
      "flip_horizontal": false
    },
    "online_shuffling": true,
    "shuffle_buffer_size": 128,
    "prefetch": true,
    "batch_size": 4,
    "batches_per_epoch": 200,
    "min_batches_per_epoch": 200,
    "val_batches_per_epoch": 10,
    "min_val_batches_per_epoch": 10,
    "epochs": 200,
    "optimizer": "adam",
    "initial_learning_rate": 0.0001,
    "learning_rate_schedule": {
      "reduce_on_plateau": true,
      "reduction_factor": 0.5,
      "plateau_min_delta": 1e-06,
      "plateau_patience": 5,
      "plateau_cooldown": 3,
      "min_learning_rate": 1e-08
    },
    "hard_keypoint_mining": {
      "online_mining": false,
      "hard_to_easy_ratio": 2.0,
      "min_hard_keypoints": 2,
      "max_hard_keypoints": null,
      "loss_scale": 5.0
    },
    "early_stopping": {
      "stop_training_on_plateau": true,
      "plateau_min_delta": 1e-08,
      "plateau_patience": 20
    }
  },
  "outputs": {
    "save_outputs": true,
    "run_name": null,
    "run_name_prefix": "LBNcohort1_SIBody231024",
    "run_name_suffix": null,
    "runs_folder": "C:/Users/spike/Desktop/sleap/SLEAP Projects\\models",
    "tags": [
      ""
    ],
    "save_visualizations": true,
    "delete_viz_images": true,
    "zip_outputs": false,
    "log_to_csv": true,
    "checkpointing": {
      "initial_model": false,
      "best_model": true,
      "every_epoch": false,
      "latest_model": false,
      "final_model": false
    },
    "tensorboard": {
      "write_logs": false,
      "loss_frequency": "epoch",
      "architecture_graph": false,
      "profile_graph": false,
      "visualizations": true
    },
    "zmq": {
      "subscribe_to_controller": true,
      "controller_address": "tcp://127.0.0.1:9000",
      "controller_polling_timeout": 10,
      "publish_updates": true,
      "publish_address": "tcp://127.0.0.1:9001"
    }
  },
  "name": "",
  "description": "",
  "sleap_version": "1.3.3",
  "filename": "C:/Users/spike/Desktop/sleap/SLEAP Projects\\models\\LBNcohort1_SIBody231024241023_102341.centroid.n=651\\training_config.json"
}
{
  "data": {
    "labels": {
      "training_labels": "C:/Users/spike/Desktop/sleap/SLEAP Projects/SI_Cohort1_body.slp",
      "validation_labels": null,
      "validation_fraction": 0.1,
      "test_labels": null,
      "split_by_inds": false,
      "training_inds": [
        277,
        530,
        143,
        611,
        40,
        22,
        402,
        616,
        174,
        561,
        448,
        610,
        154,
        475,
        558,
        123,
        191,
        41,
        384,
        458,
        483,
        605,
        603,
        241,
        393,
        120,
        540,
        127,
        15,
        200,
        296,
        107,
        7,
        460,
        579,
        318,
        299,
        620,
        434,
        205,
        85,
        553,
        437,
        479,
        17,
        308,
        578,
        351,
        0,
        383,
        614,
        359,
        365,
        298,
        188,
        26,
        79,
        340,
        428,
        638,
        629,
        566,
        259,
        271,
        484,
        101,
        604,
        342,
        99,
        348,
        454,
        76,
        494,
        622,
        240,
        124,
        266,
        375,
        122,
        426,
        417,
        237,
        503,
        396,
        404,
        90,
        456,
        278,
        592,
        68,
        439,
        111,
        606,
        432,
        341,
        443,
        25,
        78,
        134,
        353,
        369,
        269,
        19,
        335,
        261,
        419,
        198,
        50,
        210,
        31,
        500,
        66,
        495,
        641,
        546,
        95,
        158,
        190,
        398,
        575,
        110,
        183,
        464,
        131,
        491,
        164,
        465,
        3,
        223,
        118,
        229,
        326,
        583,
        80,
        630,
        368,
        70,
        468,
        534,
        355,
        486,
        619,
        82,
        45,
        455,
        425,
        272,
        221,
        297,
        273,
        378,
        562,
        317,
        168,
        30,
        146,
        481,
        42,
        502,
        305,
        309,
        270,
        279,
        559,
        627,
        421,
        142,
        574,
        24,
        5,
        598,
        422,
        560,
        399,
        441,
        292,
        488,
        524,
        51,
        33,
        108,
        388,
        331,
        112,
        322,
        81,
        387,
        236,
        544,
        337,
        370,
        635,
        408,
        93,
        516,
        643,
        265,
        162,
        527,
        452,
        374,
        49,
        557,
        71,
        300,
        173,
        333,
        515,
        104,
        376,
        531,
        642,
        354,
        328,
        125,
        29,
        117,
        185,
        98,
        438,
        382,
        323,
        344,
        430,
        521,
        231,
        60,
        645,
        20,
        412,
        590,
        601,
        16,
        433,
        492,
        295,
        47,
        514,
        523,
        389,
        394,
        114,
        522,
        607,
        310,
        46,
        361,
        232,
        38,
        596,
        207,
        571,
        325,
        429,
        136,
        91,
        130,
        222,
        147,
        570,
        256,
        589,
        406,
        424,
        528,
        116,
        519,
        386,
        233,
        94,
        303,
        330,
        445,
        304,
        56,
        197,
        257,
        588,
        226,
        497,
        217,
        477,
        166,
        377,
        364,
        52,
        247,
        61,
        413,
        149,
        213,
        637,
        409,
        595,
        246,
        526,
        459,
        280,
        631,
        11,
        227,
        268,
        252,
        58,
        547,
        293,
        283,
        238,
        473,
        103,
        532,
        255,
        62,
        446,
        249,
        613,
        284,
        501,
        23,
        102,
        92,
        542,
        13,
        264,
        201,
        332,
        225,
        487,
        379,
        397,
        513,
        319,
        196,
        182,
        506,
        9,
        517,
        324,
        628,
        362,
        618,
        74,
        195,
        115,
        1,
        181,
        416,
        372,
        573,
        245,
        427,
        577,
        212,
        161,
        133,
        97,
        474,
        113,
        235,
        54,
        469,
        75,
        401,
        194,
        48,
        202,
        59,
        466,
        151,
        155,
        77,
        106,
        624,
        567,
        137,
        211,
        489,
        504,
        621,
        639,
        53,
        518,
        320,
        435,
        286,
        444,
        86,
        644,
        580,
        507,
        14,
        485,
        418,
        156,
        529,
        634,
        420,
        391,
        461,
        623,
        548,
        291,
        204,
        496,
        132,
        334,
        586,
        67,
        597,
        253,
        536,
        537,
        228,
        626,
        552,
        554,
        403,
        138,
        414,
        447,
        538,
        367,
        572,
        541,
        43,
        357,
        153,
        288,
        533,
        636,
        525,
        214,
        34,
        224,
        327,
        172,
        215,
        239,
        129,
        163,
        216,
        440,
        289,
        505,
        199,
        462,
        21,
        345,
        258,
        177,
        87,
        581,
        490,
        478,
        593,
        600,
        275,
        187,
        178,
        511,
        4,
        350,
        139,
        363,
        148,
        184,
        356,
        358,
        165,
        339,
        220,
        608,
        244,
        290,
        285,
        192,
        450,
        555,
        294,
        380,
        539,
        463,
        311,
        72,
        564,
        169,
        591,
        405,
        12,
        203,
        6,
        248,
        321,
        615,
        498,
        556,
        100,
        39,
        234,
        315,
        313,
        69,
        576,
        316,
        119,
        267,
        159,
        302,
        410,
        65,
        274,
        44,
        457,
        36,
        371,
        171,
        551,
        28,
        276,
        175,
        392,
        451,
        63,
        27,
        336,
        263,
        219,
        602,
        105,
        145,
        480,
        453,
        352,
        150,
        640,
        520,
        329,
        535,
        390,
        415,
        360,
        8,
        633,
        170,
        301,
        73,
        314,
        423,
        160,
        32,
        543,
        312,
        64,
        400,
        509,
        167,
        550,
        55,
        57,
        470,
        609,
        625,
        347,
        84,
        582,
        189,
        508,
        569,
        135,
        385,
        287,
        126,
        37,
        510,
        208,
        476,
        281,
        96,
        218,
        617,
        411
      ],
      "validation_inds": [
        493,
        243,
        338,
        632,
        482,
        262,
        89,
        141,
        346,
        193,
        83,
        584,
        128,
        140,
        349,
        250,
        18,
        467,
        35,
        585,
        563,
        2,
        449,
        565,
        251,
        612,
        179,
        254,
        366,
        260,
        176,
        282,
        144,
        186,
        499,
        568,
        594,
        157,
        599,
        471,
        472,
        436,
        242,
        587,
        306,
        549,
        431,
        121,
        545,
        373,
        209,
        230,
        442,
        10,
        395,
        180,
        206,
        381,
        152,
        512,
        343,
        109,
        407,
        307,
        88
      ],
      "test_inds": null,
      "search_path_hints": [
        "",
        "",
        "",
        "",
        "",
        "",
        ""
      ],
      "skeletons": []
    },
    "preprocessing": {
      "ensure_rgb": false,
      "ensure_grayscale": false,
      "imagenet_mode": null,
      "input_scaling": 1.0,
      "pad_to_stride": 16,
      "resize_and_pad_to_target": true,
      "target_height": 1080,
      "target_width": 1080
    },
    "instance_cropping": {
      "center_on_part": null,
      "crop_size": 272,
      "crop_size_detection_padding": 16
    }
  },
  "model": {
    "backbone": {
      "leap": null,
      "unet": {
        "stem_stride": null,
        "max_stride": 16,
        "output_stride": 2,
        "filters": 64,
        "filters_rate": 2.0,
        "middle_block": true,
        "up_interpolate": false,
        "stacks": 1
      },
      "hourglass": null,
      "resnet": null,
      "pretrained_encoder": null
    },
    "heads": {
      "single_instance": null,
      "centroid": null,
      "centered_instance": null,
      "multi_instance": null,
      "multi_class_bottomup": null,
      "multi_class_topdown": {
        "confmaps": {
          "anchor_part": null,
          "part_names": [
            "nose1",
            "neck1",
            "earL1",
            "earR1",
            "forelegL1",
            "forelegR1",
            "tailstart1",
            "hindlegL1",
            "hindlegR1",
            "tail1",
            "tailend1"
          ],
          "sigma": 5.0,
          "output_stride": 2,
          "loss_weight": 1.0,
          "offset_refinement": false
        },
        "class_vectors": {
          "classes": [
            "1",
            "2"
          ],
          "num_fc_layers": 3,
          "num_fc_units": 64,
          "global_pool": true,
          "output_stride": 16,
          "loss_weight": 1.0
        }
      }
    },
    "base_checkpoint": null
  },
  "optimization": {
    "preload_data": true,
    "augmentation_config": {
      "rotate": false,
      "rotation_min_angle": -180.0,
      "rotation_max_angle": 180.0,
      "translate": false,
      "translate_min": -5,
      "translate_max": 5,
      "scale": false,
      "scale_min": 0.9,
      "scale_max": 1.1,
      "uniform_noise": false,
      "uniform_noise_min_val": 0.0,
      "uniform_noise_max_val": 10.0,
      "gaussian_noise": false,
      "gaussian_noise_mean": 5.0,
      "gaussian_noise_stddev": 1.0,
      "contrast": false,
      "contrast_min_gamma": 0.5,
      "contrast_max_gamma": 2.0,
      "brightness": false,
      "brightness_min_val": 0.0,
      "brightness_max_val": 10.0,
      "random_crop": false,
      "random_crop_height": 256,
      "random_crop_width": 256,
      "random_flip": false,
      "flip_horizontal": false
    },
    "online_shuffling": true,
    "shuffle_buffer_size": 128,
    "prefetch": true,
    "batch_size": 8,
    "batches_per_epoch": 200,
    "min_batches_per_epoch": 200,
    "val_batches_per_epoch": 10,
    "min_val_batches_per_epoch": 10,
    "epochs": 100,
    "optimizer": "adam",
    "initial_learning_rate": 0.0001,
    "learning_rate_schedule": {
      "reduce_on_plateau": true,
      "reduction_factor": 0.5,
      "plateau_min_delta": 1e-06,
      "plateau_patience": 5,
      "plateau_cooldown": 3,
      "min_learning_rate": 1e-08
    },
    "hard_keypoint_mining": {
      "online_mining": false,
      "hard_to_easy_ratio": 2.0,
      "min_hard_keypoints": 2,
      "max_hard_keypoints": null,
      "loss_scale": 5.0
    },
    "early_stopping": {
      "stop_training_on_plateau": true,
      "plateau_min_delta": 1e-06,
      "plateau_patience": 10
    }
  },
  "outputs": {
    "save_outputs": true,
    "run_name": null,
    "run_name_prefix": "LBNcohort1_SIBody231024",
    "run_name_suffix": null,
    "runs_folder": "C:/Users/spike/Desktop/sleap/SLEAP Projects\\models",
    "tags": [
      ""
    ],
    "save_visualizations": true,
    "delete_viz_images": true,
    "zip_outputs": false,
    "log_to_csv": true,
    "checkpointing": {
      "initial_model": false,
      "best_model": true,
      "every_epoch": false,
      "latest_model": false,
      "final_model": false
    },
    "tensorboard": {
      "write_logs": false,
      "loss_frequency": "epoch",
      "architecture_graph": false,
      "profile_graph": false,
      "visualizations": true
    },
    "zmq": {
      "subscribe_to_controller": true,
      "controller_address": "tcp://127.0.0.1:9000",
      "controller_polling_timeout": 10,
      "publish_updates": true,
      "publish_address": "tcp://127.0.0.1:9001"
    }
  },
  "name": "",
  "description": "",
  "sleap_version": "1.3.3",
  "filename": "C:/Users/spike/Desktop/sleap/SLEAP Projects\\models\\LBNcohort1_SIBody231024241023_111429.multi_class_topdown.n=651\\training_config.json"
}

pinjuu · Oct 24 '24 11:10

It looks like your "skeletons" field is an empty list. Are you able to open this project in the GUI and take a peek at the skeleton and labels?
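
If it is easier than the GUI, a quick sanity check with SLEAP's Python API would look like the sketch below (the path is a placeholder for your project file):

import sleap

labels = sleap.load_file("SI_Cohort1_body.slp")  # placeholder path

print(labels.skeletons)            # expect one skeleton listing your node names
print(len(labels.labeled_frames))  # number of labeled frames (651 here)
print(labels.tracks)               # track identities used by the ID model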

eberrigan · Oct 24 '24 16:10

[Screenshot of the project in the SLEAP GUI]

I have labeled 651 frames in the project. I have also trained this model before, and it worked.

pinjuu · Oct 25 '24 11:10

I ran into the same problem: same error message, etc.



INFO:sleap.nn.training:Finished training loop. [93.6 min]
INFO:sleap.nn.training:Deleting visualization directory: C:/Users/Laura/PycharmProjects/sleap\models\new_tracks_lesions241205_130801.multi_class_topdown.n=1442\viz
Polling: C:/Users/Laura/PycharmProjects/sleap\models\new_tracks_lesions241205_130801.multi_class_topdown.n=1442\viz\validation.*.png
Traceback (most recent call last):
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\gui\widgets\imagedir.py", line 100, in poll
    self.load_video(video=self.video)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\gui\widgets\video.py", line 413, in load_video
    self.view.scene.setSceneRect(0, 0, video.width, video.height)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\io\video.py", line 1046, in __getattr__
    return getattr(self.backend, item)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\io\video.py", line 922, in width
    self._load_test_frame()
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\io\video.py", line 861, in _load_test_frame
    test_frame_ = self._load_idx(0)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\io\video.py", line 838, in _load_idx
    img = cv2.imread(self._get_filename(idx))
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\io\video.py", line 856, in _get_filename
    raise FileNotFoundError(f"Unable to locate file {idx}: {self.filenames[idx]}")
FileNotFoundError: Unable to locate file 0: C:/Users/Laura/PycharmProjects/sleap\models\new_tracks_lesions241205_130801.multi_class_topdown.n=1442\viz\validation.0025.png
INFO:sleap.nn.training:Saving evaluation metrics to model folder...
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% ETA: -:--:-- ?
Traceback (most recent call last):
  File "C:\Users\Laura\anaconda3\envs\sleap\Scripts\sleap-train-script.py", line 33, in <module>
    sys.exit(load_entry_point('sleap==1.3.4', 'console_scripts', 'sleap-train')())
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\training.py", line 2014, in main
    trainer.train()
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\training.py", line 953, in train
    self.evaluate()
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\training.py", line 966, in evaluate
    split_name="train",
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\evals.py", line 744, in evaluate_model
    labels_pr: Labels = predictor.predict(labels_gt, make_labels=True)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 526, in predict
    self._make_labeled_frames_from_generator(generator, data)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 4458, in _make_labeled_frames_from_generator
    for ex in generator:
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 436, in _predict_generator
    ex = process_batch(ex)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 399, in process_batch
    preds = self.inference_model.predict_on_batch(ex, numpy=True)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 1069, in predict_on_batch
    outs = super().predict_on_batch(data, **kwargs)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\keras\engine\training.py", line 1986, in predict_on_batch
    outputs = self.predict_function(iterator)
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1129, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\keras\engine\training.py", line 1621, in predict_function  *
        return step_function(self, iterator)
    File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\keras\engine\training.py", line 1611, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\keras\engine\training.py", line 1604, in run_step  **
        outputs = model.predict_step(data)
    File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\keras\engine\training.py", line 1572, in predict_step
        return self(x, training=False)
    File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None

    TypeError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).

    in user code:

        File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 4102, in call  *
            crop_output = self.centroid_crop(example)
        File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler  **
            raise e.with_traceback(filtered_tb) from None

        TypeError: Exception encountered when calling layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth).

        in user code:

            File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 772, in call  *
                crops = sleap.nn.peak_finding.crop_bboxes(full_imgs, bboxes, crop_sample_inds)
            File "C:\Users\Laura\anaconda3\envs\sleap\lib\site-packages\sleap\nn\peak_finding.py", line 173, in crop_bboxes  *
                image_height = tf.shape(images)[1]

            TypeError: Failed to convert elements of tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64)) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.


        Call arguments received:
          • example_gt={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}


    Call arguments received:
      • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}


The model, however, is available to select when I want to run inference. Running inference does not crash either, and I get predictions.

This came after I slightly changed the videos (different size, 1100x1000 instead of 1030x1100; not sure this is relevant though...).
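
If mixed frame sizes are what makes the batches ragged, one way to confirm is to print the shape of every video in the project. A minimal sketch with SLEAP's Python API (placeholder path):

import sleap

labels = sleap.load_file("new_tracks_lesions.slp")  # placeholder path

for video in labels.videos:
    # Video.shape is (frames, height, width, channels); if heights/widths
    # differ across videos, full-frame batches can no longer be stacked densely.
    print(video.filename, video.shape)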

I have tried reinstalling SLEAP in a completely new environment. Software versions: SLEAP 1.3.4, TensorFlow 2.7.0, NumPy 1.21.6, Python 3.7.12, OS Windows-10-10.0.22621-SP0.

Just thought I would give this problem a push. Thank you in advance for any help :)

UPDATE: I just updated the error message. This is still bugging me quite substantially. I have a deadline coming up soon and wondered whether I could get any input on the source of this problem, and whether there is a potential fix I can apply on my end.

Thanks already :) and sorry for pushing.

Lauraschwarz · Dec 04 '24 14:12

Hi @Lauraschwarz,

I see that in both cases this appeared when using a TopDownMultiClassInferenceModel. Unfortunately, I do not have any multi-class models in my own inventory; if I am able to recreate the error myself, that would be fastest.

I am busy labeling Tracks on a test dataset right now to train a multi-class top-down model, ~~but if you happen to see this message before I finish my labeling/training, then perhaps you could upload to this form~~

  1. ~~(a trim of) the video you wanted to run inference on and~~
  2. ~~the models folder C:/Users/Laura/PycharmProjects/sleap\models\new_tracks_lesions241205_130801.multi_class_topdown.n=1442 AND~~
  3. ~~the folder for whatever centroid model you were using.~~

I'll update when I recreate it so we don't do double work.

UPDATE 1: I have finished labeling tracks (and think we should add a Tab hotkey to switch between selected instances).

UPDATE 2: I finished training on a 1024 x 1024 video and am running inference on a 720 x 720 video.


Inference-only experiments

UPDATE 3: When I run inference on suggested frames in the project (all of which come from the 1024 x 1024 video and have a mix of predicted and user-labeled instances) using this config

terminal config
Using already trained model for centroid: models/240831_104002.centroid.n=103/training_config.json
Using already trained model for multi_class_topdown: models/241205_101642.multi_class_topdown.n=100/training_config.json
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --only-suggested-frames -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64744 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_103129.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 10:31:34.640561
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
2024-12-05 10:31:34.835342: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 10:31:34.835521: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   'models': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'frames': '',
│   'only_labeled_frames': False,
│   'only_suggested_frames': True,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_103129.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
│   'tracking.tracker': 'none',
│   'tracking.max_tracking': None,
2024-12-05 10:31:35.972862: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,

I get the following error:

ValueError: Index 1 is not uniform
Traceback (most recent call last):
  File "/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-track", line 33, in <module>
    sys.exit(load_entry_point('sleap', 'console_scripts', 'sleap-track')())
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 5580, in main
    labels_pr = predictor.predict(provider)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 527, in predict
    self._make_labeled_frames_from_generator(generator, data)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4498, in _make_labeled_frames_from_generator
    for ex in generator:
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 458, in _predict_generator
    ex = process_batch(ex)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 400, in process_batch
    preds = self.inference_model.predict_on_batch(ex, numpy=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 1070, in predict_on_batch
    outs = super().predict_on_batch(data, **kwargs)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 2230, in predict_on_batch
    outputs = self.predict_function(iterator)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file_kbrmhsg.py", line 15, in tf__predict_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step
    outputs = model.predict_step(data)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
    return self(x, training=False)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filej8oo1_ej.py", line 46, in tf__call
    crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file2zvir2w7.py", line 57, in tf__call
    imgs = ag__.converted_call(ag__.ld(self).preprocess, (ag__.ld(imgs),), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filejtyuy89h.py", line 67, in tf__preprocess
    ag__.if_stmt(ag__.ld(self).input_scale != 1.0, if_body_2, else_body_2, get_state_2, set_state_2, ('imgs',), 1)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filejtyuy89h.py", line 62, in if_body_2
    imgs = ag__.converted_call(ag__.ld(sleap).nn.data.resizing.resize_image, (ag__.ld(imgs), ag__.ld(self).input_scale), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filemuiw36bn.py", line 26, in tf__resize_image
    height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(image),), None, fscope)[-3]
ValueError: in user code:

    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1845, in predict_function  *
        return step_function(self, iterator)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step  **
        outputs = model.predict_step(data)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
        return self(x, training=False)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filej8oo1_ej.py", line 46, in tf__call
        crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file2zvir2w7.py", line 57, in tf__call
        imgs = ag__.converted_call(ag__.ld(self).preprocess, (ag__.ld(imgs),), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filejtyuy89h.py", line 67, in tf__preprocess
        ag__.if_stmt(ag__.ld(self).input_scale != 1.0, if_body_2, else_body_2, get_state_2, set_state_2, ('imgs',), 1)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filejtyuy89h.py", line 62, in if_body_2
        imgs = ag__.converted_call(ag__.ld(sleap).nn.data.resizing.resize_image, (ag__.ld(imgs), ag__.ld(self).input_scale), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filemuiw36bn.py", line 26, in tf__resize_image
        height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(image),), None, fscope)[-3]

    ValueError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).
    
    in user code:
    
        File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4142, in call  *
            crop_output = self.centroid_crop(example)
        File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file2zvir2w7.py", line 57, in tf__call
            imgs = ag__.converted_call(ag__.ld(self).preprocess, (ag__.ld(imgs),), None, fscope)
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filejtyuy89h.py", line 67, in tf__preprocess
            ag__.if_stmt(ag__.ld(self).input_scale != 1.0, if_body_2, else_body_2, get_state_2, set_state_2, ('imgs',), 1)
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filejtyuy89h.py", line 62, in if_body_2
            imgs = ag__.converted_call(ag__.ld(sleap).nn.data.resizing.resize_image, (ag__.ld(imgs), ag__.ld(self).input_scale), None, fscope)
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filemuiw36bn.py", line 26, in tf__resize_image
            height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(image),), None, fscope)[-3]
    
        ValueError: Exception encountered when calling layer "centroid_crop" (type CentroidCrop).
        
        in user code:
        
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 1770, in call  *
                imgs = self.preprocess(imgs)
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 951, in preprocess  *
                imgs = sleap.nn.data.resizing.resize_image(imgs, self.input_scale)
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/data/resizing.py", line 88, in resize_image  *
                height = tf.shape(image)[-3]
        
            ValueError: Index 1 is not uniform
        
        
        Call arguments received by layer "centroid_crop" (type CentroidCrop):
          • inputs={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)'}
    
    
    Call arguments received by layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel):
      • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)'}

│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}
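
For reference, the "Index 1 is not uniform" failure above is what TensorFlow raises when `tf.shape` is indexed on a ragged dimension. A minimal, hypothetical repro (stand-in arrays for a batch that mixes 1024 x 1024 and 720 x 720 frames; not SLEAP code):

```python
import numpy as np
import tensorflow as tf

# Stand-ins for frames of two different sizes landing in one batch.
big = np.zeros((1024, 1024, 1), dtype=np.uint8)
small = np.zeros((720, 720, 1), dtype=np.uint8)

# Stacking differently sized frames produces a ragged batch, matching the
# tf.RaggedTensor 'image' entries in the tracebacks above.
batch = tf.ragged.stack([big, small])  # shape: (2, None, None, 1)

# Indexing the shape on a non-uniform dimension fails, just like
# tf.shape(image)[-3] in resize_image and tf.shape(images)[1] in crop_bboxes:
try:
    height = tf.shape(batch)[1]
except ValueError as e:
    print(e)  # "Index 1 is not uniform"
```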


UPDATE 4: When I run inference on 20 random frames in the 720 x 720 video using this config:

terminal config
Using already trained model for centroid: models/240831_104002.centroid.n=103/training_config.json
Using already trained model for multi_class_topdown: models/241205_101642.multi_class_topdown.n=100/training_config.json
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 1 --frames 59,202,236,241,244,287,360,536,558,616,674,678,688,712,721,784,789,799,903,965 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64772 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_103810.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 10:38:16.030061
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
2024-12-05 10:38:16.237919: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 10:38:16.238131: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   'models': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'frames': '59,202,236,241,244,287,360,536,558,616,674,678,688,712,721,784,789,799,903,965',
│   'only_labeled_frames': False,
│   'only_suggested_frames': False,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_103810.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '1',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
2024-12-05 10:38:17.254802: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.tracker': 'none',
│   'tracking.max_tracking': None,
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,
│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}

I get a success!

Predicted frames: 20/20
Finished inference at: 2024-12-05 10:38:20.604375
Total runtime: 4.574322938919067 secs
Predicted frames: 20/20
Provenance:
{
│   'model_paths': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
Process return code: 0

UPDATE 5: When I run inference on the entire 720 x 720 video using this config:

terminal config
Using already trained model for centroid: models/240831_104002.centroid.n=103/training_config.json
Using already trained model for multi_class_topdown: models/241205_101642.multi_class_topdown.n=100/training_config.json
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 1 --frames 0,-1100 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64801 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_104320.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 10:43:25.439654
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
2024-12-05 10:43:25.640126: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 10:43:25.640317: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   'models': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'frames': '0,-1100',
│   'only_labeled_frames': False,
│   'only_suggested_frames': False,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_104320.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '1',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
2024-12-05 10:43:26.669936: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.tracker': 'none',
│   'tracking.max_tracking': None,
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,
│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}

I get a success!

Predicted frames: 1100/1101
Finished inference at: 2024-12-05 10:44:05.021864
Total runtime: 39.58221793174744 secs
Predicted frames: 1100/1101
Provenance:
{
│   'model_paths': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'predictor': 'TopDownMultiClassPredictor',
│   'sleap_version': '1.4.1a2',
│   'platform': 'macOS-13.5-arm64-arm-64bit',
│   'command': '/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 1 --frames 0,-1100 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64801 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_104320.predictions.slp --verbosity json --no-empty-frames',
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
│   'output_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_104320.predictions.slp',
│   'total_elapsed': 39.58221793174744,
Process return code: 0

UPDATE 6: When I run inference on random frames across both 1024 x 1024 and 720 x 720 videos using this config:

terminal config
Using already trained model for centroid: models/240831_104002.centroid.n=103/training_config.json
Using already trained model for multi_class_topdown: models/241205_101642.multi_class_topdown.n=100/training_config.json
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 0 --frames 100,104,141,228,299,331,373,399,513,529,586,614,629,732,816,881,1040 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64848 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_105848.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 10:58:54.218625
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
2024-12-05 10:58:54.449382: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 10:58:54.449519: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   'models': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'frames': '100,104,141,228,299,331,373,399,513,529,586,614,629,732,816,881,1040',
│   'only_labeled_frames': False,
│   'only_suggested_frames': False,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_105848.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '0',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
│   'tracking.tracker': 'none',
│   'tracking.max_tracking': None,
2024-12-05 10:58:55.532825: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,
│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}

I get a success!

separate random inferences for each video
Finished inference at: 2024-12-05 10:58:59.915924
Total runtime: 5.697307825088501 secs
Predicted frames: 17/17
Provenance:
{
│   'model_paths': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'predictor': 'TopDownMultiClassPredictor',
│   'sleap_version': '1.4.1a2',
│   'platform': 'macOS-13.5-arm64-arm-64bit',
│   'command': '/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 0 --frames 100,104,141,228,299,331,373,399,513,529,586,614,629,732,816,881,1040 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64848 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_105848.predictions.slp --verbosity json --no-empty-frames',
Process return code: 0
Finished inference at: 2024-12-05 10:59:10.351341
Total runtime: 4.458359003067017 secs
Predicted frames: 20/20
Process return code: 0

UPDATE 7: When I run inference on all videos (1024 x 1024 and 720 x 720) using this config:

terminal config (one for each video)
Using already trained model for centroid: models/240831_104002.centroid.n=103/training_config.json
Using already trained model for multi_class_topdown: models/241205_101642.multi_class_topdown.n=100/training_config.json
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 0 --frames 0,-1100 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64893 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110615.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 11:06:20.687200
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
2024-12-05 11:06:20.896135: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 11:06:20.896297: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   'models': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'frames': '0,-1100',
│   'only_labeled_frames': False,
│   'only_suggested_frames': False,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110615.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '0',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
2024-12-05 11:06:21.914947: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.tracker': 'none',
│   'tracking.max_tracking': None,
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,
│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 1 --frames 0,-1100 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64893 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110704.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 11:07:09.741516
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
│   'models': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
2024-12-05 11:07:09.958038: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 11:07:09.958187: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   'frames': '0,-1100',
│   'only_labeled_frames': False,
│   'only_suggested_frames': False,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110704.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '1',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
│   'tracking.tracker': 'none',
2024-12-05 11:07:11.013083: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.max_tracking': None,
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,
│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}

I get a success!

separate inferences for each video
Finished inference at: 2024-12-05 11:07:03.687138
Total runtime: 42.999948024749756 secs
Predicted frames: 1101/1101
Provenance:
{
│   'model_paths': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'predictor': 'TopDownMultiClassPredictor',
│   'sleap_version': '1.4.1a2',
│   'platform': 'macOS-13.5-arm64-arm-64bit',
│   'command': '/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 0 --frames 0,-1100 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64893 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110615.predictions.slp --verbosity json --no-empty-frames',
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
│   'output_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110615.predictions.slp',
│   'total_elapsed': 42.999948024749756,
Process return code: 0
Finished inference at: 2024-12-05 11:07:48.859407
Total runtime: 39.11790108680725 secs
Predicted frames: 1100/1101
Provenance:
{
│   'model_paths': ['models/240831_104002.centroid.n=103/training_config.json', 'models/241205_101642.multi_class_topdown.n=100/training_config.json'],
│   'predictor': 'TopDownMultiClassPredictor',
│   'sleap_version': '1.4.1a2',
│   'platform': 'macOS-13.5-arm64-arm-64bit',
│   'command': '/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --video.index 1 --frames 0,-1100 -m models/240831_104002.centroid.n=103/training_config.json -m models/241205_101642.multi_class_topdown.n=100/training_config.json --batch_size 4 --tracking.tracker none --controller_port 9000 --publish_port 64893 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110704.predictions.slp --verbosity json --no-empty-frames',
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
│   'output_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_110704.predictions.slp',
│   'total_elapsed': 39.11790108680725,
Process return code: 0
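
These per-video runs succeed because each `sleap-track` call pulls frames from a single video, so every batch has uniform spatial dimensions and never becomes ragged. Continuing the hypothetical sketch from above:

```python
import numpy as np
import tensorflow as tf

# Four same-sized frames, as in a single-video batch.
frames = [np.zeros((720, 720, 1), dtype=np.uint8) for _ in range(4)]

# Uniform inputs stack into an ordinary dense tensor,
# so shape indexing works fine.
batch = tf.stack(frames)   # shape: (4, 720, 720, 1)
print(tf.shape(batch)[1])  # tf.Tensor(720, ...)
```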

Training (and inference) experiments

UPDATE 8: When I re-run training with the config loaded from my original multiclass model, using labeled frames and suggested frames (to predict on) from just the 1024 x 1024 video:

terminal config
Start training centroid...
['sleap-train', '/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpleqanyti/241205_111620_training_job.json', '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp', '--zmq', '--controller_port', '9000', '--publish_port', '64945', '--save_viz']
INFO:sleap.nn.training:Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit
INFO:sleap.nn.training:Training labels file: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp
INFO:sleap.nn.training:Training profile: /var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpleqanyti/241205_111620_training_job.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
    "training_job_path": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpleqanyti/241205_111620_training_job.json",
    "labels_path": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
    "video_paths": [
        ""
    ],
    "val_labels": null,
    "test_labels": null,
    "base_checkpoint": null,
    "tensorboard": false,
    "save_viz": true,
    "keep_viz": false,
    "zmq": true,
    "publish_port": 64945,
    "controller_port": 9000,
    "run_name": "",
    "prefix": "",
    "suffix": "",
    "cpu": false,
    "first_gpu": false,
    "last_gpu": false,
    "gpu": "auto"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
    "data": {
        "labels": {
            "training_labels": null,
            "validation_labels": null,
            "validation_fraction": 0.1,
            "test_labels": null,
            "split_by_inds": false,
            "training_inds": null,
            "validation_inds": null,
            "test_inds": null,
            "search_path_hints": [],
            "skeletons": []
        },
        "preprocessing": {
            "ensure_rgb": false,
            "ensure_grayscale": false,
            "imagenet_mode": null,
            "input_scaling": 0.5,
            "pad_to_stride": null,
            "resize_and_pad_to_target": true,
            "target_height": null,
            "target_width": null
        },
        "instance_cropping": {
            "center_on_part": "abdomen",
            "crop_size": null,
            "crop_size_detection_padding": 16
        }
    },
    "model": {
        "backbone": {
            "leap": null,
            "unet": {
                "stem_stride": null,
                "max_stride": 16,
                "output_stride": 2,
                "filters": 16,
                "filters_rate": 2.0,
                "middle_block": true,
                "up_interpolate": true,
                "stacks": 1
            },
            "hourglass": null,
            "resnet": null,
            "pretrained_encoder": null
        },
        "heads": {
            "single_instance": null,
            "centroid": {
                "anchor_part": "abdomen",
                "sigma": 2.5,
                "output_stride": 2,
                "loss_weight": 1.0,
                "offset_refinement": false
            },
            "centered_instance": null,
            "multi_instance": null,
            "multi_class_bottomup": null,
            "multi_class_topdown": null
        },
        "base_checkpoint": null
    },
    "optimization": {
        "preload_data": true,
        "augmentation_config": {
            "rotate": true,
            "rotation_min_angle": -180.0,
            "rotation_max_angle": 180.0,
            "translate": false,
            "translate_min": -5,
            "translate_max": 5,
            "scale": false,
            "scale_min": 0.9,
            "scale_max": 1.1,
            "uniform_noise": false,
            "uniform_noise_min_val": 0.0,
            "uniform_noise_max_val": 10.0,
            "gaussian_noise": false,
            "gaussian_noise_mean": 5.0,
            "gaussian_noise_stddev": 1.0,
            "contrast": false,
            "contrast_min_gamma": 0.5,
            "contrast_max_gamma": 2.0,
            "brightness": false,
            "brightness_min_val": 0.0,
            "brightness_max_val": 10.0,
            "random_crop": false,
            "random_crop_height": 256,
            "random_crop_width": 256,
            "random_flip": true,
            "flip_horizontal": false
        },
        "online_shuffling": true,
        "shuffle_buffer_size": 128,
        "prefetch": true,
        "batch_size": 1,
        "batches_per_epoch": null,
        "min_batches_per_epoch": 200,
        "val_batches_per_epoch": null,
        "min_val_batches_per_epoch": 10,
        "epochs": 2,
        "optimizer": "adam",
        "initial_learning_rate": 0.0001,
        "learning_rate_schedule": {
            "reduce_on_plateau": true,
            "reduction_factor": 0.5,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 5,
            "plateau_cooldown": 3,
            "min_learning_rate": 1e-08
        },
        "hard_keypoint_mining": {
            "online_mining": false,
            "hard_to_easy_ratio": 2.0,
            "min_hard_keypoints": 2,
            "max_hard_keypoints": null,
            "loss_scale": 5.0
        },
        "early_stopping": {
            "stop_training_on_plateau": true,
            "plateau_min_delta": 1e-08,
            "plateau_patience": 20
        }
    },
    "outputs": {
        "save_outputs": true,
        "run_name": "241205_111620.centroid.n=100",
        "run_name_prefix": "",
        "run_name_suffix": "",
        "runs_folder": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models",
        "tags": [
            ""
        ],
        "save_visualizations": true,
        "keep_viz_images": false,
        "zip_outputs": false,
        "log_to_csv": true,
        "checkpointing": {
            "initial_model": false,
            "best_model": true,
            "every_epoch": false,
            "latest_model": false,
            "final_model": false
        },
        "tensorboard": {
            "write_logs": false,
            "loss_frequency": "epoch",
            "architecture_graph": false,
            "profile_graph": false,
            "visualizations": true
        },
        "zmq": {
            "subscribe_to_controller": true,
            "controller_address": "tcp://127.0.0.1:9000",
            "controller_polling_timeout": 10,
            "publish_updates": true,
            "publish_address": "tcp://127.0.0.1:64945"
        }
    },
    "name": "",
    "description": "",
    "sleap_version": "1.4.1a2",
    "filename": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpleqanyti/241205_111620_training_job.json"
}
Finished training centroid.
Resetting monitor window.
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_111722.multi_class_topdown.n=100/viz/validation.*.png
Start training multi_class_topdown...
['sleap-train', '/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp6djvsngw/241205_111722_training_job.json', '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp', '--zmq', '--controller_port', '9000', '--publish_port', '64945', '--save_viz']
INFO:sleap.nn.training:Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit
INFO:sleap.nn.training:Training labels file: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp
INFO:sleap.nn.training:Training profile: /var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp6djvsngw/241205_111722_training_job.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
    "training_job_path": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp6djvsngw/241205_111722_training_job.json",
    "labels_path": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
    "video_paths": [
        ""
    ],
    "val_labels": null,
    "test_labels": null,
    "base_checkpoint": null,
    "tensorboard": false,
    "save_viz": true,
    "keep_viz": false,
    "zmq": true,
    "publish_port": 64945,
    "controller_port": 9000,
    "run_name": "",
    "prefix": "",
    "suffix": "",
    "cpu": false,
    "first_gpu": false,
    "last_gpu": false,
    "gpu": "auto"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
    "data": {
        "labels": {
            "training_labels": "2004.v002.slp",
            "validation_labels": null,
            "validation_fraction": 0.1,
            "test_labels": null,
            "split_by_inds": false,
            "training_inds": [
                93,
                73,
                72,
                86,
                78,
                81,
                15,
                84,
                0,
                39,
                74,
                76,
                5,
                8,
                2,
                98,
                37,
                62,
                89,
                14,
                50,
                27,
                67,
                75,
                38,
                35,
                97,
                33,
                52,
                17,
                29,
                48,
                51,
                6,
                59,
                45,
                91,
                18,
                1,
                10,
                16,
                92,
                36,
                66,
                28,
                99,
                64,
                87,
                7,
                79,
                57,
                65,
                85,
                34,
                3,
                25,
                32,
                54,
                40,
                83,
                24,
                20,
                68,
                12,
                96,
                22,
                26,
                19,
                31,
                46,
                82,
                94,
                44,
                63,
                11,
                71,
                13,
                9,
                53,
                41,
                43,
                90,
                58,
                30,
                80,
                70,
                42,
                77,
                61,
                55
            ],
            "validation_inds": [
                95,
                60,
                69,
                47,
                56,
                21,
                4,
                23,
                49,
                88
            ],
            "test_inds": null,
            "search_path_hints": [
                ""
            ],
            "skeletons": []
        },
        "preprocessing": {
            "ensure_rgb": false,
            "ensure_grayscale": false,
            "imagenet_mode": null,
            "input_scaling": 1.0,
            "pad_to_stride": 16,
            "resize_and_pad_to_target": true,
            "target_height": 1024,
            "target_width": 1024
        },
        "instance_cropping": {
            "center_on_part": "abdomen",
            "crop_size": 144,
            "crop_size_detection_padding": 16
        }
    },
    "model": {
        "backbone": {
            "leap": null,
            "unet": {
                "stem_stride": null,
                "max_stride": 16,
                "output_stride": 2,
                "filters": 64,
                "filters_rate": 2.0,
                "middle_block": true,
                "up_interpolate": false,
                "stacks": 1
            },
            "hourglass": null,
            "resnet": null,
            "pretrained_encoder": null
        },
        "heads": {
            "single_instance": null,
            "centroid": null,
            "centered_instance": null,
            "multi_instance": null,
            "multi_class_bottomup": null,
            "multi_class_topdown": {
                "confmaps": {
                    "anchor_part": "abdomen",
                    "part_names": [
                        "head",
                        "thorax",
                        "abdomen",
                        "wingL",
                        "wingR",
                        "forelegL4",
                        "forelegR4",
                        "midlegL4",
                        "midlegR4",
                        "hindlegL4",
                        "hindlegR4",
                        "eyeL",
                        "eyeR"
                    ],
                    "sigma": 5.0,
                    "output_stride": 2,
                    "loss_weight": 1.0,
                    "offset_refinement": false
                },
                "class_vectors": {
                    "classes": [
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "2",
                        "1",
                        "1",
                        "2",
                        "2"
                    ],
                    "num_fc_layers": 3,
                    "num_fc_units": 64,
                    "global_pool": true,
                    "output_stride": 16,
                    "loss_weight": 1.0
                }
            }
        },
        "base_checkpoint": null
    },
    "optimization": {
        "preload_data": true,
        "augmentation_config": {
            "rotate": false,
            "rotation_min_angle": -180.0,
            "rotation_max_angle": 180.0,
            "translate": false,
            "translate_min": -5,
            "translate_max": 5,
            "scale": false,
            "scale_min": 0.9,
            "scale_max": 1.1,
            "uniform_noise": false,
            "uniform_noise_min_val": 0.0,
            "uniform_noise_max_val": 10.0,
            "gaussian_noise": false,
            "gaussian_noise_mean": 5.0,
            "gaussian_noise_stddev": 1.0,
            "contrast": false,
            "contrast_min_gamma": 0.5,
            "contrast_max_gamma": 2.0,
            "brightness": false,
            "brightness_min_val": 0.0,
            "brightness_max_val": 10.0,
            "random_crop": false,
            "random_crop_height": 256,
            "random_crop_width": 256,
            "random_flip": false,
            "flip_horizontal": false
        },
        "online_shuffling": true,
        "shuffle_buffer_size": 128,
        "prefetch": true,
        "batch_size": 8,
        "batches_per_epoch": 200,
        "min_batches_per_epoch": 200,
        "val_batches_per_epoch": 10,
        "min_val_batches_per_epoch": 10,
        "epochs": 100,
        "optimizer": "adam",
        "initial_learning_rate": 0.0001,
        "learning_rate_schedule": {
            "reduce_on_plateau": true,
            "reduction_factor": 0.5,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 5,
            "plateau_cooldown": 3,
            "min_learning_rate": 1e-08
        },
        "hard_keypoint_mining": {
            "online_mining": false,
            "hard_to_easy_ratio": 2.0,
            "min_hard_keypoints": 2,
            "max_hard_keypoints": null,
            "loss_scale": 5.0
        },
        "early_stopping": {
            "stop_training_on_plateau": true,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 10
        }
    },
    "outputs": {
        "save_outputs": true,
        "run_name": "241205_111722.multi_class_topdown.n=100",
        "run_name_prefix": "",
        "run_name_suffix": "",
        "runs_folder": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models",
        "tags": [
            ""
        ],
        "save_visualizations": true,
        "keep_viz_images": false,
        "zip_outputs": false,
        "log_to_csv": true,
        "checkpointing": {
            "initial_model": false,
            "best_model": true,
            "every_epoch": false,
            "latest_model": false,
            "final_model": false
        },
        "tensorboard": {
            "write_logs": false,
            "loss_frequency": "epoch",
            "architecture_graph": false,
            "profile_graph": false,
            "visualizations": true
        },
        "zmq": {
            "subscribe_to_controller": true,
            "controller_address": "tcp://127.0.0.1:9000",
            "controller_polling_timeout": 10,
            "publish_updates": true,
            "publish_address": "tcp://127.0.0.1:64945"
        }
    },
    "name": "",
    "description": "",
    "sleap_version": "1.4.1a2",
    "filename": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp6djvsngw/241205_111722_training_job.json"
}

I get the following error. Note that the training loop itself completes; the error is raised during the post-training evaluation step, when the predictor runs on the ground-truth labels (`trainer.train` -> `self.evaluate` -> `evaluate_model` -> `predictor.predict` in the traceback below):

ValueError: Index 1 is not uniform
Epoch 20/100
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_111620.centroid.n=100/viz/validation.*.png
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_111722.multi_class_topdown.n=100/viz/validation.*.png
INFO:sleap.gui.widgets.monitor:Sending command to stop training.
INFO:sleap.nn.callbacks:Received control message: {'command': 'stop'}
2024-12-05 11:31:10.949115: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: 1 } dim { size: -14 } dim { size: -15 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -2 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -2 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" model: "0" num_cores: 10 environment { key: "cpu_instruction_set" value: "ARM NEON" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 16384 l2_cache_size: 524288 l3_cache_size: 524288 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -2 } dim { size: 144 } dim { size: 144 } dim { size: 1 } } }
200/200 - 17s - loss: 1.3238e-04 - CenteredInstanceConfmapsHead_loss: 1.3144e-04 - ClassVectorsHead_loss: 9.4864e-07 - ClassVectorsHead_accuracy: 1.0000 - val_loss: 0.0050 - val_CenteredInstanceConfmapsHead_loss: 0.0050 - val_ClassVectorsHead_loss: 8.0915e-05 - val_ClassVectorsHead_accuracy: 1.0000 - lr: 1.0000e-04 - 17s/epoch - 84ms/step
INFO:sleap.nn.training:Finished training loop. [13.6 min]
INFO:sleap.nn.training:Deleting visualization directory: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_111722.multi_class_topdown.n=100/viz
INFO:sleap.nn.training:Saving evaluation metrics to model folder...
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% ETA: -:--:-- ?Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_111620.centroid.n=100/viz/validation.*.png
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_111722.multi_class_topdown.n=100/viz/validation.*.png
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% ETA: -:--:-- ?
Traceback (most recent call last):
  File "/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-train", line 33, in <module>
    sys.exit(load_entry_point('sleap', 'console_scripts', 'sleap-train')())
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 2039, in main
    trainer.train()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 953, in train
    self.evaluate()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 961, in evaluate
    sleap.nn.evals.evaluate_model(
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/evals.py", line 744, in evaluate_model
    labels_pr: Labels = predictor.predict(labels_gt, make_labels=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 527, in predict
    self._make_labeled_frames_from_generator(generator, data)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4498, in _make_labeled_frames_from_generator
    for ex in generator:
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 437, in _predict_generator
    ex = process_batch(ex)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 400, in process_batch
    preds = self.inference_model.predict_on_batch(ex, numpy=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 1070, in predict_on_batch
    outs = super().predict_on_batch(data, **kwargs)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 2230, in predict_on_batch
    outputs = self.predict_function(iterator)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file52u9x81g.py", line 15, in tf__predict_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step
    outputs = model.predict_step(data)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
    return self(x, training=False)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_fileq68d239t.py", line 46, in tf__call
    crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filerjuepznq.py", line 39, in tf__call
    crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filez9qgoih8.py", line 42, in tf__crop_bboxes
    image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
ValueError: in user code:

    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1845, in predict_function  *
        return step_function(self, iterator)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step  **
        outputs = model.predict_step(data)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
        return self(x, training=False)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_fileq68d239t.py", line 46, in tf__call
        crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filerjuepznq.py", line 39, in tf__call
        crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filez9qgoih8.py", line 42, in tf__crop_bboxes
        image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]

    ValueError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).
    
    in user code:
    
        File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4142, in call  *
            crop_output = self.centroid_crop(example)
        File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filerjuepznq.py", line 39, in tf__call
            crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filez9qgoih8.py", line 42, in tf__crop_bboxes
            image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
    
        ValueError: Exception encountered when calling layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth).
        
        in user code:
        
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 773, in call  *
                crops = sleap.nn.peak_finding.crop_bboxes(full_imgs, bboxes, crop_sample_inds)
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/peak_finding.py", line 173, in crop_bboxes  *
                image_height = tf.shape(images)[1]
        
            ValueError: Index 1 is not uniform
        
        
        Call arguments received by layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth):
          • example_gt={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
    
    
    Call arguments received by layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel):
      • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
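
For what it's worth, the obvious way to make shape indexing survive a mixed-size batch would be to pad the ragged batch to a dense tensor before taking shapes. A hypothetical sketch only, not necessarily the right fix for SLEAP (padding changes image geometry relative to the stored centroids):

```python
import numpy as np
import tensorflow as tf

big = np.zeros((1024, 1024, 1), dtype=np.uint8)
small = np.zeros((720, 720, 1), dtype=np.uint8)
batch = tf.ragged.stack([big, small])

# Padding to the largest frame yields a dense tensor with uniform dims...
dense = batch.to_tensor()  # shape: (2, 1024, 1024, 1), zero-padded
print(tf.shape(dense)[1])  # 1024

# ...but keypoint/centroid coordinates would still be relative to each
# original frame size, so padding alone is not a complete fix.
```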

UPDATE 9: When I re-run training with the loaded config from my original multiclass model, using labeled frames from just the 1024 x 1024 video and predicting on random frames across both the 1024 x 1024 and 720 x 720 videos, with this config:

Terminal output and config:
Using already trained model for centroid: models/241205_111620.centroid.n=100/training_config.json
Resetting monitor window.
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_114804.multi_class_topdown.n=100/viz/validation.*.png
Start training multi_class_topdown...
['sleap-train', '/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp5__ca1ee/241205_114804_training_job.json', '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp', '--zmq', '--controller_port', '9000', '--publish_port', '65094', '--save_viz']
INFO:sleap.nn.training:Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit
INFO:sleap.nn.training:Training labels file: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp
INFO:sleap.nn.training:Training profile: /var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp5__ca1ee/241205_114804_training_job.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
    "training_job_path": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp5__ca1ee/241205_114804_training_job.json",
    "labels_path": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
    "video_paths": [
        ""
    ],
    "val_labels": null,
    "test_labels": null,
    "base_checkpoint": null,
    "tensorboard": false,
    "save_viz": true,
    "keep_viz": false,
    "zmq": true,
    "publish_port": 65094,
    "controller_port": 9000,
    "run_name": "",
    "prefix": "",
    "suffix": "",
    "cpu": false,
    "first_gpu": false,
    "last_gpu": false,
    "gpu": "auto"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
    "data": {
        "labels": {
            "training_labels": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
            "validation_labels": null,
            "validation_fraction": 0.1,
            "test_labels": null,
            "split_by_inds": false,
            "training_inds": [
                7,
                67,
                62,
                76,
                26,
                88,
                31,
                70,
                25,
                36,
                40,
                1,
                50,
                6,
                58,
                27,
                37,
                59,
                10,
                78,
                17,
                19,
                53,
                98,
                28,
                44,
                69,
                22,
                90,
                80,
                13,
                65,
                38,
                45,
                55,
                3,
                43,
                57,
                89,
                66,
                48,
                51,
                99,
                47,
                92,
                97,
                72,
                4,
                56,
                35,
                8,
                34,
                12,
                49,
                23,
                61,
                85,
                60,
                20,
                91,
                30,
                41,
                75,
                14,
                79,
                42,
                52,
                18,
                32,
                83,
                16,
                5,
                71,
                81,
                93,
                2,
                95,
                21,
                74,
                94,
                54,
                24,
                86,
                68,
                29,
                96,
                11,
                77,
                15,
                9
            ],
            "validation_inds": [
                87,
                73,
                82,
                39,
                33,
                46,
                84,
                0,
                63,
                64
            ],
            "test_inds": null,
            "search_path_hints": [
                "",
                ""
            ],
            "skeletons": []
        },
        "preprocessing": {
            "ensure_rgb": false,
            "ensure_grayscale": false,
            "imagenet_mode": null,
            "input_scaling": 1.0,
            "pad_to_stride": 16,
            "resize_and_pad_to_target": true,
            "target_height": 1024,
            "target_width": 1024
        },
        "instance_cropping": {
            "center_on_part": "abdomen",
            "crop_size": 144,
            "crop_size_detection_padding": 16
        }
    },
    "model": {
        "backbone": {
            "leap": null,
            "unet": {
                "stem_stride": null,
                "max_stride": 16,
                "output_stride": 2,
                "filters": 64,
                "filters_rate": 2.0,
                "middle_block": true,
                "up_interpolate": false,
                "stacks": 1
            },
            "hourglass": null,
            "resnet": null,
            "pretrained_encoder": null
        },
        "heads": {
            "single_instance": null,
            "centroid": null,
            "centered_instance": null,
            "multi_instance": null,
            "multi_class_bottomup": null,
            "multi_class_topdown": {
                "confmaps": {
                    "anchor_part": "abdomen",
                    "part_names": [
                        "head",
                        "thorax",
                        "abdomen",
                        "wingL",
                        "wingR",
                        "forelegL4",
                        "forelegR4",
                        "midlegL4",
                        "midlegR4",
                        "hindlegL4",
                        "hindlegR4",
                        "eyeL",
                        "eyeR"
                    ],
                    "sigma": 5.0,
                    "output_stride": 2,
                    "loss_weight": 1.0,
                    "offset_refinement": false
                },
                "class_vectors": {
                    "classes": [
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "2",
                        "1",
                        "1",
                        "2",
                        "2"
                    ],
                    "num_fc_layers": 3,
                    "num_fc_units": 64,
                    "global_pool": true,
                    "output_stride": 16,
                    "loss_weight": 1.0
                }
            }
        },
        "base_checkpoint": null
    },
    "optimization": {
        "preload_data": true,
        "augmentation_config": {
            "rotate": false,
            "rotation_min_angle": -180.0,
            "rotation_max_angle": 180.0,
            "translate": false,
            "translate_min": -5,
            "translate_max": 5,
            "scale": false,
            "scale_min": 0.9,
            "scale_max": 1.1,
            "uniform_noise": false,
            "uniform_noise_min_val": 0.0,
            "uniform_noise_max_val": 10.0,
            "gaussian_noise": false,
            "gaussian_noise_mean": 5.0,
            "gaussian_noise_stddev": 1.0,
            "contrast": false,
            "contrast_min_gamma": 0.5,
            "contrast_max_gamma": 2.0,
            "brightness": false,
            "brightness_min_val": 0.0,
            "brightness_max_val": 10.0,
            "random_crop": false,
            "random_crop_height": 256,
            "random_crop_width": 256,
            "random_flip": false,
            "flip_horizontal": false
        },
        "online_shuffling": true,
        "shuffle_buffer_size": 128,
        "prefetch": true,
        "batch_size": 8,
        "batches_per_epoch": 200,
        "min_batches_per_epoch": 200,
        "val_batches_per_epoch": 10,
        "min_val_batches_per_epoch": 10,
        "epochs": 100,
        "optimizer": "adam",
        "initial_learning_rate": 0.0001,
        "learning_rate_schedule": {
            "reduce_on_plateau": true,
            "reduction_factor": 0.5,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 5,
            "plateau_cooldown": 3,
            "min_learning_rate": 1e-08
        },
        "hard_keypoint_mining": {
            "online_mining": false,
            "hard_to_easy_ratio": 2.0,
            "min_hard_keypoints": 2,
            "max_hard_keypoints": null,
            "loss_scale": 5.0
        },
        "early_stopping": {
            "stop_training_on_plateau": true,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 10
        }
    },
    "outputs": {
        "save_outputs": true,
        "run_name": "241205_114804.multi_class_topdown.n=100",
        "run_name_prefix": "",
        "run_name_suffix": "",
        "runs_folder": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models",
        "tags": [
            ""
        ],
        "save_visualizations": true,
        "keep_viz_images": false,
        "zip_outputs": false,
        "log_to_csv": true,
        "checkpointing": {
            "initial_model": false,
            "best_model": true,
            "every_epoch": false,
            "latest_model": false,
            "final_model": false
        },
        "tensorboard": {
            "write_logs": false,
            "loss_frequency": "epoch",
            "architecture_graph": false,
            "profile_graph": false,
            "visualizations": true
        },
        "zmq": {
            "subscribe_to_controller": true,
            "controller_address": "tcp://127.0.0.1:9000",
            "controller_polling_timeout": 10,
            "publish_updates": true,
            "publish_address": "tcp://127.0.0.1:65094"
        }
    },
    "name": "",
    "description": "",
    "sleap_version": "1.4.1a2",
    "filename": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp5__ca1ee/241205_114804_training_job.json"
}

I get the error:

ValueError: Index 1 is not uniform
INFO:sleap.nn.training:Finished training loop. [5.3 min]
INFO:sleap.nn.training:Deleting visualization directory: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_115357.multi_class_topdown.n=100/viz
INFO:sleap.nn.training:Saving evaluation metrics to model folder...
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_115357.multi_class_topdown.n=100/viz/validation.*.png
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% ETA: -:--:-- ?
Traceback (most recent call last):
  File "/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-train", line 33, in <module>
    sys.exit(load_entry_point('sleap', 'console_scripts', 'sleap-train')())
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 2039, in main
    trainer.train()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 953, in train
    self.evaluate()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 961, in evaluate
    sleap.nn.evals.evaluate_model(
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/evals.py", line 744, in evaluate_model
    labels_pr: Labels = predictor.predict(labels_gt, make_labels=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 527, in predict
    self._make_labeled_frames_from_generator(generator, data)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4498, in _make_labeled_frames_from_generator
    for ex in generator:
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 437, in _predict_generator
    ex = process_batch(ex)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 400, in process_batch
    preds = self.inference_model.predict_on_batch(ex, numpy=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 1070, in predict_on_batch
    outs = super().predict_on_batch(data, **kwargs)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 2230, in predict_on_batch
    outputs = self.predict_function(iterator)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file1gx_yspn.py", line 15, in tf__predict_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step
    outputs = model.predict_step(data)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
    return self(x, training=False)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_fileg7w3hbh_.py", line 46, in tf__call
    crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filesg8xnffl.py", line 39, in tf__call
    crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filex7dagfnb.py", line 42, in tf__crop_bboxes
    image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
ValueError: in user code:

    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1845, in predict_function  *
        return step_function(self, iterator)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step  **
        outputs = model.predict_step(data)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
        return self(x, training=False)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_fileg7w3hbh_.py", line 46, in tf__call
        crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filesg8xnffl.py", line 39, in tf__call
        crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filex7dagfnb.py", line 42, in tf__crop_bboxes
        image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]

    ValueError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).
    
    in user code:
    
        File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4142, in call  *
            crop_output = self.centroid_crop(example)
        File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filesg8xnffl.py", line 39, in tf__call
            crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filex7dagfnb.py", line 42, in tf__crop_bboxes
            image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
    
        ValueError: Exception encountered when calling layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth).
        
        in user code:
        
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 773, in call  *
                crops = sleap.nn.peak_finding.crop_bboxes(full_imgs, bboxes, crop_sample_inds)
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/peak_finding.py", line 173, in crop_bboxes  *
                image_height = tf.shape(images)[1]
        
            ValueError: Index 1 is not uniform
        
        
        Call arguments received by layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth):
          • example_gt={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
    
    
    Call arguments received by layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel):
      • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
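
This is the same stack as before, just reached through the AutoGraph-transformed code. As an illustration only (not the actual fix in SLEAP), one could make the height lookup well defined by zero-padding a ragged batch to its bounding shape before `crop_bboxes` reads its shape; `densify_image_batch` below is a hypothetical helper:

```python
import tensorflow as tf

def densify_image_batch(images):
    """Hypothetical helper: zero-pad a possibly-ragged image batch to its
    bounding shape so that tf.shape(images)[1] is well defined."""
    if isinstance(images, tf.RaggedTensor):
        # to_tensor() pads every frame (bottom/right) with zeros up to the
        # batch's bounding shape, here [2, 4, 4, 1].
        images = images.to_tensor()
    return images

frames = [tf.zeros([4, 4, 1], tf.uint8), tf.zeros([3, 3, 1], tf.uint8)]
images = densify_image_batch(tf.ragged.stack(frames))
image_height = tf.shape(images)[1]  # now a plain scalar tensor: 4
```

Padding keeps the centroid bounding boxes valid, since each frame is padded at the bottom/right, but the underlying question remains why the evaluation pipeline hands `CentroidCropGroundTruth` a ragged batch in the first place.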

UPDATE 10: When I re-run training with the loaded config from my original multiclass model, using labeled frames from just the 1024 x 1024 video, with no predictions anywhere in the project, and predicting on random frames across both the 1024 x 1024 and 720 x 720 videos, with this config:

Terminal output and config:
Using already trained model for centroid: models/241205_111620.centroid.n=100/training_config.json
Resetting monitor window.
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_120500.multi_class_topdown.n=100/viz/validation.*.png
Start training multi_class_topdown...
['sleap-train', '/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpwmyhcpvf/241205_120500_training_job.json', '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp', '--zmq', '--controller_port', '9000', '--publish_port', '65337', '--save_viz']
INFO:sleap.nn.training:Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit
INFO:sleap.nn.training:Training labels file: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp
INFO:sleap.nn.training:Training profile: /var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpwmyhcpvf/241205_120500_training_job.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
    "training_job_path": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpwmyhcpvf/241205_120500_training_job.json",
    "labels_path": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
    "video_paths": [
        ""
    ],
    "val_labels": null,
    "test_labels": null,
    "base_checkpoint": null,
    "tensorboard": false,
    "save_viz": true,
    "keep_viz": false,
    "zmq": true,
    "publish_port": 65337,
    "controller_port": 9000,
    "run_name": "",
    "prefix": "",
    "suffix": "",
    "cpu": false,
    "first_gpu": false,
    "last_gpu": false,
    "gpu": "auto"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
    "data": {
        "labels": {
            "training_labels": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
            "validation_labels": null,
            "validation_fraction": 0.1,
            "test_labels": null,
            "split_by_inds": false,
            "training_inds": [
                73,
                80,
                63,
                98,
                70,
                14,
                78,
                61,
                28,
                99,
                35,
                9,
                62,
                13,
                56,
                17,
                86,
                37,
                43,
                30,
                68,
                19,
                0,
                7,
                29,
                92,
                58,
                15,
                89,
                23,
                45,
                57,
                3,
                27,
                36,
                26,
                1,
                22,
                49,
                31,
                24,
                51,
                4,
                94,
                40,
                48,
                59,
                64,
                93,
                76,
                38,
                6,
                5,
                88,
                65,
                39,
                85,
                82,
                66,
                81,
                87,
                21,
                55,
                97,
                79,
                60,
                34,
                83,
                91,
                12,
                90,
                41,
                44,
                33,
                77,
                53,
                20,
                25,
                50,
                16,
                11,
                42,
                10,
                2,
                95,
                84,
                96,
                47,
                71,
                75
            ],
            "validation_inds": [
                52,
                46,
                67,
                69,
                18,
                8,
                74,
                72,
                54,
                32
            ],
            "test_inds": null,
            "search_path_hints": [
                "",
                "",
                "",
                ""
            ],
            "skeletons": []
        },
        "preprocessing": {
            "ensure_rgb": false,
            "ensure_grayscale": false,
            "imagenet_mode": null,
            "input_scaling": 1.0,
            "pad_to_stride": 16,
            "resize_and_pad_to_target": true,
            "target_height": 1024,
            "target_width": 1024
        },
        "instance_cropping": {
            "center_on_part": "abdomen",
            "crop_size": 144,
            "crop_size_detection_padding": 16
        }
    },
    "model": {
        "backbone": {
            "leap": null,
            "unet": {
                "stem_stride": null,
                "max_stride": 16,
                "output_stride": 2,
                "filters": 64,
                "filters_rate": 2.0,
                "middle_block": true,
                "up_interpolate": false,
                "stacks": 1
            },
            "hourglass": null,
            "resnet": null,
            "pretrained_encoder": null
        },
        "heads": {
            "single_instance": null,
            "centroid": null,
            "centered_instance": null,
            "multi_instance": null,
            "multi_class_bottomup": null,
            "multi_class_topdown": {
                "confmaps": {
                    "anchor_part": "abdomen",
                    "part_names": [
                        "head",
                        "thorax",
                        "abdomen",
                        "wingL",
                        "wingR",
                        "forelegL4",
                        "forelegR4",
                        "midlegL4",
                        "midlegR4",
                        "hindlegL4",
                        "hindlegR4",
                        "eyeL",
                        "eyeR"
                    ],
                    "sigma": 5.0,
                    "output_stride": 2,
                    "loss_weight": 1.0,
                    "offset_refinement": false
                },
                "class_vectors": {
                    "classes": [
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "1",
                        "2",
                        "2",
                        "1",
                        "1",
                        "2",
                        "2"
                    ],
                    "num_fc_layers": 3,
                    "num_fc_units": 64,
                    "global_pool": true,
                    "output_stride": 16,
                    "loss_weight": 1.0
                }
            }
        },
        "base_checkpoint": null
    },
    "optimization": {
        "preload_data": true,
        "augmentation_config": {
            "rotate": false,
            "rotation_min_angle": -180.0,
            "rotation_max_angle": 180.0,
            "translate": false,
            "translate_min": -5,
            "translate_max": 5,
            "scale": false,
            "scale_min": 0.9,
            "scale_max": 1.1,
            "uniform_noise": false,
            "uniform_noise_min_val": 0.0,
            "uniform_noise_max_val": 10.0,
            "gaussian_noise": false,
            "gaussian_noise_mean": 5.0,
            "gaussian_noise_stddev": 1.0,
            "contrast": false,
            "contrast_min_gamma": 0.5,
            "contrast_max_gamma": 2.0,
            "brightness": false,
            "brightness_min_val": 0.0,
            "brightness_max_val": 10.0,
            "random_crop": false,
            "random_crop_height": 256,
            "random_crop_width": 256,
            "random_flip": false,
            "flip_horizontal": false
        },
        "online_shuffling": true,
        "shuffle_buffer_size": 128,
        "prefetch": true,
        "batch_size": 32,
        "batches_per_epoch": 200,
        "min_batches_per_epoch": 200,
        "val_batches_per_epoch": 10,
        "min_val_batches_per_epoch": 10,
        "epochs": 2,
        "optimizer": "adam",
        "initial_learning_rate": 0.0001,
        "learning_rate_schedule": {
            "reduce_on_plateau": true,
            "reduction_factor": 0.5,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 5,
            "plateau_cooldown": 3,
            "min_learning_rate": 1e-08
        },
        "hard_keypoint_mining": {
            "online_mining": false,
            "hard_to_easy_ratio": 2.0,
            "min_hard_keypoints": 2,
            "max_hard_keypoints": null,
            "loss_scale": 5.0
        },
        "early_stopping": {
            "stop_training_on_plateau": true,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 10
        }
    },
    "outputs": {
        "save_outputs": true,
        "run_name": "241205_120500.multi_class_topdown.n=100",
        "run_name_prefix": "",
        "run_name_suffix": "",
        "runs_folder": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models",
        "tags": [
            ""
        ],
        "save_visualizations": true,
        "keep_viz_images": false,
        "zip_outputs": false,
        "log_to_csv": true,
        "checkpointing": {
            "initial_model": false,
            "best_model": true,
            "every_epoch": false,
            "latest_model": false,
            "final_model": false
        },
        "tensorboard": {
            "write_logs": false,
            "loss_frequency": "epoch",
            "architecture_graph": false,
            "profile_graph": false,
            "visualizations": true
        },
        "zmq": {
            "subscribe_to_controller": true,
            "controller_address": "tcp://127.0.0.1:9000",
            "controller_polling_timeout": 10,
            "publish_updates": true,
            "publish_address": "tcp://127.0.0.1:65337"
        }
    },
    "name": "",
    "description": "",
    "sleap_version": "1.4.1a2",
    "filename": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmpwmyhcpvf/241205_120500_training_job.json"
}

I get the error:

ValueError: Index 1 is not uniform
INFO:sleap.nn.training:Finished training loop. [5.0 min]
INFO:sleap.nn.training:Deleting visualization directory: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_120500.multi_class_topdown.n=100/viz
INFO:sleap.nn.training:Saving evaluation metrics to model folder...
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_120500.multi_class_topdown.n=100/viz/validation.*.png
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% ETA: -:--:-- ?
Traceback (most recent call last):
  File "/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-train", line 33, in <module>
    sys.exit(load_entry_point('sleap', 'console_scripts', 'sleap-train')())
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 2039, in main
    trainer.train()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 953, in train
    self.evaluate()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 961, in evaluate
    sleap.nn.evals.evaluate_model(
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/evals.py", line 744, in evaluate_model
    labels_pr: Labels = predictor.predict(labels_gt, make_labels=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 527, in predict
    self._make_labeled_frames_from_generator(generator, data)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4498, in _make_labeled_frames_from_generator
    for ex in generator:
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 437, in _predict_generator
    ex = process_batch(ex)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 400, in process_batch
    preds = self.inference_model.predict_on_batch(ex, numpy=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 1070, in predict_on_batch
    outs = super().predict_on_batch(data, **kwargs)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 2230, in predict_on_batch
    outputs = self.predict_function(iterator)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filei1dpo351.py", line 15, in tf__predict_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step
    outputs = model.predict_step(data)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
    return self(x, training=False)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filetqjrposf.py", line 46, in tf__call
    crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filedlhw3lt5.py", line 39, in tf__call
    crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file64ia9w2d.py", line 42, in tf__crop_bboxes
    image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
ValueError: in user code:

    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1845, in predict_function  *
        return step_function(self, iterator)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step  **
        outputs = model.predict_step(data)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
        return self(x, training=False)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filetqjrposf.py", line 46, in tf__call
        crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filedlhw3lt5.py", line 39, in tf__call
        crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file64ia9w2d.py", line 42, in tf__crop_bboxes
        image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]

    ValueError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).
    
    in user code:
    
        File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4142, in call  *
            crop_output = self.centroid_crop(example)
        File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_filedlhw3lt5.py", line 39, in tf__call
            crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file64ia9w2d.py", line 42, in tf__crop_bboxes
            image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
    
        ValueError: Exception encountered when calling layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth).
        
        in user code:
        
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 773, in call  *
                crops = sleap.nn.peak_finding.crop_bboxes(full_imgs, bboxes, crop_sample_inds)
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/peak_finding.py", line 173, in crop_bboxes  *
                image_height = tf.shape(images)[1]
        
            ValueError: Index 1 is not uniform
        
        
        Call arguments received by layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth):
          • example_gt={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
    
    
    Call arguments received by layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel):
      • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
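
Removing the stored predictions made no difference, which fits the ragged batch coming from the videos themselves: as long as the project mixes 1024 x 1024 and 720 x 720 videos, ground-truth frames of two sizes can be batched together at evaluation time. A quick check of the per-video frame shapes (a sketch against the public `sleap` API, using the project file from the logs above):

```python
import sleap

# Sketch: list each video's frame shape in the project. Mixed heights and
# widths here are what make the batched image tensor ragged at eval time.
labels = sleap.load_file(
    "/Users/liezlmaree/Projects/sleap-datasets/"
    "drosophila-melanogaster-courtship/2004.v004.slp"
)
for video in labels.videos:
    # Video.shape is (n_frames, height, width, channels).
    print(video.filename, video.shape)
```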

UPDATE 11: When I re-run training with the loaded config from my original multiclass model, using labeled frames from just the 1024 x 1024 video, with no predictions anywhere in the project and all unused tracks deleted, and predicting on random frames across both the 1024 x 1024 and 720 x 720 videos, with this config:

Terminal output and config:
Using already trained model for centroid: models/241205_111620.centroid.n=100/training_config.json
Resetting monitor window.
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_121559.multi_class_topdown.n=100/viz/validation.*.png
Start training multi_class_topdown...
['sleap-train', '/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmplo1rplsj/241205_121600_training_job.json', '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp', '--zmq', '--controller_port', '9000', '--publish_port', '65393', '--save_viz']
INFO:sleap.nn.training:Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit
INFO:sleap.nn.training:Training labels file: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp
INFO:sleap.nn.training:Training profile: /var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmplo1rplsj/241205_121600_training_job.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
    "training_job_path": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmplo1rplsj/241205_121600_training_job.json",
    "labels_path": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
    "video_paths": [
        ""
    ],
    "val_labels": null,
    "test_labels": null,
    "base_checkpoint": null,
    "tensorboard": false,
    "save_viz": true,
    "keep_viz": false,
    "zmq": true,
    "publish_port": 65393,
    "controller_port": 9000,
    "run_name": "",
    "prefix": "",
    "suffix": "",
    "cpu": false,
    "first_gpu": false,
    "last_gpu": false,
    "gpu": "auto"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
    "data": {
        "labels": {
            "training_labels": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
            "validation_labels": null,
            "validation_fraction": 0.1,
            "test_labels": null,
            "split_by_inds": false,
            "training_inds": [
                8,
                28,
                12,
                75,
                95,
                33,
                11,
                50,
                3,
                18,
                64,
                49,
                89,
                2,
                88,
                34,
                38,
                14,
                85,
                93,
                22,
                92,
                56,
                6,
                27,
                7,
                68,
                77,
                87,
                43,
                91,
                45,
                0,
                59,
                15,
                71,
                60,
                1,
                54,
                70,
                36,
                83,
                82,
                4,
                39,
                96,
                90,
                78,
                53,
                17,
                86,
                51,
                29,
                55,
                66,
                61,
                76,
                13,
                26,
                67,
                74,
                31,
                62,
                63,
                84,
                80,
                94,
                42,
                40,
                81,
                99,
                41,
                48,
                98,
                5,
                47,
                58,
                57,
                72,
                37,
                10,
                97,
                44,
                21,
                79,
                52,
                19,
                65,
                20,
                25
            ],
            "validation_inds": [
                46,
                24,
                16,
                69,
                73,
                9,
                35,
                32,
                23,
                30
            ],
            "test_inds": null,
            "search_path_hints": [
                "",
                "",
                "",
                "",
                ""
            ],
            "skeletons": []
        },
        "preprocessing": {
            "ensure_rgb": false,
            "ensure_grayscale": false,
            "imagenet_mode": null,
            "input_scaling": 1.0,
            "pad_to_stride": 16,
            "resize_and_pad_to_target": true,
            "target_height": 1024,
            "target_width": 1024
        },
        "instance_cropping": {
            "center_on_part": "abdomen",
            "crop_size": 144,
            "crop_size_detection_padding": 16
        }
    },
    "model": {
        "backbone": {
            "leap": null,
            "unet": {
                "stem_stride": null,
                "max_stride": 16,
                "output_stride": 2,
                "filters": 64,
                "filters_rate": 2.0,
                "middle_block": true,
                "up_interpolate": false,
                "stacks": 1
            },
            "hourglass": null,
            "resnet": null,
            "pretrained_encoder": null
        },
        "heads": {
            "single_instance": null,
            "centroid": null,
            "centered_instance": null,
            "multi_instance": null,
            "multi_class_bottomup": null,
            "multi_class_topdown": {
                "confmaps": {
                    "anchor_part": "abdomen",
                    "part_names": [
                        "head",
                        "thorax",
                        "abdomen",
                        "wingL",
                        "wingR",
                        "forelegL4",
                        "forelegR4",
                        "midlegL4",
                        "midlegR4",
                        "hindlegL4",
                        "hindlegR4",
                        "eyeL",
                        "eyeR"
                    ],
                    "sigma": 5.0,
                    "output_stride": 2,
                    "loss_weight": 1.0,
                    "offset_refinement": false
                },
                "class_vectors": {
                    "classes": [
                        "1",
                        "2"
                    ],
                    "num_fc_layers": 3,
                    "num_fc_units": 64,
                    "global_pool": true,
                    "output_stride": 16,
                    "loss_weight": 1.0
                }
            }
        },
        "base_checkpoint": null
    },
    "optimization": {
        "preload_data": true,
        "augmentation_config": {
            "rotate": false,
            "rotation_min_angle": -180.0,
            "rotation_max_angle": 180.0,
            "translate": false,
            "translate_min": -5,
            "translate_max": 5,
            "scale": false,
            "scale_min": 0.9,
            "scale_max": 1.1,
            "uniform_noise": false,
            "uniform_noise_min_val": 0.0,
            "uniform_noise_max_val": 10.0,
            "gaussian_noise": false,
            "gaussian_noise_mean": 5.0,
            "gaussian_noise_stddev": 1.0,
            "contrast": false,
            "contrast_min_gamma": 0.5,
            "contrast_max_gamma": 2.0,
            "brightness": false,
            "brightness_min_val": 0.0,
            "brightness_max_val": 10.0,
            "random_crop": false,
            "random_crop_height": 256,
            "random_crop_width": 256,
            "random_flip": false,
            "flip_horizontal": false
        },
        "online_shuffling": true,
        "shuffle_buffer_size": 128,
        "prefetch": true,
        "batch_size": 32,
        "batches_per_epoch": 200,
        "min_batches_per_epoch": 200,
        "val_batches_per_epoch": 10,
        "min_val_batches_per_epoch": 10,
        "epochs": 2,
        "optimizer": "adam",
        "initial_learning_rate": 0.0001,
        "learning_rate_schedule": {
            "reduce_on_plateau": true,
            "reduction_factor": 0.5,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 5,
            "plateau_cooldown": 3,
            "min_learning_rate": 1e-08
        },
        "hard_keypoint_mining": {
            "online_mining": false,
            "hard_to_easy_ratio": 2.0,
            "min_hard_keypoints": 2,
            "max_hard_keypoints": null,
            "loss_scale": 5.0
        },
        "early_stopping": {
            "stop_training_on_plateau": true,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 10
        }
    },
    "outputs": {
        "save_outputs": true,
        "run_name": "241205_121559.multi_class_topdown.n=100",
        "run_name_prefix": "",
        "run_name_suffix": "",
        "runs_folder": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models",
        "tags": [
            ""
        ],
        "save_visualizations": true,
        "keep_viz_images": false,
        "zip_outputs": false,
        "log_to_csv": true,
        "checkpointing": {
            "initial_model": false,
            "best_model": true,
            "every_epoch": false,
            "latest_model": false,
            "final_model": false
        },
        "tensorboard": {
            "write_logs": false,
            "loss_frequency": "epoch",
            "architecture_graph": false,
            "profile_graph": false,
            "visualizations": true
        },
        "zmq": {
            "subscribe_to_controller": true,
            "controller_address": "tcp://127.0.0.1:9000",
            "controller_polling_timeout": 10,
            "publish_updates": true,
            "publish_address": "tcp://127.0.0.1:65393"
        }
    },
    "name": "",
    "description": "",
    "sleap_version": "1.4.1a2",
    "filename": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmplo1rplsj/241205_121600_training_job.json"
}

I get the error:

ValueError: Index 1 is not uniform
INFO:sleap.nn.training:Finished training loop. [5.0 min]
INFO:sleap.nn.training:Deleting visualization directory: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_121559.multi_class_topdown.n=100/viz
INFO:sleap.nn.training:Saving evaluation metrics to model folder...
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% ETA: -:--:-- ?
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_121559.multi_class_topdown.n=100/viz/validation.*.png
Predicting... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% ETA: -:--:-- ?
Traceback (most recent call last):
  File "/Users/liezlmaree/micromamba/envs/sleap/bin/sleap-train", line 33, in <module>
    sys.exit(load_entry_point('sleap', 'console_scripts', 'sleap-train')())
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 2039, in main
    trainer.train()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 953, in train
    self.evaluate()
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/training.py", line 961, in evaluate
    sleap.nn.evals.evaluate_model(
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/evals.py", line 744, in evaluate_model
    labels_pr: Labels = predictor.predict(labels_gt, make_labels=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 527, in predict
    self._make_labeled_frames_from_generator(generator, data)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4498, in _make_labeled_frames_from_generator
    for ex in generator:
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 437, in _predict_generator
    ex = process_batch(ex)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 400, in process_batch
    preds = self.inference_model.predict_on_batch(ex, numpy=True)
  File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 1070, in predict_on_batch
    outs = super().predict_on_batch(data, **kwargs)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 2230, in predict_on_batch
    outputs = self.predict_function(iterator)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_fileqt_1x3ig.py", line 15, in tf__predict_function
    retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function
    outputs = model.distribute_strategy.run(run_step, args=(data,))
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step
    outputs = model.predict_step(data)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
    return self(x, training=False)
  File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file3f56nm6y.py", line 46, in tf__call
    crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file2ai9n16_.py", line 39, in tf__call
    crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
  File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file_lbqcxgg.py", line 42, in tf__crop_bboxes
    image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
ValueError: in user code:

    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1845, in predict_function  *
        return step_function(self, iterator)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1834, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1823, in run_step  **
        outputs = model.predict_step(data)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/engine/training.py", line 1791, in predict_step
        return self(x, training=False)
    File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file3f56nm6y.py", line 46, in tf__call
        crop_output = ag__.converted_call(ag__.ld(self).centroid_crop, (ag__.ld(example),), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file2ai9n16_.py", line 39, in tf__call
        crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
    File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file_lbqcxgg.py", line 42, in tf__crop_bboxes
        image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]

    ValueError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).
    
    in user code:
    
        File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 4142, in call  *
            crop_output = self.centroid_crop(example)
        File "/Users/liezlmaree/micromamba/envs/sleap/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler  **
            raise e.with_traceback(filtered_tb) from None
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file2ai9n16_.py", line 39, in tf__call
            crops = ag__.converted_call(ag__.ld(sleap).nn.peak_finding.crop_bboxes, (ag__.ld(full_imgs), ag__.ld(bboxes), ag__.ld(crop_sample_inds)), None, fscope)
        File "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/__autograph_generated_file_lbqcxgg.py", line 42, in tf__crop_bboxes
            image_height = ag__.converted_call(ag__.ld(tf).shape, (ag__.ld(images),), None, fscope)[1]
    
        ValueError: Exception encountered when calling layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth).
        
        in user code:
        
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/inference.py", line 773, in call  *
                crops = sleap.nn.peak_finding.crop_bboxes(full_imgs, bboxes, crop_sample_inds)
            File "/Users/liezlmaree/Projects/sleap/sleap/nn/peak_finding.py", line 173, in crop_bboxes  *
                image_height = tf.shape(images)[1]
        
            ValueError: Index 1 is not uniform
        
        
        Call arguments received by layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth):
          • example_gt={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
    
    
    Call arguments received by layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel):
      • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
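
For context, a minimal sketch (toy tensors, not SLEAP code) of why `tf.shape(images)[1]` fails here: once a batch mixes frames of different sizes, the images arrive as a `tf.RaggedTensor` whose height dimension is ragged, and indexing that dimension of its shape raises exactly this error under TF 2.9:

```python
import tensorflow as tf

# Two "frames" of different heights batched together form a ragged tensor.
images = tf.ragged.constant([
    [[0], [1], [2]],  # height 3
    [[3], [4]],       # height 2 -> dimension 1 is ragged
])

print(tf.shape(images)[0])       # batch dimension is uniform: OK
print(images.bounding_shape(1))  # max extent of the ragged dimension: OK
try:
    print(tf.shape(images)[1])   # the ragged dimension has no single size
except ValueError as e:
    print(e)                     # "Index 1 is not uniform"
```

When every video in the batch has the same frame size, the images stay a regular dense tensor and `tf.shape(images)[1]` is well defined, which is consistent with the runs below succeeding once only 1024 x 1024 videos are present.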

UPDATE 12: When I re-run training using the loaded config from my original multiclass model (labeled frames from just the 1024 x 1024 video, no predictions anywhere, unused tracks deleted, the 720 x 270 video removed) and predict on suggested frames using this config:

terminal config
Using already trained model for centroid: models/241205_111620.centroid.n=100/training_config.json
Resetting monitor window.
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_122521.multi_class_topdown.n=100/viz/validation.*.png
Start training multi_class_topdown...
['sleap-train', '/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmphp7_svop/241205_122521_training_job.json', '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp', '--zmq', '--controller_port', '9000', '--publish_port', '65444', '--save_viz']
INFO:sleap.nn.training:Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit
INFO:sleap.nn.training:Training labels file: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp
INFO:sleap.nn.training:Training profile: /var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmphp7_svop/241205_122521_training_job.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
    "training_job_path": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmphp7_svop/241205_122521_training_job.json",
    "labels_path": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
    "video_paths": [
        ""
    ],
    "val_labels": null,
    "test_labels": null,
    "base_checkpoint": null,
    "tensorboard": false,
    "save_viz": true,
    "keep_viz": false,
    "zmq": true,
    "publish_port": 65444,
    "controller_port": 9000,
    "run_name": "",
    "prefix": "",
    "suffix": "",
    "cpu": false,
    "first_gpu": false,
    "last_gpu": false,
    "gpu": "auto"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
    "data": {
        "labels": {
            "training_labels": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
            "validation_labels": null,
            "validation_fraction": 0.1,
            "test_labels": null,
            "split_by_inds": false,
            "training_inds": [
                47,
                85,
                99,
                76,
                82,
                65,
                37,
                33,
                74,
                30,
                49,
                93,
                78,
                2,
                41,
                59,
                56,
                77,
                63,
                75,
                80,
                46,
                67,
                68,
                25,
                9,
                79,
                45,
                21,
                12,
                62,
                23,
                72,
                19,
                69,
                92,
                83,
                14,
                10,
                36,
                66,
                35,
                11,
                22,
                40,
                96,
                64,
                55,
                60,
                70,
                8,
                17,
                3,
                90,
                5,
                52,
                51,
                18,
                94,
                28,
                48,
                89,
                27,
                42,
                6,
                54,
                24,
                15,
                86,
                87,
                34,
                20,
                73,
                91,
                26,
                43,
                16,
                57,
                81,
                88,
                4,
                53,
                50,
                7,
                71,
                84,
                29,
                1,
                39,
                98
            ],
            "validation_inds": [
                44,
                97,
                13,
                31,
                61,
                38,
                95,
                32,
                0,
                58
            ],
            "test_inds": null,
            "search_path_hints": [
                "",
                "",
                "",
                "",
                "",
                ""
            ],
            "skeletons": []
        },
        "preprocessing": {
            "ensure_rgb": false,
            "ensure_grayscale": false,
            "imagenet_mode": null,
            "input_scaling": 1.0,
            "pad_to_stride": 16,
            "resize_and_pad_to_target": true,
            "target_height": 1024,
            "target_width": 1024
        },
        "instance_cropping": {
            "center_on_part": "abdomen",
            "crop_size": 144,
            "crop_size_detection_padding": 16
        }
    },
    "model": {
        "backbone": {
            "leap": null,
            "unet": {
                "stem_stride": null,
                "max_stride": 16,
                "output_stride": 2,
                "filters": 64,
                "filters_rate": 2.0,
                "middle_block": true,
                "up_interpolate": false,
                "stacks": 1
            },
            "hourglass": null,
            "resnet": null,
            "pretrained_encoder": null
        },
        "heads": {
            "single_instance": null,
            "centroid": null,
            "centered_instance": null,
            "multi_instance": null,
            "multi_class_bottomup": null,
            "multi_class_topdown": {
                "confmaps": {
                    "anchor_part": "abdomen",
                    "part_names": [
                        "head",
                        "thorax",
                        "abdomen",
                        "wingL",
                        "wingR",
                        "forelegL4",
                        "forelegR4",
                        "midlegL4",
                        "midlegR4",
                        "hindlegL4",
                        "hindlegR4",
                        "eyeL",
                        "eyeR"
                    ],
                    "sigma": 5.0,
                    "output_stride": 2,
                    "loss_weight": 1.0,
                    "offset_refinement": false
                },
                "class_vectors": {
                    "classes": [
                        "1",
                        "2"
                    ],
                    "num_fc_layers": 3,
                    "num_fc_units": 64,
                    "global_pool": true,
                    "output_stride": 16,
                    "loss_weight": 1.0
                }
            }
        },
        "base_checkpoint": null
    },
    "optimization": {
        "preload_data": true,
        "augmentation_config": {
            "rotate": false,
            "rotation_min_angle": -180.0,
            "rotation_max_angle": 180.0,
            "translate": false,
            "translate_min": -5,
            "translate_max": 5,
            "scale": false,
            "scale_min": 0.9,
            "scale_max": 1.1,
            "uniform_noise": false,
            "uniform_noise_min_val": 0.0,
            "uniform_noise_max_val": 10.0,
            "gaussian_noise": false,
            "gaussian_noise_mean": 5.0,
            "gaussian_noise_stddev": 1.0,
            "contrast": false,
            "contrast_min_gamma": 0.5,
            "contrast_max_gamma": 2.0,
            "brightness": false,
            "brightness_min_val": 0.0,
            "brightness_max_val": 10.0,
            "random_crop": false,
            "random_crop_height": 256,
            "random_crop_width": 256,
            "random_flip": false,
            "flip_horizontal": false
        },
        "online_shuffling": true,
        "shuffle_buffer_size": 128,
        "prefetch": true,
        "batch_size": 32,
        "batches_per_epoch": 200,
        "min_batches_per_epoch": 200,
        "val_batches_per_epoch": 10,
        "min_val_batches_per_epoch": 10,
        "epochs": 2,
        "optimizer": "adam",
        "initial_learning_rate": 0.0001,
        "learning_rate_schedule": {
            "reduce_on_plateau": true,
            "reduction_factor": 0.5,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 5,
            "plateau_cooldown": 3,
            "min_learning_rate": 1e-08
        },
        "hard_keypoint_mining": {
            "online_mining": false,
            "hard_to_easy_ratio": 2.0,
            "min_hard_keypoints": 2,
            "max_hard_keypoints": null,
            "loss_scale": 5.0
        },
        "early_stopping": {
            "stop_training_on_plateau": true,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 10
        }
    },
    "outputs": {
        "save_outputs": true,
        "run_name": "241205_122521.multi_class_topdown.n=100",
        "run_name_prefix": "",
        "run_name_suffix": "",
        "runs_folder": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models",
        "tags": [
            ""
        ],
        "save_visualizations": true,
        "keep_viz_images": false,
        "zip_outputs": false,
        "log_to_csv": true,
        "checkpointing": {
            "initial_model": false,
            "best_model": true,
            "every_epoch": false,
            "latest_model": false,
            "final_model": false
        },
        "tensorboard": {
            "write_logs": false,
            "loss_frequency": "epoch",
            "architecture_graph": false,
            "profile_graph": false,
            "visualizations": true
        },
        "zmq": {
            "subscribe_to_controller": true,
            "controller_address": "tcp://127.0.0.1:9000",
            "controller_polling_timeout": 10,
            "publish_updates": true,
            "publish_address": "tcp://127.0.0.1:65444"
        }
    },
    "name": "",
    "description": "",
    "sleap_version": "1.4.1a2",
    "filename": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmphp7_svop/241205_122521_training_job.json"
}

I get a success!

Predicted frames: 20/20
Finished training multi_class_topdown.
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --only-suggested-frames -m models/241205_111620.centroid.n=100/training_config.json -m /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_122521.multi_class_topdown.n=100 --controller_port 9000 --publish_port 65444 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_123045.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 12:30:51.981540
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
2024-12-05 12:30:52.198601: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 12:30:52.198796: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   'models': [
│   │   'models/241205_111620.centroid.n=100/training_config.json',
│   │   '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_122521.multi_class_topdown.n=100'
│   ],
│   'frames': '',
│   'only_labeled_frames': False,
│   'only_suggested_frames': True,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_123045.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
│   'tracking.tracker': None,
2024-12-05 12:30:53.383260: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.max_tracking': None,
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,
│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}

INFO:sleap.nn.inference:Failed to query GPU memory from nvidia-smi. Defaulting to first GPU.
Metal device set to: Apple M2 Pro
2024-12-05 12:30:55.631559: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2024-12-05 12:30:55.700485: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -55 } dim { size: -56 } dim { size: -57 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -15 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" model: "0" num_cores: 10 environment { key: "cpu_instruction_set" value: "ARM NEON" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 16384 l2_cache_size: 524288 l3_cache_size: 524288 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: -58 } dim { size: -59 } dim { size: 1 } } }
2024-12-05 12:30:55.700694: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_UINT8 } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_UINT8 shape { dim { size: 4 } dim { size: 1024 } dim { size: 1024 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -15 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" model: "0" num_cores: 10 environment { key: "cpu_instruction_set" value: "ARM NEON" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 16384 l2_cache_size: 524288 l3_cache_size: 524288 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: -66 } dim { size: -67 } dim { size: 1 } } }
2024-12-05 12:30:55.703654: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -105 } dim { size: -106 } dim { size: -107 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -26 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -26 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" model: "0" num_cores: 10 environment { key: "cpu_instruction_set" value: "ARM NEON" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 16384 l2_cache_size: 524288 l3_cache_size: 524288 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -26 } dim { size: -109 } dim { size: -110 } dim { size: 1 } } }
Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit

System:
GPUs: 1/1 available
  Device: /physical_device:GPU:0
         Available: True
       Initialized: False
     Memory growth: True

Finished inference at: 2024-12-05 12:30:57.287649
Total runtime: 5.306118011474609 secs
Predicted frames: 20/20
Provenance:
{
│   'model_paths': [
│   │   'models/241205_111620.centroid.n=100/training_config.json',
│   │   '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_122521.multi_class_topdown.n=100/training_config.json'
│   ],
│   'predictor': 'TopDownMultiClassPredictor',
│   'sleap_version': '1.4.1a2',
Process return code: 0

UPDATE 13: When I re-run training using the loaded config from my original multiclass model (labeled frames from just the 1024 x 1024 video, plus another 1024 x 1024 video added) and predict on suggested frames from the initial 1024 x 1024 video only, using this config:

terminal config
Using already trained model for centroid: models/241205_111620.centroid.n=100/training_config.json
Resetting monitor window.
Polling: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_140858.multi_class_topdown.n=100/viz/validation.*.png
Start training multi_class_topdown...
['sleap-train', '/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp65bu85aj/241205_140859_training_job.json', '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp', '--zmq', '--controller_port', '9000', '--publish_port', '49369', '--save_viz']
INFO:sleap.nn.training:Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit
INFO:sleap.nn.training:Training labels file: /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp
INFO:sleap.nn.training:Training profile: /var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp65bu85aj/241205_140859_training_job.json
INFO:sleap.nn.training:
INFO:sleap.nn.training:Arguments:
INFO:sleap.nn.training:{
    "training_job_path": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp65bu85aj/241205_140859_training_job.json",
    "labels_path": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
    "video_paths": [
        ""
    ],
    "val_labels": null,
    "test_labels": null,
    "base_checkpoint": null,
    "tensorboard": false,
    "save_viz": true,
    "keep_viz": false,
    "zmq": true,
    "publish_port": 49369,
    "controller_port": 9000,
    "run_name": "",
    "prefix": "",
    "suffix": "",
    "cpu": false,
    "first_gpu": false,
    "last_gpu": false,
    "gpu": "auto"
}
INFO:sleap.nn.training:
INFO:sleap.nn.training:Training job:
INFO:sleap.nn.training:{
    "data": {
        "labels": {
            "training_labels": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp",
            "validation_labels": null,
            "validation_fraction": 0.1,
            "test_labels": null,
            "split_by_inds": false,
            "training_inds": [
                34,
                38,
                52,
                66,
                48,
                28,
                87,
                44,
                90,
                75,
                6,
                22,
                72,
                39,
                24,
                71,
                77,
                32,
                1,
                18,
                27,
                0,
                31,
                40,
                3,
                33,
                73,
                94,
                78,
                20,
                21,
                11,
                30,
                13,
                79,
                99,
                36,
                63,
                29,
                4,
                12,
                49,
                55,
                35,
                5,
                23,
                14,
                80,
                60,
                91,
                19,
                42,
                69,
                65,
                64,
                9,
                8,
                2,
                10,
                83,
                81,
                97,
                98,
                54,
                7,
                85,
                82,
                43,
                37,
                17,
                26,
                95,
                86,
                67,
                16,
                47,
                76,
                88,
                92,
                57,
                53,
                58,
                51,
                46,
                59,
                56,
                84,
                74,
                62,
                61
            ],
            "validation_inds": [
                70,
                96,
                15,
                50,
                93,
                25,
                89,
                68,
                41,
                45
            ],
            "test_inds": null,
            "search_path_hints": [
                "",
                "",
                "",
                "",
                "",
                "",
                ""
            ],
            "skeletons": []
        },
        "preprocessing": {
            "ensure_rgb": false,
            "ensure_grayscale": false,
            "imagenet_mode": null,
            "input_scaling": 1.0,
            "pad_to_stride": 16,
            "resize_and_pad_to_target": true,
            "target_height": 1024,
            "target_width": 1024
        },
        "instance_cropping": {
            "center_on_part": "abdomen",
            "crop_size": 144,
            "crop_size_detection_padding": 16
        }
    },
    "model": {
        "backbone": {
            "leap": null,
            "unet": {
                "stem_stride": null,
                "max_stride": 16,
                "output_stride": 2,
                "filters": 64,
                "filters_rate": 2.0,
                "middle_block": true,
                "up_interpolate": false,
                "stacks": 1
            },
            "hourglass": null,
            "resnet": null,
            "pretrained_encoder": null
        },
        "heads": {
            "single_instance": null,
            "centroid": null,
            "centered_instance": null,
            "multi_instance": null,
            "multi_class_bottomup": null,
            "multi_class_topdown": {
                "confmaps": {
                    "anchor_part": "abdomen",
                    "part_names": [
                        "head",
                        "thorax",
                        "abdomen",
                        "wingL",
                        "wingR",
                        "forelegL4",
                        "forelegR4",
                        "midlegL4",
                        "midlegR4",
                        "hindlegL4",
                        "hindlegR4",
                        "eyeL",
                        "eyeR"
                    ],
                    "sigma": 5.0,
                    "output_stride": 2,
                    "loss_weight": 1.0,
                    "offset_refinement": false
                },
                "class_vectors": {
                    "classes": [
                        "1",
                        "2",
                        "1",
                        "2"
                    ],
                    "num_fc_layers": 3,
                    "num_fc_units": 64,
                    "global_pool": true,
                    "output_stride": 16,
                    "loss_weight": 1.0
                }
            }
        },
        "base_checkpoint": null
    },
    "optimization": {
        "preload_data": true,
        "augmentation_config": {
            "rotate": false,
            "rotation_min_angle": -180.0,
            "rotation_max_angle": 180.0,
            "translate": false,
            "translate_min": -5,
            "translate_max": 5,
            "scale": false,
            "scale_min": 0.9,
            "scale_max": 1.1,
            "uniform_noise": false,
            "uniform_noise_min_val": 0.0,
            "uniform_noise_max_val": 10.0,
            "gaussian_noise": false,
            "gaussian_noise_mean": 5.0,
            "gaussian_noise_stddev": 1.0,
            "contrast": false,
            "contrast_min_gamma": 0.5,
            "contrast_max_gamma": 2.0,
            "brightness": false,
            "brightness_min_val": 0.0,
            "brightness_max_val": 10.0,
            "random_crop": false,
            "random_crop_height": 256,
            "random_crop_width": 256,
            "random_flip": false,
            "flip_horizontal": false
        },
        "online_shuffling": true,
        "shuffle_buffer_size": 128,
        "prefetch": true,
        "batch_size": 32,
        "batches_per_epoch": 200,
        "min_batches_per_epoch": 200,
        "val_batches_per_epoch": 10,
        "min_val_batches_per_epoch": 10,
        "epochs": 2,
        "optimizer": "adam",
        "initial_learning_rate": 0.0001,
        "learning_rate_schedule": {
            "reduce_on_plateau": true,
            "reduction_factor": 0.5,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 5,
            "plateau_cooldown": 3,
            "min_learning_rate": 1e-08
        },
        "hard_keypoint_mining": {
            "online_mining": false,
            "hard_to_easy_ratio": 2.0,
            "min_hard_keypoints": 2,
            "max_hard_keypoints": null,
            "loss_scale": 5.0
        },
        "early_stopping": {
            "stop_training_on_plateau": true,
            "plateau_min_delta": 1e-06,
            "plateau_patience": 10
        }
    },
    "outputs": {
        "save_outputs": true,
        "run_name": "241205_140858.multi_class_topdown.n=100",
        "run_name_prefix": "",
        "run_name_suffix": "",
        "runs_folder": "/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models",
        "tags": [
            ""
        ],
        "save_visualizations": true,
        "keep_viz_images": false,
        "zip_outputs": false,
        "log_to_csv": true,
        "checkpointing": {
            "initial_model": false,
            "best_model": true,
            "every_epoch": false,
            "latest_model": false,
            "final_model": false
        },
        "tensorboard": {
            "write_logs": false,
            "loss_frequency": "epoch",
            "architecture_graph": false,
            "profile_graph": false,
            "visualizations": true
        },
        "zmq": {
            "subscribe_to_controller": true,
            "controller_address": "tcp://127.0.0.1:9000",
            "controller_polling_timeout": 10,
            "publish_updates": true,
            "publish_address": "tcp://127.0.0.1:49369"
        }
    },
    "name": "",
    "description": "",
    "sleap_version": "1.4.1a2",
    "filename": "/var/folders/64/rjln6zpx7tlgwf8cqgvhm7fr0000gn/T/tmp65bu85aj/241205_140859_training_job.json"
}

I get a success!

terminal config
Finished training multi_class_topdown.
Command line call:
sleap-track /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp --only-suggested-frames -m models/241205_111620.centroid.n=100/training_config.json -m /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_140858.multi_class_topdown.n=100 --controller_port 9000 --publish_port 49369 -o /Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_141422.predictions.slp --verbosity json --no-empty-frames

Started inference at: 2024-12-05 14:14:28.611854
Args:
{
│   'data_path': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/2004.v004.slp',
│   'models': [
2024-12-05 14:14:28.834916: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-12-05 14:14:28.835068: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
│   │   'models/241205_111620.centroid.n=100/training_config.json',
│   │   '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_140858.multi_class_topdown.n=100'
│   ],
│   'frames': '',
│   'only_labeled_frames': False,
│   'only_suggested_frames': True,
│   'output': '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/predictions/2004.v004.slp.241205_141422.predictions.slp',
│   'no_empty_frames': True,
│   'verbosity': 'json',
│   'video.dataset': None,
│   'video.input_format': 'channels_last',
│   'video.index': '',
│   'cpu': False,
│   'first_gpu': False,
│   'last_gpu': False,
│   'gpu': 'auto',
│   'max_edge_length_ratio': 0.25,
│   'dist_penalty_weight': 1.0,
│   'batch_size': 4,
│   'open_in_gui': False,
│   'peak_threshold': 0.2,
│   'max_instances': None,
│   'tracking.tracker': None,
2024-12-05 14:14:30.029107: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
│   'tracking.max_tracking': None,
│   'tracking.max_tracks': None,
│   'tracking.target_instance_count': None,
│   'tracking.pre_cull_to_target': None,
│   'tracking.pre_cull_iou_threshold': None,
│   'tracking.post_connect_single_breaks': None,
│   'tracking.clean_instance_count': None,
│   'tracking.clean_iou_threshold': None,
│   'tracking.similarity': None,
│   'tracking.match': None,
│   'tracking.robust': None,
│   'tracking.track_window': None,
│   'tracking.min_new_track_points': None,
│   'tracking.min_match_points': None,
│   'tracking.img_scale': None,
│   'tracking.of_window_size': None,
│   'tracking.of_max_levels': None,
│   'tracking.save_shifted_instances': None,
│   'tracking.kf_node_indices': None,
│   'tracking.kf_init_frame_count': None,
│   'tracking.oks_errors': None,
│   'tracking.oks_score_weighting': None,
│   'tracking.oks_normalization': None
}

INFO:sleap.nn.inference:Failed to query GPU memory from nvidia-smi. Defaulting to first GPU.
Metal device set to: Apple M2 Pro
2024-12-05 14:14:32.276929: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.
2024-12-05 14:14:32.351012: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -55 } dim { size: -56 } dim { size: -57 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -15 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" model: "0" num_cores: 10 environment { key: "cpu_instruction_set" value: "ARM NEON" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 16384 l2_cache_size: 524288 l3_cache_size: 524288 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: -58 } dim { size: -59 } dim { size: 1 } } }
2024-12-05 14:14:32.351302: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_UINT8 } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_UINT8 shape { dim { size: 4 } dim { size: 1024 } dim { size: 1024 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -15 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" model: "0" num_cores: 10 environment { key: "cpu_instruction_set" value: "ARM NEON" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 16384 l2_cache_size: 524288 l3_cache_size: 524288 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -15 } dim { size: -66 } dim { size: -67 } dim { size: 1 } } }
2024-12-05 14:14:32.354519: W tensorflow/core/grappler/costs/op_level_cost_estimator.cc:690] Error in PredictCost() for the op: op: "CropAndResize" attr { key: "T" value { type: DT_FLOAT } } attr { key: "extrapolation_value" value { f: 0 } } attr { key: "method" value { s: "bilinear" } } inputs { dtype: DT_FLOAT shape { dim { size: -105 } dim { size: -106 } dim { size: -107 } dim { size: 1 } } } inputs { dtype: DT_FLOAT shape { dim { size: -26 } dim { size: 4 } } } inputs { dtype: DT_INT32 shape { dim { size: -26 } } } inputs { dtype: DT_INT32 shape { dim { size: 2 } } } device { type: "CPU" model: "0" num_cores: 10 environment { key: "cpu_instruction_set" value: "ARM NEON" } environment { key: "eigen" value: "3.4.90" } l1_cache_size: 16384 l2_cache_size: 524288 l3_cache_size: 524288 memory_size: 268435456 } outputs { dtype: DT_FLOAT shape { dim { size: -26 } dim { size: -109 } dim { size: -110 } dim { size: 1 } } }
Versions:
SLEAP: 1.4.1a2
TensorFlow: 2.9.2
Numpy: 1.24.4
Python: 3.9.20
OS: macOS-13.5-arm64-arm-64bit

System:
GPUs: 1/1 available
  Device: /physical_device:GPU:0
         Available: True
       Initialized: False
     Memory growth: True

Finished inference at: 2024-12-05 14:14:33.726798
Total runtime: 5.114958763122559 secs
Predicted frames: 20/20
Provenance:
{
│   'model_paths': [
│   │   'models/241205_111620.centroid.n=100/training_config.json',
│   │   '/Users/liezlmaree/Projects/sleap-datasets/drosophila-melanogaster-courtship/models/241205_140858.multi_class_topdown.n=100/training_config.json'
│   ],
│   'predictor': 'TopDownMultiClassPredictor',
│   'sleap_version': '1.4.1a2',
Process return code: 0

Summary

From the above updates, it seems that the problem is indeed the differently sized videos, and it only becomes a problem during the inference run immediately after training.
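
One way to sidestep this after training, sketched below under the assumption that the `sleap.load_model` / `Predictor.predict` Python API accepts a single video, is to run inference one video at a time so that each batch only ever contains frames of one size (model and project paths here are the ones from this thread):

```python
import sleap

# Sketch: predict per video so every batch has a single frame size.
predictor = sleap.load_model([
    "models/241205_111620.centroid.n=100",
    "models/241205_140858.multi_class_topdown.n=100",
])
labels = sleap.load_file("2004.v004.slp")

# Predicting on one Video at a time never mixes frame sizes in a batch.
predictions = [predictor.predict(video) for video in labels.videos]
```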

Thanks, Liezl

roomrys, Dec 05 '24 17:12

Hi @Lauraschwarz,

I am still working on recreating the bug during training; however, from the above updates (namely UPDATE 3), I have found that inference alone does not like to be run on the suggested frames list (UPDATE C: when there are differently sized videos present).

Based on the results we have right now, namely:

  1. "the model however is available to select when i want to run inference",
  2. "Running inference also does not crash either and i get predictions.",
  3. UPDATE 3
  4. "_predict_frames": "suggested frames (1539 total frames)",

it sounds like prediction while in the actual training loop is doing fine, but the inference run after training finishes (determined by the "Predict On" dropdown in the Training GUI) results in an error. More specifically, the inference error occurs only when "Predict On" is set to "suggested frames".

~Are you able to verify/test this by running inferences using the model with~

  1. ~"Predict On" set to "suggested frames" (expecting an error) and also running inference with~
  2. ~"Predict On" set to anything but "suggested frames" (expecting no error). If that holds for inference only, then we can try the~
  3. ~same experiment in the training pipeline.~

UPDATE A: Although the above holds for inference only, we still get errors during training followed by inference (see UPDATE 9). UPDATE B: From UPDATE 11 and UPDATE 13, we confirm that the error only occurs when videos of different sizes are present. The original hypothesis that the error lay in predicting on suggested frames may be because the suggested-frames option takes the slp file as the dataset (which mixes together all videos in the project), whereas the other options take a single video as the dataset? To be confirmed... (a quick way to check a project for mixed sizes is sketched below).
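
As a quick check for mixed frame sizes before kicking off training, something like this sketch should work, assuming `Video.shape` is `(frames, height, width, channels)` as in the SLEAP Python API (the .slp path is the one from this thread):

```python
import sleap

# Sketch: report each video's frame size in a project.
labels = sleap.load_file("2004.v004.slp")
sizes = {video.filename: tuple(video.shape[1:3]) for video in labels.videos}

for filename, (height, width) in sizes.items():
    print(f"{filename}: {height} x {width}")

if len(set(sizes.values())) > 1:
    print("Mixed frame sizes: suggested-frames inference will batch them together.")
```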

Thanks, Liezl

roomrys, Dec 05 '24 19:12

Hi Liezl!

Absolute legend!!!!! Thank you so much for this heap of troubleshooting! This is super useful and also helps me understand what might be going on. UPDATE 9 seems like it just really does not like the difference in frame size, if I get that correctly?

I also noticed that in the training loop that crashes, when I look at the models folder, it is missing the labels_pr.train.slp and labels_val_train.slp files... (I hope this also helps.)

When I set "Predict On" to selected frames from my new videos (1100x1000 instead of 1030x1100) I get this error. I will re-run trying to predict on only the old videos... will update on whether that works. UPDATE: it still crashes even if I set "Predict On" to nothing.

Tomorrow I will also go and crop my new videos so that they are the same size, to see whether that makes the errors go away... will update as well ASAP. (I am not super good at these things but I will try to be swift.)

About the results we have got so far:

  1. the model is available but is also missing two files (labels_pr.train.slp and labels_val_train.slp)
  2. the predictions I get are, I suspect, not updated with further model training, if that makes any sense at all (even with more labelling for one specific set of frames, predicting on those frames does not get better or even change with further training... weird)
  3. I shall also run without "Predict On" set to anything and see what happens...

Thank you again for this mega effort! This is super helpful and I am sure we will find the problem soon enough :)

Good night! And more updates tomorrow :)

Lauraschwarz, Dec 05 '24 21:12

FINAL (??) UPDATE: I fixed it on my end!!!!

For me it seems to have been the difference in aspect ratio. I just went and adjusted my videos to the original size (rotating 90 degrees and adding 30 px of black padding to the side did the trick) and it fixed the crashing! I now have fully functional training again!
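
For anyone hitting the same thing, roughly what that per-frame transform looks like as an OpenCV sketch (the paths, the assumption that sizes are width x height, and the padding side are guesses from this thread, not a vetted script):

```python
import cv2

# Rough sketch: rotate each frame 90 degrees and pad with black pixels so the
# new 1100 x 1000 videos match the original 1030 x 1100 frame size.
cap = cv2.VideoCapture("new_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter(
    "new_video_padded.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (1030, 1100)
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)  # 1100x1000 -> 1000x1100
    # Pad 30 px of black on one side so the width goes from 1000 to 1030.
    frame = cv2.copyMakeBorder(frame, 0, 0, 0, 30, cv2.BORDER_CONSTANT, value=(0, 0, 0))
    out.write(frame)

cap.release()
out.release()
```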

I really did not expect changes in the image size (video size) to be such a problem that training crashes. Maybe this is something to be handled in future releases?

SUMMARY:

  • Training was crashing when I re-trained my model after adding images that were a different height and width (1100x1000 instead of 1030x1100)
  • Training only failed after the training loop had finished, during the inference-only prediction step, while the "unfinished" model was still giving predictions
  • Rotating and adding 30 px of black padding completely abolished this behaviour, and now everything is working well again.

@roomrys THANK YOU for all your help figuring this out and fixing this. Your help is greatly appreciated!

Lauraschwarz, Dec 06 '24 15:12