VisDrone
RAM out of memory
After I call train_visdrone.py, training starts, but RAM fills up after a short time and an exception is thrown. In train_visdrone.py:
model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=20, layers='heads')
Presumably that is because of the large dataset. Which option can I use to train the model in smaller batches?
Hi! As far as I understand, the batch size already equals 1 (see Configurations: BATCH_SIZE). I'm not 100% sure, but I suspect you run out of RAM because of the size of the model rather than the dataset. I managed to train the heads on a GPU instance with 12 vCPUs and 112 GB of memory; it failed with 56 GB (one epoch took roughly 4 hours).
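A minimal sketch of where these values come from, assuming train_visdrone.py builds on the Matterport Mask R-CNN Config class (the model.train(...) signature suggests it does). The class name LowMemoryConfig is hypothetical; the fields are standard Config attributes whose defaults you can lower to reduce memory per training step:

from mrcnn.config import Config

class LowMemoryConfig(Config):
    # Hypothetical config subclass; adapt to whatever config train_visdrone.py defines.
    NAME = "visdrone_low_mem"

    # BATCH_SIZE is computed internally as IMAGES_PER_GPU * GPU_COUNT,
    # so with these values it is already at its minimum of 1.
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

    # Shrinking the input resolution and the number of sampled ROIs
    # reduces the memory footprint of each training step.
    IMAGE_MIN_DIM = 512
    IMAGE_MAX_DIM = 512
    TRAIN_ROIS_PER_IMAGE = 100

    # Fewer steps per epoch only makes checkpoints more frequent;
    # it does not change per-step memory usage.
    STEPS_PER_EPOCH = 500
    VALIDATION_STEPS = 50

config = LowMemoryConfig()
config.display()

Under this assumption, lowering IMAGE_MAX_DIM is usually the most effective single change, since activation memory grows roughly with the input image area. Fine-tuning the whole model (layers='all') needs noticeably more memory than heads-only training because gradients and optimizer state for the backbone are kept as well.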
I'm wondering how many GB of RAM I need to fine-tune the whole model.