OpenLRM
What resources are needed to train from scratch?
Hi,
Could you please share which GPUs were used, how many of them, and roughly how many hours were required to train these models?
Hi,
Please refer to this issue: https://github.com/3DTopia/OpenLRM/issues/2#issuecomment-1882590904. There isn't much difference for the V1.1 models.
Please also note that you can use fewer resources, e.g. 8 A100 GPUs, by configuring the gradient accumulation steps in the config file.
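For context, gradient accumulation keeps the same effective batch size on fewer GPUs by running several backward passes before each optimizer step. Below is a minimal, generic PyTorch sketch of the idea, not OpenLRM's actual training code; the model, data, and `accum_steps` value are placeholders for illustration:

```python
import torch
from torch import nn

# Placeholder model and data, only to demonstrate the accumulation pattern.
model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Hypothetical value: e.g. dropping from 32 GPUs to 8 while raising
# accumulation from 1 to 4 keeps the effective batch size unchanged.
accum_steps = 4

data = [(torch.randn(2, 16), torch.randn(2, 4)) for _ in range(8)]

optimizer.zero_grad()
for step, (x, y) in enumerate(data, start=1):
    loss = loss_fn(model(x), y) / accum_steps  # average over micro-batches
    loss.backward()                            # gradients accumulate in .grad
    if step % accum_steps == 0:
        optimizer.step()                       # one update per accum_steps
        optimizer.zero_grad()
```

The trade-off is wall-clock time: with fewer GPUs, each effective batch takes proportionally more steps to process.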