poet
Was the LMO dataset also trained for 50 epochs?
No, we trained PoET on the LM-O dataset for around 150 epochs, since that dataset is much smaller than YCB-V.
Best, Thomas