
Loading a fairly large model takes a long time

erip opened this issue · 3 comments

I have a 6.5GB model with 100 topics, trained the usual way on 10M docs. I'm trying to load the model and I'm finding that load times are incredibly high. For reference, I've been monitoring top, and my program has only loaded ~5.1GB of the 6.5GB model after 10 minutes.

I suspect this is because I used the default save with full=True... Should I expect a model saved with full=False to load faster?

erip avatar Aug 01 '22 11:08 erip

Hi @erip. Yes, a model saved with the full=True argument keeps all parameters related to the whole training state, so it may take a long time to reload. If you save the model with full=False, you cannot continue training from that model, but it will load faster.

bab2min avatar Aug 07 '22 09:08 bab2min

Thanks very much, @bab2min! It seems like if the model is stored in a binary format, it shouldn't take long to reload. I haven't looked at the details, so sorry for the silly question, but does the model use numpy serialization under the hood? If so, deserialization could be quick even with full=True (though maybe I don't appreciate the complexity here).

erip avatar Aug 07 '22 23:08 erip

@erip Actually, the package doesn't use numpy serialization for loading & saving; it uses custom serialization functions. And it is true that those functions carry many features for backward compatibility, so the process is somewhat inefficient. I'll check whether it can be improved, or re-implement the loading & saving in the near future.
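Why a custom format with compatibility checks loads slower than a bulk numpy-style read can be illustrated with a generic stdlib sketch (this is not tomotopy's actual format, just an assumed shape for the comparison): one approach interprets the whole buffer in a single pass, the other reads and version-checks each record individually.

```python
import io
import struct
import time

N = 100_000

# Bulk approach (numpy-style): the whole float array in one contiguous blob,
# deserialized with a single unpack call.
payload = struct.pack(f"<{N}f", *([0.5] * N))

def load_bulk(buf: bytes) -> tuple:
    # One pass over the entire buffer.
    return struct.unpack(f"<{N}f", buf)

# Field-by-field approach: each value carries a one-byte version tag,
# mimicking a custom format that checks backward compatibility per record.
records = b"".join(struct.pack("<Bf", 1, 0.5) for _ in range(N))

def load_per_field(buf: bytes) -> list:
    stream = io.BytesIO(buf)
    out = []
    for _ in range(N):
        version, value = struct.unpack("<Bf", stream.read(5))
        if version != 1:  # a compatibility branch taken on every record
            raise ValueError("unsupported record version")
        out.append(value)
    return out

t0 = time.perf_counter()
bulk = load_bulk(payload)
t1 = time.perf_counter()
per_field = load_per_field(records)
t2 = time.perf_counter()
print(f"bulk: {t1 - t0:.4f}s, per-field: {t2 - t1:.4f}s")
```

Multiplied across the billions of values in a 6.5GB model, the per-record overhead (a read, an unpack, and a branch per value) plausibly accounts for load times in the minutes rather than seconds.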

bab2min avatar Aug 08 '22 16:08 bab2min