
Continuous Training / Model tuning / Initialize with pre-trained Model

mhmgad opened this issue on Dec 05, 2019 · 2 comments

I want to continue/resume training a pre-trained model on new triples. We may assume that there are no new entities or relations, only new instances (triples).

What is the best way to load the pre-trained model?

Or would it be possible to implement something like the following, where continue_training=True is a flag that skips re-initialization of the embeddings?

import numpy as np
from ampligraph.latent_features import ComplEx

model = ComplEx(batches_count=1, seed=555, epochs=20, k=10)
X = np.array([['a', 'y', 'b'],
              ['b', 'y', 'a'],
              ['a', 'y', 'c'],
              ['c', 'y', 'a'],
              ['a', 'y', 'd'],
              ['c', 'y', 'd'],
              ['b', 'y', 'c'],
              ['f', 'y', 'e']])
model.fit(X)
print(model.get_embeddings(['f','e'], embedding_type='entity'))

X2 = np.array([['a', 'y', 'b'],
              ['c', 'y', 'd'],
              ['b', 'y', 'c'],
              ['f', 'y', 'm'],
              ['f', 'y', 'n'],
              ['f', 'y', 'a'],
              ['f', 'y', 'c'],
              ['f', 'z', 'c']])
model.fit(X2, continue_training=True)  # proposed flag: reuse the current weights instead of re-initializing

mhmgad · Dec 05, 2019

Quoting @lukostaz's answer on Slack about loading a pre-trained model:

"In theory, you could write a customized Initializer to replace the stock Xavier, RandomNormal, etc: http://docs.ampligraph.org/en/1.2.0/ampligraph.latent_features.html#initializers http://docs.ampligraph.org/en/1.2.0/generated/ampligraph.latent_features.Initializer.html#ampligraph.latent_features.Initializer You could inject your pre-trained embeddings to the training loop each time you train the model again - you could even play around with the save/restore utilities: http://docs.ampligraph.org/en/1.2.0/ampligraph.utils.html#saving-restoring-models "

I think this is the clean solution, but it is hard to implement for someone not familiar with the code base, and it may lead to inconsistencies that are hard to discover.

mhmgad · Dec 05, 2019

Hi Luca, what is the status of this enhancement?

Thanks, Shaul

shaulz · Mar 02, 2022