NETL-Automatic-Topic-Labelling-
Out of memory running pre-trained system even with no parallelization
I am running the pre-trained system to get the labels on my computer and it runs out of memory. The machine has 16GB of RAM, of which 4GB are already in use, so 12GB are available exclusively for this purpose.
I saw the Known Issue and applied the steps from the README.md to run each process sequentially instead of in parallel in order to save memory, but it still runs out of memory.
# Parallel version, commented out as the README suggests:
# pool = mp.Pool(processes=cores)
# result = pool.map(get_labels, range(0, len(topic_list)))

# Sequential replacement, one topic at a time:
result = []
for i in range(0, len(topic_list)):
    result.append(get_labels(i))
Does anyone know what the minimum hardware requirements are to make this work? I couldn't find them anywhere. Is there any other way, apart from the one already mentioned, to lower the memory use?
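One idea I had was to memory-map the pre-trained gensim models instead of loading them fully into RAM. This is only a sketch under assumptions: I'm assuming the labels are produced from gensim Doc2Vec/Word2Vec models, and the paths below are placeholders for the actual model files shipped with NETL.

# Sketch only: mmap="r" memory-maps the large numpy arrays inside the
# models so the OS pages them in from disk on demand instead of reading
# them fully into RAM. This only applies to arrays gensim saved as
# separate .npy files. Paths are placeholders.
from gensim.models import Doc2Vec, Word2Vec

doc2vec_model = Doc2Vec.load("pre_trained_models/doc2vec/docvecmodel.d2v", mmap="r")
word2vec_model = Word2Vec.load("pre_trained_models/word2vec/word2vec", mmap="r")

Would something like this make sense here, or does the code need the full arrays in memory anyway?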
Thanks.
Hi @Trujillo94, how did you run the pretrained model? Which gensim version did you use?
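You can print the version with:

python -c "import gensim; print(gensim.__version__)"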