ProjE
TensorFlow version
Hi,
Which versions of TensorFlow and Python are used in this project?
Thanks.
Hi, I think it is Python 3 and an older version of TensorFlow, maybe 1.3.
Okay. And can the command be this? ./ProjE_softmax.py --dim 5 --batch 5 --data ./data/FB15k/ --eval_per 1 --worker 5 --eval_batch 5 --max_iter 5 --generator 5
This should be fine.
You could start with the command I shared in the README; that should work.
It takes too long, so I thought I'd use smaller values. How many CPU cores are ideally needed? And do I need a GPU?
You don't need a GPU -- I ran this code on a 48-core machine. If you don't have that many cores, you can reduce the number of generators, because each of them is a thread. I would suggest you enlarge the batch size, set max_iter to 1, and reduce generator to 1.
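For example, a reduced-resource run along those lines could look like the following (the flag names come from the commands in this thread; the specific values are only illustrative, not a tuned configuration):
./ProjE_softmax.py --dim 200 --batch 512 --data ./data/FB15k/ --eval_per 1 --worker 2 --eval_batch 500 --max_iter 1 --generator 1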
Which version of NumPy should I use?
Sorry, I don't recall. Something released before 2017 would be fine.
What is the final output of ProjE_softmax.py supposed to be? The given input is a knowledge graph (FB15k).
It's just the HITS and MR scores printed on the screen.
What about the completed knowledge graph?
No, there is no such output.
But the paper's title is about knowledge graph completion, right?
Hey,
Could you explain in a few lines what exactly you are trying to do in the code?
Thanks, Trisha
Hi, could you tell me which lines have the variables that store the entity embeddings in the ProjE_softmax.py file?
I'm out in the woods and don't have access to a computer... it should be named something like entities, relations, etc.
Are they TensorFlow variables or normal variables?
self.__ent_embedding = tf.get_variable("ent_embedding", [self.__n_entity, embed_dim], initializer=tf.random_uniform_initializer(minval=-bound, maxval=bound, seed=345))
Is it this? (Line 187)
Yes!
How do I read this? When I do print(self.__ent_embedding), it prints this --> <tf.Variable 'ent_embedding:0' shape=(14951, 1) dtype=float32_ref> Tensor("ProjE_1/TopKV2_1:1", shape=(?, 14951), dtype=int32, device=/device:CPU:*)
You will need to session.run it. Printing it directly just prints out the TensorFlow definition.
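Roughly something like this (a minimal sketch: it assumes you call it from inside the model or training code once the variables have been initialized, and that the live tf.Session is named session; outside the class, the double-underscore attribute is name-mangled to _<ClassName>__ent_embedding):
# after tf.global_variables_initializer() has run, or after training:
ent_emb = session.run(self.__ent_embedding)  # returns a numpy array of shape (n_entity, embed_dim)
print(ent_emb.shape, ent_emb[0])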
I've tried doing it, but it didn't work. Do you have a link I can follow? It would be really helpful.
It's basic TensorFlow operations. It should return a NumPy object. I don't have any tutorial on hand, unfortunately.
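If reaching the private attribute is awkward, one alternative sketch is to look the variable up by the name shown in your printout ("ent_embedding:0") and save the result to disk; this assumes the script already imports tensorflow as tf and numpy as np, and that sess is the active session:
ent_tensor = tf.get_default_graph().get_tensor_by_name("ent_embedding:0")
ent_emb = sess.run(ent_tensor)          # numpy array, shape (n_entity, embed_dim)
np.save("ent_embedding.npy", ent_emb)   # reload later with np.load("ent_embedding.npy")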
Okay, I'll try searching it. Thanks.
Hi,
So I ran the code on a smaller dataset, with 8 cores and 20 GB RAM, 1 generator and 2 workers. But the code doesn't end at all. It shows the evaluation metrics for the last iteration and then just hangs there. Am I missing anything?
It may just take a very long time to finish, I guess. I have never experienced such a thing before.
./ProjE_softmax.py --dim 200 --batch 200 --data ./data/FB15k/ --eval_per 10 --worker 3 --eval_batch 500 --max_iter 10 --generator 1
This is the command I ran; I modified eval_per to 10 and max_iter to 10.
How long would it take? The dataset has ~2.5k triples.