
Can you upload the file fill_in_blank_1000_from_test_score.pkl?

Open davidsonic opened this issue 5 years ago • 10 comments

davidsonic avatar Sep 28 '19 03:09 davidsonic

+1, thank you for asking this!

mauricerupp avatar Oct 03 '19 11:10 mauricerupp

The code generates the fill-in-the-blank test data by itself; this dataset is no longer needed in the code.

cyente avatar Oct 03 '19 13:10 cyente

The code generates the fill-in-the-blank test data by itself; this dataset is no longer needed in the code.

The relevant function is around line 114 in the file load_data_multimodal.py.
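
Concretely, that presumably means building each fill-in-the-blank question on the fly: blank one item of a test outfit and mix the ground truth with random negatives. A minimal sketch of the idea (illustrative names, not the repo's exact code):

```python
import random

def make_fitb_question(outfit, all_items, num_negatives=3):
    # Blank out one item of the outfit and mix the ground truth with
    # randomly sampled negatives -- the standard Polyvore
    # fill-in-the-blank setup. Names are illustrative only.
    blank_pos = random.randrange(len(outfit))
    answer = outfit[blank_pos]
    question = outfit[:blank_pos] + outfit[blank_pos + 1:]
    negatives = random.sample(
        [i for i in all_items if i not in outfit], num_negatives)
    candidates = [answer] + negatives
    random.shuffle(candidates)
    return question, candidates, candidates.index(answer)
```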

cyente avatar Oct 11 '19 12:10 cyente


Does anyone know how to use load_fitb_data(index, batch_size, outfit_list) in load_data_multimodal.py to produce the variables that replace:

read_file_fill = open('fill_in_blank_1000_from_test_score.pkl', 'rb')
test_image, test_graph, test_size = pickle.load(read_file_fill)

in main_score.py?
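
Something like the sketch below is what I imagine, assuming each call returns one (images, graphs, sizes) batch; that return signature is a guess from the variable names, so please verify against the function around line 114:

```python
from load_data_multimodal import load_fitb_data  # repo module

def collect_fitb_test_set(outfit_list, batch_size=16):
    """Accumulate FITB test batches instead of unpickling
    fill_in_blank_1000_from_test_score.pkl. The per-call return
    shape of load_fitb_data is assumed, not verified."""
    test_image, test_graph, test_size = [], [], []
    for index in range(0, len(outfit_list), batch_size):
        images, graphs, sizes = load_fitb_data(index, batch_size, outfit_list)
        test_image.extend(images)
        test_graph.extend(graphs)
        test_size.extend(sizes)
    return test_image, test_graph, test_size
```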

fuhailin avatar Oct 11 '19 13:10 fuhailin

Did anyone run this code successfully? I have a lot of confusion and still don't know how to generate fill_in_blank_1000_from_test_score.pkl.

kenzoyan avatar Oct 15 '19 02:10 kenzoyan

Yes, I did:

  1. Download the pre-processed dataset from the authors (the one on Google Drive, around 7 GB)
  2. Download the "normal" dataset in order to get the texts for the images (the one on GitHub, only a few MBs)
  3. Change all the folder paths in the files to your corresponding ones
  4. Run "onehot_embedding.py" to create the textual features (the rest of the pre-processing was already done by the authors)
  5. Run "main_multi_modal.py" to train. At the end of the file you can adjust the config of the network (Beta, d, T etc.), so the file "Config.py" is useless here.
  6. If you want to train several instances in the for-loop, you need to reset the graph at the beginning of the training. Just add "tf.reset_default_graph()" at the start of the function "cm_ggnn()"; see the sketch right after this list.
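
For point 6, a minimal sketch of where the reset goes (the cm_ggnn parameter list here is illustrative, not the repo's exact signature):

```python
import tensorflow as tf  # TF 1.x, as the repo uses

def cm_ggnn(beta, d, T):  # parameter list is illustrative
    # Clear the graph left over from the previous loop iteration;
    # without this, variable names collide on the second instance.
    tf.reset_default_graph()
    # ... build the model and run training as in main_multi_modal.py ...

# training several instances in a for-loop, e.g. sweeping Beta
for beta in [0.1, 0.5, 1.0]:
    cm_ggnn(beta, d=16, T=3)
```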

With this setup, I could reproduce the results fairly well with the same accuracy as in the paper.

Cheers

mauricerupp avatar Oct 15 '19 08:10 mauricerupp

@mauricerupp Thank you so much for telling the details. Sorry to ask again: do you know what the complicated loop in the data loading is doing? When I switch to my own dataset, I keep loading the same picture feature into different positions of image_pos[] again and again. (screenshot attached)
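
My current guess is that the loader pads short outfits by repeating an item's feature, roughly like this (names and sizes invented, not the repo's code):

```python
import numpy as np

MAX_ITEMS = 8    # assumed fixed number of node slots per outfit graph
FEAT_DIM = 2048  # assumed Inception feature size

def fill_image_pos(outfit_features):
    # Pad an outfit shorter than MAX_ITEMS by repeating its last
    # feature -- which would put the same picture feature into
    # several positions of image_pos, as observed above.
    image_pos = np.zeros((MAX_ITEMS, FEAT_DIM), dtype=np.float32)
    for i in range(MAX_ITEMS):
        image_pos[i] = outfit_features[min(i, len(outfit_features) - 1)]
    return image_pos
```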

kenzoyan avatar Oct 19 '19 06:10 kenzoyan

Thank you for the reminder about point 6; Config.py is indeed useless now. I think the "normal" dataset you refer to is the images of all outfits in Polyvore, since we provide the preprocessing code from images to feature vectors in use_inception_for_vec.py.

I got these images directly from the outfit information in the original version; in step 3 you can find the URLs of all the images, though I am not sure whether the URLs still work. If you cannot get them from the URLs, please contact me and I will send them to you directly.
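
For reference, fetching the images from those URLs would look roughly like the sketch below; the JSON field names ('items', 'image', 'set_id', 'index') are assumptions, so check the keys in the outfit files you downloaded:

```python
import json
import os
import urllib.request

def download_outfit_images(json_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    with open(json_path) as f:
        outfits = json.load(f)
    for outfit in outfits:
        for item in outfit['items']:  # field names are assumed
            url = item['image']
            fname = os.path.join(
                out_dir, '%s_%s.jpg' % (outfit['set_id'], item['index']))
            try:
                urllib.request.urlretrieve(url, fname)
            except Exception as e:
                print('failed: %s (%s)' % (url, e))  # many old URLs are dead
```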

2020-10-20 15:59:47, "lijian-gcn" [email protected] wrote:


I want to know where you found the "normal" dataset from the second step on GitHub. Thanks!


cyente avatar Oct 20 '20 09:10 cyente

(quoting mauricerupp's steps 1-6 above)

I ran main_multi_modal.py and got a trained model, but I don't know how to use the model for prediction. I want to use the small sample dataset included in the folder NGNN/data/, so which code can I run to test?
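
In TF 1.x, the usual pattern is to rebuild the same graph and restore the checkpoint before running the scoring ops on the NGNN/data/ sample. A rough sketch under that assumption (the build function, checkpoint path, and tensor name are placeholders, not the repo's actual names):

```python
import tensorflow as tf  # TF 1.x, matching the repo

def score_with_trained_model(build_graph, ckpt_path, feed_dict, score_tensor_name):
    """Rebuild the training graph, restore saved weights, and run
    the scoring tensor. build_graph must recreate exactly the same
    variables that main_multi_modal.py created during training."""
    tf.reset_default_graph()
    build_graph()
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, ckpt_path)  # path used by saver.save() in training
        score = tf.get_default_graph().get_tensor_by_name(score_tensor_name)
        return sess.run(score, feed_dict=feed_dict)
```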

surheaven avatar Sep 01 '21 03:09 surheaven