personality-detection

The program is not memory optimized.

Open JafferWilson opened this issue 8 years ago • 21 comments

Hello, I am trying to run your repository. I have tried systems with 16 GB, 32 GB, 40 GB, and 120 GB of RAM. I do not understand why the pre-processing takes so much memory. On the 120 GB machine I came across a memory-related error for the first time; on every other attempt the process was simply killed.

Kindly let me know what configuration you used to run this process. Please add the details of your system, as it would help me run the repository.

JafferWilson avatar Sep 13 '17 07:09 JafferWilson

I increased the RAM to 480 GB, and the pre-processing still gets killed. Is it possible for you to make the pre-processed data available in the repository?

JafferWilson avatar Sep 13 '17 09:09 JafferWilson

Can you please answer my queries? It would really help. Waiting for your reply.

JafferWilson avatar Sep 21 '17 12:09 JafferWilson

I confirm the issue. @JafferWilson did you find a way to make it run?

fievelk avatar Oct 17 '17 13:10 fievelk

@fievelk Yes. I ran the code exactly the way it is shown in the README file.

JafferWilson avatar Oct 18 '17 04:10 JafferWilson

@JafferWilson Sorry, I did not formulate my question correctly. Running the code using the instructions in the README still produces these memory issues and the process gets killed. Did you manage to fix the problem somehow?

fievelk avatar Oct 18 '17 08:10 fievelk

@fievelk Well, no... I do not understand why the process takes so much memory. As I mentioned above, I have described the experiments I did, and I am still empty-handed.

JafferWilson avatar Oct 18 '17 08:10 JafferWilson

@JafferWilson Please use the following code in place of the repository's load_bin_vec(fname, vocab) function. This should resolve the issue.

import numpy as np
import theano

def load_bin_vec(fname, vocab):
    """
    Loads 300x1 word vecs from Google (Mikolov) word2vec, keeping only
    the words that occur in vocab so the full embedding matrix is never
    held in memory at once.
    """
    word_vecs = {}
    with open(fname, "rb") as f:
        header = f.readline()
        vocab_size, layer1_size = map(int, header.split())
        binary_len = np.dtype(theano.config.floatX).itemsize * layer1_size
        for line in xrange(vocab_size):
            # read the word one character at a time, up to the separating space
            word = []
            while True:
                ch = f.read(1)
                if ch == ' ':
                    word = ''.join(word)
                    break
                if ch != '\n':
                    word.append(ch)
            if word in vocab:
                word_vecs[word] = np.fromstring(f.read(binary_len),
                                                dtype=theano.config.floatX)
            else:
                # skip the vector of an out-of-vocabulary word
                f.read(binary_len)
    return word_vecs
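For reference, a minimal usage sketch follows; the vocab contents and the word2vec file path are illustrative placeholders, not values taken from the repository (process_data.py normally builds vocab from the essays corpus and passes it in).

# Hypothetical usage example for the function above.
vocab = {"happy": 12, "sad": 7}                   # placeholder word counts
w2v_file = "GoogleNews-vectors-negative300.bin"   # path to the pretrained vectors

word_vecs = load_bin_vec(w2v_file, vocab)
print "loaded %d of %d vocabulary words" % (len(word_vecs), len(vocab))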

kapardine avatar Oct 31 '17 10:10 kapardine

Dear @JafferWilson, were you able to solve the problem by using this code?

naikzinal avatar Nov 06 '17 06:11 naikzinal

@naikzinal Sure, I will. I just have some other problems to solve first. I will try it as soon as I am free.

JafferWilson avatar Nov 06 '17 07:11 JafferWilson

" name 'load_bin_vec' is not defined" i found that error after changing code can you please help me thank you

naikzinal avatar Nov 06 '17 17:11 naikzinal

Attached is the entire file with the changed code. Please use it and check again; you must have made a naming error.

shivi mishra


kapardine avatar Nov 07 '17 14:11 kapardine

Dear @chaisme, I solved my naming error, but I still have a memory issue, and I did not receive any attachment from you. If you are able to run the code, could you please send me your process_data.py file? Also, what system requirements are needed to run this code? Thank you.

naikzinal avatar Nov 07 '17 15:11 naikzinal

Here is the file, attached in txt format. Please convert it to a Python script. No new system requirements are needed beyond the ones already mentioned in the README. process_data.txt

kapardine avatar Nov 07 '17 18:11 kapardine

Dear @chaisme, thank you for the reply. I will try it as soon as I can. Here is my email id: [email protected]; you can mail me at that address. Thank you.

naikzinal avatar Nov 08 '17 04:11 naikzinal

@naikzinal Why do you want it by email when you can always download it from here? You could download it now and then upload it on your side.

JafferWilson avatar Nov 08 '17 05:11 JafferWilson

@naikzinal @JafferWilson I have uploaded the txt file in the above comment. Use it as a python script.

kapardine avatar Nov 08 '17 08:11 kapardine

Dear @JafferWilson, I had actually changed the code but still had the memory issue; that is why I asked for the file. Now I can run my code.

naikzinal avatar Nov 08 '17 15:11 naikzinal

Initially the process was killed, but it ran perfectly using the code from @chaisme. Thank you very much.

roysoumya avatar Jan 21 '18 14:01 roysoumya

Hi there,

I am trying to run this app and I seem to get stuck at the training phase:

python conv_net_train.py -static -word2vec 2
loading data... data loaded!
model architecture: CNN-static
using: word2vec vectors
[('image shape', 153, 300), ('filter shape', [(200, 1, 1, 300), (200, 1, 2, 300), (200, 1, 3, 300)]), ('hidden_units', [200, 200, 2]), ('dropout', [0.5, 0.5, 0.5]), ('batch_size', 50), ('non_static', False), ('learn_decay', 0.95), ('conv_non_linear', 'relu'), ('non_static', False), ('sqr_norm_lim', 9), ('shuffle_batch', True)]
... training

When I interrupt the kernel I get:

Traceback (most recent call last):
  File "conv_net_train.py", line 476, in <module>
    activations=[Sigmoid])
  File "conv_net_train.py", line 221, in train_conv_net
    cost_epoch = train_model(minibatch_index)
  File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/compile/function_module.py", line 903, in __call__
    self.fn() if output_subset is None else\
  File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 963, in rval
    r = p(n, [x[0] for x in i], o)
  File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 952, in p
    self, node)
  File "theano/scan_module/scan_perform.pyx", line 397, in theano.scan_module.scan_perform.perform (/Users/jennan/.theano/compiledir_Darwin-16.7.0-x86_64-i386-64bit-i386-2.7.15-64/scan_perform/mod.cpp:4490)
  File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 961, in rval
    def rval(p=p, i=node_input_storage, o=node_output_storage, n=node,
KeyboardInterrupt

Any help would be greatly appreciated!!

jennaniven avatar Sep 11 '18 17:09 jennaniven

File "conv_net_train.py", line 147, in train_conv_net
    train_set_x = datasets[0][rand_perm]
MemoryError

Please, someone help; I need the solution as soon as possible.

vivekraghu17 avatar Apr 28 '19 13:04 vivekraghu17
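A possible low-memory workaround for the MemoryError above, sketched under the assumption that datasets[0][rand_perm] is only meant to shuffle the training examples each epoch: permute a small index array and slice one minibatch at a time instead of materializing a fully permuted copy of the whole matrix. The helper name iter_minibatches and the usage names below are illustrative, not part of conv_net_train.py.

import numpy as np

def iter_minibatches(data, batch_size, rng):
    """Yield shuffled minibatches without copying the whole array at once."""
    n = data.shape[0]
    perm = rng.permutation(n)              # small index array, not a data copy
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        yield data[idx]                    # only one minibatch is materialized

# Illustrative usage (train_set and train_model are placeholders):
# rng = np.random.RandomState(3435)
# for minibatch in iter_minibatches(train_set, 50, rng):
#     cost = train_model(minibatch)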

(Quoting @jennaniven's earlier comment about getting stuck at the training phase of conv_net_train.py.)

Same here. If anyone has any recommendation regarding this, it would be highly appreciated.

CyraxSector avatar Mar 27 '20 00:03 CyraxSector