How-to-build-own-text-summarizer-using-deep-learning

In this notebook, we will build an abstractive text summarizer using deep learning, from scratch, in Python using Keras.
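For readers skimming the issues below, here is a minimal sketch of the kind of encoder-decoder model the notebook builds. The layer stack, dimensions, and vocabulary sizes are illustrative assumptions, not the repo's exact code (which also stacks LSTMs and adds a custom attention layer).

```python
from tensorflow.keras.layers import Input, LSTM, Embedding, Dense
from tensorflow.keras.models import Model

# Illustrative hyperparameters -- assumptions, not the repo's exact values.
max_text_len = 80      # length of a padded review
x_voc, y_voc = 8000, 2000
latent_dim, embedding_dim = 300, 100

# Encoder: embeds the review and encodes it with an LSTM.
encoder_inputs = Input(shape=(max_text_len,))
enc_emb = Embedding(x_voc, embedding_dim, trainable=True)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(enc_emb)

# Decoder: consumes the summary written so far plus the encoder's final states.
decoder_inputs = Input(shape=(None,))
dec_emb = Embedding(y_voc, embedding_dim, trainable=True)(decoder_inputs)
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb, initial_state=[state_h, state_c])

# Softmax over the summary vocabulary at every decoder timestep.
decoder_outputs = Dense(y_voc, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy')
model.summary()
```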

Issues (16)

How can I use the model to compute a prediction for a review entered by the user? Any help would be appreciated. Also, how can I overcome the problem of duplicated words in the output summary?
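One way to run inference on a user-entered review is sketched below. It assumes the notebook's fitted x_tokenizer, max_text_len, and decode_sequence() inference helper are already in scope; the function name summarize_review and the sample review are hypothetical.

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Assumes the notebook's fitted x_tokenizer, max_text_len, and
# decode_sequence() inference helper are already defined in the session.
def summarize_review(raw_review):
    # Apply the same text cleaning used on the training reviews before this step.
    seq = x_tokenizer.texts_to_sequences([raw_review])
    # Pad with zeros to the encoder's fixed input length.
    padded = pad_sequences(seq, maxlen=max_text_len, padding='post')
    # Greedy, step-by-step decoding of the summary.
    return decode_sequence(padded)

print(summarize_review("the product arrived late and the packaging was damaged"))
```

As a general note (not specific to this repo), repeated words are a common artifact of greedy decoding; beam search or suppressing a token that was just emitted are typical mitigations.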

Google Colab is taking too much time and crashing.

The link you gave for the attention layer at the start of the repo is not available; please check.

There is a problem in training: the dimensions of decoder_outputs and attention_output do not match.
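For reference, the two tensors have to agree on the batch and time axes and are usually joined along the feature axis; a minimal shape-only sketch, with made-up dimensions:

```python
from tensorflow.keras.layers import Input, Concatenate

# Hypothetical shapes for illustration: both tensors must share the batch
# and time dimensions; only the feature axis may differ.
decoder_outputs = Input(shape=(30, 300))    # (timesteps, latent_dim)
attention_output = Input(shape=(30, 300))   # attention context per decoder step

# Concatenate along the feature axis; a mismatch on the time axis is the
# usual cause of the error described in the issue above.
merged = Concatenate(axis=-1)([decoder_outputs, attention_output])
print(merged.shape)  # (None, 30, 600)
```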

#prepare a tokenizer for reviews on training data
y_tokenizer = Tokenizer(num_words=tot_cnt-cnt)
y_tokenizer.fit_on_texts(list(y_tr))

#convert text sequences into integer sequences
y_tr_seq = y_tokenizer.texts_to_sequences(y_tr)
y_val_seq = y_tokenizer.texts_to_sequences(y_val)

#padding zero upto maximum length
y_tr...
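For context, the truncated excerpt above typically continues with a zero-padding step along these lines; max_summary_len and the variable names are assumptions based on the surrounding snippet, not the issue's exact code.

```python
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Pad every integer sequence with zeros up to the maximum summary length
# so the decoder receives fixed-size inputs.
y_tr = pad_sequences(y_tr_seq, maxlen=max_summary_len, padding='post')
y_val = pad_sequences(y_val_seq, maxlen=max_summary_len, padding='post')
```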

When I am running the model-building code, the following line is showing an error:

# Encoder
encoder_inputs = input(shape=(max_text_len,))

TypeError: raw_input() got an unexpected keyword argument 'shape'
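The error message suggests that the lowercase Python built-in `input` was called instead of the Keras `Input` layer. A minimal sketch of the likely fix, assuming TensorFlow's Keras API and an illustrative `max_text_len`:

```python
from tensorflow.keras.layers import Input

# `Input` (capital I) is the Keras layer; lowercase `input` is the Python
# built-in, which is why the interpreter complains about raw_input().
max_text_len = 80  # illustrative value
encoder_inputs = Input(shape=(max_text_len,))
```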

Can you please explain what is happening where you pass y_tr[:,:-1] in as X to the model?
```
history = model.fit([x_tr, y_tr[:,:-1]],
                    y_tr.reshape(y_tr.shape[0], y_tr.shape[1], 1)[:,1:],
                    epochs=50, callbacks=[es], batch_size=128,
                    validation_data=([x_val, y_val[:,:-1]],
                                     y_val.reshape(y_val.shape[0], y_val.shape[1], 1)[:,1:]))...
```
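A brief note on what that slicing does, with a toy array that is entirely made up: during training the decoder is fed the true summary shifted by one step (teacher forcing), so y_tr[:,:-1] is the decoder input and the target is the same summary shifted left by one position.

```python
import numpy as np

# Toy padded summary: <start>=1, words=5,8,3, <end>=2, padding=0 (made-up ids).
y_tr = np.array([[1, 5, 8, 3, 2, 0]])

decoder_input = y_tr[:, :-1]   # [[1, 5, 8, 3, 2]]  -> what the decoder reads
decoder_target = y_tr[:, 1:]   # [[5, 8, 3, 2, 0]]  -> what it should predict next

# Teacher forcing: at step t the decoder sees the true token t and is trained
# to predict token t+1, which is why the target is the input shifted by one.
print(decoder_input, decoder_target)
```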

Hello, @aravindpai and all, I am facing an issue with

history = model.fit([x_tr, y_tr[:,:-1]],
                    y_tr.reshape(y_tr.shape[0], y_tr.shape[1], 1)[:,1:],
                    epochs=50, callbacks=[es], batch_size=512,
                    validation_data=([x_val, y_val[:,:-1]],
                                     y_val.reshape(y_val.shape[0], y_val.shape[1], 1)[:,1:]))

TypeError: Expected Operation, Variable, or Tensor, got 1.

Can somebody help me? I don't...