Abstractive-Summarization-With-Transfer-Learning

Abstractive summarisation using BERT as the encoder and a Transformer decoder

30 issues, sorted by recently updated

How can I generate an abstract quickly? Right now it takes a long time and a lot of CPU resources to compute one. Is there a faster solution?

Hello! I tried your code in a Google Colab and I encountered a problem I wasn't able to solve. During the initialization of the BERT encoder in your ipynb: https://github.com/santhoshkolloju/Abstractive-Summarization-With-Transfer-Learning/blob/master/BERT_SUMM.ipynb...

```python
# Creates segment embeddings for each type of tokens.
segment_embedder = tx.modules.WordEmbedder(
    vocab_size=bert_config.type_vocab_size,
    hparams=bert_config.segment_embed)
segment_embeds = segment_embedder(src_segment_ids)
input_embeds = word_embeds + segment_embeds
```

As per the BERT paper, the input embeddings...
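For reference, the BERT paper defines the model input as the elementwise sum of token, segment, and position embeddings. A minimal NumPy sketch of that sum (all sizes, tables, and IDs below are illustrative, not taken from the repo):

```python
import numpy as np

# Illustrative sizes; BERT-base uses hidden=768 and max 512 positions.
vocab_size, type_vocab_size, max_pos, hidden = 30522, 2, 512, 768
rng = np.random.default_rng(0)
word_table = rng.normal(size=(vocab_size, hidden)).astype(np.float32)
seg_table = rng.normal(size=(type_vocab_size, hidden)).astype(np.float32)
pos_table = rng.normal(size=(max_pos, hidden)).astype(np.float32)

token_ids = np.array([[101, 2023, 2003, 102]])   # e.g. [CLS] this is [SEP]
segment_ids = np.zeros_like(token_ids)           # single-segment input
positions = np.arange(token_ids.shape[1])

# BERT input representation: word + segment + position embeddings.
input_embeds = (word_table[token_ids]
                + seg_table[segment_ids]
                + pos_table[positions])          # shape (1, 4, 768)
```

The snippet quoted above only adds word and segment embeddings, so the question is presumably where the position embeddings enter.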

The network for summarization is not well optimised: the loss stays high and barely decreases during training.
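One frequent culprit when fine-tuning BERT-style models is the learning-rate schedule: without warmup the loss can plateau early. A sketch of the Noam schedule from the Transformer paper (the warmup_steps and d_model values here are assumptions, not the repo's settings):

```python
# Linear warmup followed by inverse-square-root decay ("Noam" schedule).
def noam_lr(step, d_model=768, warmup_steps=4000):
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

for step in (1, 1000, 4000, 40000):
    print(step, noam_lr(step))  # ramps up, peaks near warmup_steps, decays
```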

How did you get the data .txt files? How did you process them? Did you tokenize them?
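For context, a typical pipeline for a BERT-based model WordPiece-tokenizes each article and abstract with BERT's vocabulary. A sketch assuming tokenization.py from the google-research/bert repo is on the path; the vocab path is a placeholder:

```python
import tokenization  # from https://github.com/google-research/bert

# Placeholder path; point it at your downloaded BERT checkpoint's vocab.
tokenizer = tokenization.FullTokenizer(
    vocab_file='uncased_L-12_H-768_A-12/vocab.txt', do_lower_case=True)

text = 'Abstractive summarization with transfer learning.'
tokens = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']
token_ids = tokenizer.convert_tokens_to_ids(tokens)
segment_ids = [0] * len(token_ids)   # single-segment input
```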

@santhoshkolloju When you train this model, do you use BERT embeddings for the abstract as well? That is, do both the article and the abstract go through BERT while training this summary model?

@santhoshkolloju Hi, I'm using your code to train on my own data, but I find that the BLEU score in your code is multiplied by 100, and I am wondering...
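The factor of 100 is likely just a reporting convention: most BLEU implementations return a score in [0, 1], while papers usually report it on a 0-100 scale. A quick illustration with NLTK (not necessarily the scorer this repo uses):

```python
from nltk.translate.bleu_score import corpus_bleu

references = [[['the', 'cat', 'sat', 'on', 'the', 'mat']]]  # one ref list per hypothesis
hypotheses = [['the', 'cat', 'sat', 'on', 'the', 'mat']]
score = corpus_bleu(references, hypotheses)  # returns a value in [0, 1]
print(score, score * 100)                    # 1.0 100.0 for an exact match
```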

File "texar_repo\texar\core\layers.py", line 628, in class _ReducePooling1D(tf.layers.Layer): AttributeError: module 'tensorflow.python.layers.layers' has no attribute 'Layer' what should i do ? thanks