Figure 3: Attention probabilities during decoding on a SUMPUBMED example; the attention corresponding to the word 'present' in the generated summary is shown.
Related Figures (8)
Figure 1: SUMPUBMED creation pipeline.
Table 1: Average number of sentences and words in the abstract and text in the three SUMPUBMED versions.
Table 3: Pearson's correlation between ROUGE scores and human ratings on SUMPUBMED's noun-phrase version.
Table 5: ROUGE scores on the CNN-DailyMail (CNN-DM) and DUC 2001 (DUC) datasets using seq2seq models.
Table 6: ROUGE scores on the noun-phrase SUMPUBMED version using a seq2seq model with varying decoding steps.
Table 7: Results for TextRank, an extractive summarization approach, on the hybrid version of SUMPUBMED.
Table 8: ROUGE scores on the hybrid version of SUMPUBMED using a hybrid model: TextRank + seq2seq.
Table 9: ROUGE comparison on SUMPUBMED; the seq2seq abstractive methods' target summary is 250 words.