[1] Aone C, Okurowski M E, Gorlinsky J, et al. A trainable summarizer with knowledge acquired from robust NLP techniques[C]. Montreal: The Thirty-sixth Annual Meeting of the Association for Computational Linguistics, 1999.
[2] Lin C Y. Training a selection function for extraction[C]. New York: The Eighth International Conference on Information and Knowledge Management, 1999.
[3] Mihalcea R, Tarau P. TextRank: bringing order into texts[C]. Barcelona: Conference on Empirical Methods in Natural Language Processing, 2004.
[4] Erkan G, Radev D R. LexRank: graph-based lexical centrality as salience in text summarization[J]. Journal of Artificial Intelligence Research, 2004, 22(1): 457-479. doi: 10.1613/jair.1523
[5] Lopyrev K. Generating news headlines with recurrent neural networks[J]. Computer Science, 2015, 2(2): 159-165.
[6] Huang Xinchi. Application of neural networks to automatic generation of Chinese news headlines[J]. Electronic Technology & Software Engineering, 2018(22): 134-135. (in Chinese)
[7] Hou L W, Hu P, Chao B. Abstractive document summarization via neural model with joint attention[C]. Dalian: Proceedings of the Sixth Conference on Natural Language Processing and Chinese Computing, 2017.
[8] Ranzato M A, Chopra S, Auli M, et al. Sequence level training with recurrent neural networks[J/OL]. (2016-05-06)[2019-11-28]. https://arxiv.org/abs/1511.06732.
[9] He Z J, Liu Q, Lin S X. Improving statistical machine translation using lexicalized rule selection[C]. Manchester: The International Conference on Computational Linguistics, 2008.
[10] Duan Yujia, Ju Ting. Effectiveness evaluation of code review comments based on deep learning[J]. Electronic Science and Technology, 2020, 33(1): 39-45. (in Chinese)
[11] Bengio Y, Ducharme R, Vincent P, et al. A neural probabilistic language model[J]. Journal of Machine Learning Research, 2003, 3: 1137-1155.
[12] Miu Guanghan. Sentiment mining and simulation analysis of microblogs based on Word2vec and SVM[J]. Electronic Science and Technology, 2018, 31(5): 81-83. (in Chinese)
[13] Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[C]. Montreal: Advances in Neural Information Processing Systems, 2014.
[14] Hsieh Y L, Liu S H, Chen K Y, et al. Exploiting sequence-to-sequence generation framework for automatic abstractive summarization[C]. Tainan: Proceedings of the Twenty-eighth Conference on Computational Linguistics and Speech Processing, 2016.
[15] Yin Guanghua, Liu Xiaoming, Zhang Lu, et al. Analysis and research on sentiment elements of short texts based on LSTM feature templates[J]. Electronic Science and Technology, 2018, 31(11): 38-41, 46. (in Chinese)
[16] Zeng Hao, Shang Weilai. Development and application technology of Python interface programs[J]. The Science Education Article Collects, 2010(10): 95-97. (in Chinese)
[17] Hu B T, Chen Q C, Zhu F Z. LCSTS: a large scale Chinese short text summarization dataset[C]. Lisbon: Conference on Empirical Methods in Natural Language Processing, 2015.
[18] Lin C Y. ROUGE: a package for automatic evaluation of summaries[C]. Washington D.C.: Proceedings of the Workshop on Text Summarization Branches Out, 2004.
[19] Erkan G, Radev D R. LexPageRank: prestige in multi-document text summarization[C]. Barcelona: Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2004.
[20] Radev D R, Blair-Goldensohn S, Zhang Z. Experiments in single and multi-document summarization using MEAD[C]. New Orleans: Proceedings of the First Document Understanding Conference, 2001.