
Resources | A Question Answering and Reading Comprehension Resource List

Published: 2020-08-07 21:56:00  Source: 野望文存

GitHub repository:


https://github.com/xanhho/Reading-Comprehension-Question-Answering-Papers


Survey Papers


  • Chengchang Zeng et al., A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics, and Benchmark Datasets, arXiv, 2020, paper.

  • Matthew Gardner et al., On Making Reading Comprehension More Comprehensive, aclweb, 2019, paper.

  • Razieh Baradaran, Razieh Ghiasi, and Hossein Amirkhani, A Survey on Machine Reading Comprehension Systems, arXiv, 6 Jan 2020, paper.

  • Shanshan Liu et al., Neural Machine Reading Comprehension: Methods and Trends, arXiv, 2019, paper.

  • Xin Zhang et al., Machine Reading Comprehension: a Literature Review, arXiv, 2019, paper.

  • Boyu Qiu et al., A Survey on Neural Machine Reading Comprehension, arXiv, 2019, paper.

  • Danqi Chen, Neural Reading Comprehension and Beyond, PhD thesis, Stanford University, 2018, paper.


Slides



  • Sebastian Riedel, Reading and Reasoning with Neural Program Interpreters, slides, MRQA 2018.

  • Phil Blunsom, Data driven reading comprehension: successes and limitations, slides, MRQA 2018.

  • Jianfeng Gao, Multi-step reasoning neural networks for question answering, slides, MRQA 2018.

  • Sameer Singh, Questioning Question Answering Answers, slides, MRQA 2018.

Evaluation Papers


  • Diana Galvan, Active Reading Comprehension: A dataset for learning the Question-Answer Relationship strategy, ACL 2019, paper.

  • Divyansh Kaushik and Zachary C. Lipton, How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks, EMNLP 2018, paper.

  • Saku Sugawara et al., What Makes Reading Comprehension Questions Easier?, EMNLP 2018, paper.

  • Pramod K. Mudrakarta et al., Did the Model Understand the Question?, ACL 2018, paper.

  • Robin Jia and Percy Liang, Adversarial Examples for Evaluating Reading Comprehension Systems, EMNLP 2017, paper.

  • Saku Sugawara et al., Evaluation Metrics for Machine Reading Comprehension: Prerequisite Skills and Readability, ACL 2017, paper.

  • Saku Sugawara et al., Prerequisite Skills for Reading Comprehension: Multi-perspective Analysis of MCTest Datasets and Systems, AAAI 2017, paper.

  • Danqi Chen et al., A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task, ACL 2016, paper.

Other Related Papers (data augmentation, transfer learning, etc.)


  • Minghao Hu, Yuxing Peng, Zhen Huang and Dongsheng Li, A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning, EMNLP 2019, paper.

  • Huazheng Wang, Zhe Gan, Xiaodong Liu, Jingjing Liu, Jianfeng Gao and Hongning Wang, Adversarial Domain Adaptation for Machine Reading Comprehension, EMNLP 2019, paper.

  • Yimin Jing, Deyi Xiong and Zhen Yan, BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels, EMNLP 2019, paper.

  • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang and Guoping Hu, Cross-Lingual Machine Reading Comprehension, EMNLP 2019, paper.

  • Todor Mihaylov and Anette Frank, Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension, EMNLP 2019, paper.

  • Kyungjae Lee, Sunghyun Park, Hojae Han, Jinyoung Yeo, Seung-won Hwang and Juho Lee, Learning with Limited Data for Multilingual Reading Comprehension, EMNLP 2019, paper.

  • Qiu Ran, Yankai Lin, Peng Li, Jie Zhou and Zhiyuan Liu, NumNet: Machine Reading Comprehension with Numerical Reasoning, EMNLP 2019, paper.

  • Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang and Guoping Hu, A Span-Extraction Dataset for Chinese Machine Reading Comprehension, EMNLP 2019, paper.

  • Daniel Andor, Luheng He, Kenton Lee and Emily Pitler, Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension, EMNLP 2019, paper.

  • Tsung-Yuan Hsu, Chi-Liang Liu and Hung-yi Lee, Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model, EMNLP 2019, paper.

  • Kyosuke Nishida et al., Multi-style Generative Reading Comprehension, ACL 2019, paper.

  • Alon Talmor and Jonathan Berant, MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension, ACL 2019, paper.

  • Yi Tay et al., Simple and Effective Curriculum Pointer-Generator Networks for Reading Comprehension over Long Narratives, ACL 2019, paper.

  • Haichao Zhu et al., Learning to Ask Unanswerable Questions for Machine Reading Comprehension, ACL 2019, paper.

  • Patrick Lewis et al., Unsupervised Question Answering by Cloze Translation, ACL 2019, paper.

  • Michael Hahn and Frank Keller, Modeling Human Reading with Neural Attention, EMNLP 2016, paper.

  • Jianpeng Cheng et al., Long Short-Term Memory-Networks for Machine Reading, EMNLP 2016, paper.




Reposted from: 数据派THU

Editor: 蔡学森