PeerQA: A Scientific Question Answering Dataset from Peer Reviews

We present PeerQA, a real-world, scientific, document-level Question Answering (QA) dataset. PeerQA questions are sourced from peer reviews, which contain questions that reviewers raised while thoroughly examining the scientific article. Answers have been annotated by the original authors of each paper. The dataset contains 579 QA pairs from 208 academic articles, the majority from ML and NLP, along with a smaller set from other scientific communities such as Geoscience and Public Health. PeerQA supports three tasks critical to developing practical QA systems: evidence retrieval, unanswerable question classification, and answer generation. We provide a detailed analysis of the collected dataset and conduct experiments establishing baseline systems for all three tasks. Our experiments and analyses reveal the need for decontextualization in document-level retrieval: even simple decontextualization approaches consistently improve retrieval performance across architectures. For answer generation, PeerQA serves as a challenging benchmark for long-context modeling, as the papers average 12k tokens in length.
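To make the retrieval task and the decontextualization finding concrete, here is a minimal sketch in Python. It scores a paper's paragraphs against a question with BM25 after prepending the paper title to each paragraph, one simple form of decontextualization. The rank_bm25 package, the example strings, and the whitespace tokenization are illustrative assumptions, not part of the PeerQA release or its actual schema.

# Minimal sketch of document-level evidence retrieval with a simple
# decontextualization step: prefixing each paragraph with the paper title.
# Assumes `pip install rank-bm25`; paragraphs and question are placeholders.
from rank_bm25 import BM25Okapi

paper_title = "PeerQA: A Scientific Question Answering Dataset from Peer Reviews"
paragraphs = [
    "We collect questions from peer reviews of scientific articles ...",
    "Answers are annotated by the original authors of each paper ...",
]

# Decontextualize: index each paragraph together with its paper title, so
# title terms can match questions that refer to the paper as a whole.
contextualized = [f"{paper_title} {p}" for p in paragraphs]
bm25 = BM25Okapi([doc.lower().split() for doc in contextualized])

question = "How were the answers annotated?"
scores = bm25.get_scores(question.lower().split())
best = max(range(len(paragraphs)), key=lambda i: scores[i])
print(paragraphs[best])

The same idea carries over to dense retrievers: the decontextualizing prefix is applied to the passage text before encoding, leaving the retrieval architecture itself unchanged.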

Identifier
Source https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/4467
Metadata Access https://tudatalib.ulb.tu-darmstadt.de/oai/openairedata?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:tudatalib.ulb.tu-darmstadt.de:tudatalib/4467
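The metadata record can also be retrieved programmatically over OAI-PMH. Below is a minimal sketch, assuming the requests package is available; the endpoint, verb, metadata prefix, and identifier are taken verbatim from the Metadata Access URL above.

# Fetch the DataCite metadata record for PeerQA via the repository's
# OAI-PMH endpoint and extract the <metadata> element from the envelope.
import requests
import xml.etree.ElementTree as ET

OAI_URL = "https://tudatalib.ulb.tu-darmstadt.de/oai/openairedata"
params = {
    "verb": "GetRecord",
    "metadataPrefix": "oai_datacite",
    "identifier": "oai:tudatalib.ulb.tu-darmstadt.de:tudatalib/4467",
}

response = requests.get(OAI_URL, params=params, timeout=30)
response.raise_for_status()

# The OAI-PMH envelope uses the standard OAI 2.0 namespace; the DataCite
# record sits inside the <metadata> element.
root = ET.fromstring(response.text)
ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
metadata = root.find(".//oai:metadata", ns)
print(ET.tostring(metadata, encoding="unicode")[:500])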
Provenance
Creator Baumgärtner, Tim; Briscoe, Ted; Gurevych, Iryna
Publisher TU Darmstadt
Contributor Deutsche Forschungsgemeinschaft; TU Darmstadt
Publication Year 2025
Funding Reference Deutsche Forschungsgemeinschaft info:eu-repo/grantAgreement/DFG/GU798/18-3/QASciInf:Automatisc
Rights CC-BY-NC-SA-4.0; info:eu-repo/semantics/openAccess
OpenAccess true
Contact https://tudatalib.ulb.tu-darmstadt.de/page/contact
Representation
Language English
Resource Type Dataset
Format application/zip
Version 1.0
Discipline Other