Abstract
Peer review is central to academic publishing, but the growing volume of
submissions is straining the process. This motivates the development of
computational approaches to support peer review. While each review is tailored
to a specific paper, reviewers often make assessments according to certain
aspects such as Novelty, which reflect the values of the research community.
This alignment creates opportunities for standardizing the reviewing process,
improving quality control, and enabling computational support. While prior work
has demonstrated the potential of aspect analysis for peer review assistance,
the notion of aspect remains poorly formalized. Existing approaches often
derive aspects from review forms and guidelines, yet data-driven methods for
aspect identification are underexplored. To address this gap, our work takes a
bottom-up approach: we propose an operational definition of aspect and develop
a data-driven schema for deriving aspects from a corpus of peer reviews. We
introduce a dataset of peer reviews augmented with aspects and show how it can
be used for community-level review analysis. We further show how the choice of
aspects can impact downstream applications, such as LLM-generated review
detection. Our results lay a foundation for a principled and data-driven
investigation of review aspects, and pave the way for new applications of NLP
to support peer review.
Key Contributions
This paper addresses the lack of data-driven methods for identifying aspects in peer reviews by proposing an operational definition of 'aspect' and developing a schema derived directly from a corpus of reviews. This bottom-up approach formalizes the notion of aspect, enabling a more standardized and computationally supported peer review process. The authors also introduce a dataset of reviews augmented with aspects and show how the choice of aspects affects downstream applications such as LLM-generated review detection.
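The paper does not specify a data format here, but community-level analysis over aspect-annotated reviews could proceed along the lines of the minimal sketch below, which assumes a hypothetical record structure where each review carries a venue and a list of aspect labels (all field names and values are illustrative, not the paper's released schema):

```python
from collections import Counter

# Hypothetical aspect-annotated reviews (illustrative only; not the paper's actual dataset format).
reviews = [
    {"venue": "ACL 2023",  "aspects": ["Novelty", "Clarity", "Soundness"]},
    {"venue": "ACL 2023",  "aspects": ["Novelty", "Reproducibility"]},
    {"venue": "ICLR 2024", "aspects": ["Soundness", "Soundness", "Clarity"]},
]

def aspect_distribution(reviews, venue=None):
    """Relative frequency of each aspect, optionally restricted to one venue."""
    counts = Counter(
        aspect
        for review in reviews
        if venue is None or review["venue"] == venue
        for aspect in review["aspects"]
    )
    total = sum(counts.values())
    return {aspect: count / total for aspect, count in counts.items()} if total else {}

# Community-level view: which aspects dominate reviews at a given venue?
print(aspect_distribution(reviews, venue="ACL 2023"))
```

Comparing such distributions across venues or years is one simple way an aspect-annotated corpus could surface community-level reviewing values, in the spirit of the analysis the paper describes.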
Business Value
Automating and standardizing aspects of the peer review process can significantly reduce the burden on editors and reviewers, accelerate publication timelines, and improve the consistency and quality of scientific evaluations.