Language Models Surface the Unwritten Code of Science and Society

📄 Abstract

This paper calls on the research community not only to investigate how human biases are inherited by large language models (LLMs), but also to explore how these biases can be leveraged to make society's "unwritten code", such as implicit stereotypes and heuristics, visible and accessible for critique. We introduce a conceptual framework through a case study in science: uncovering the hidden rules of peer review, i.e., the factors reviewers care about but rarely state explicitly because of normative scientific expectations. The framework pushes LLMs to articulate their heuristics by generating self-consistent hypotheses for why one paper in a pair appeared stronger in reviewer scoring, across paired papers submitted to 45 academic conferences, while iteratively searching for deeper hypotheses on the remaining pairs that existing hypotheses cannot explain. We observed that the LLMs' normative priors about the internal characteristics of good science, extracted from their self-talk (e.g., theoretical rigor), were systematically updated toward posteriors emphasizing storytelling about external connections, such as how the work is positioned and connected within and across literatures. Human reviewers tend to explicitly reward aspects that moderately align with the LLMs' normative priors (correlation = 0.49), but avoid articulating the contextualization and storytelling posteriors in their review comments (correlation = -0.14), despite implicitly rewarding them with positive scores. These patterns are robust across different models and out-of-sample judgments. We discuss the broad applicability of the proposed framework, which leverages LLMs as diagnostic tools to amplify and surface the tacit codes underlying human society, enabling public discussion of the revealed values and more precisely targeted responsible AI.
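
The abstract describes an iterative hypothesis-elicitation loop over paper pairs. Below is a minimal sketch of what such a loop could look like; every name here (`PaperPair`, `elicit_hidden_rules`, the prompt wording) is an illustrative assumption, not the authors' implementation, and `ask` stands in for whatever LLM completion call you supply.

```python
# Minimal sketch, assuming a generic `ask(prompt) -> str` LLM call supplied by the caller.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PaperPair:
    stronger: str  # text of the paper that received the higher reviewer score
    weaker: str    # text of the paper that received the lower reviewer score


def explains(ask: Callable[[str], str], hypothesis: str, pair: PaperPair) -> bool:
    """Check whether an already-surfaced hypothesis accounts for this pair."""
    answer = ask(
        f"Hypothesis: {hypothesis}\n"
        f"Paper A (scored higher): {pair.stronger}\n"
        f"Paper B (scored lower): {pair.weaker}\n"
        "Does the hypothesis explain why A outscored B? Answer yes or no."
    )
    return answer.strip().lower().startswith("yes")


def elicit_hidden_rules(
    ask: Callable[[str], str],
    pairs: list[PaperPair],
    max_rounds: int = 10,
) -> list[str]:
    """Iteratively surface hypotheses until no unexplained pairs remain."""
    hypotheses: list[str] = []
    unexplained = list(pairs)
    for _ in range(max_rounds):
        if not unexplained:
            break
        # Ask the model for one new, deeper factor covering the pairs that the
        # existing hypotheses cannot explain (only a few pairs shown per prompt).
        sample = unexplained[:5]
        new_hypothesis = ask(
            "Existing hypotheses: " + ("; ".join(hypotheses) or "none") + "\n"
            "In each pair below, paper A was scored higher than paper B. "
            "Propose one new factor that explains the score gap:\n"
            + "\n".join(f"A: {p.stronger}\nB: {p.weaker}" for p in sample)
        )
        hypotheses.append(new_hypothesis.strip())
        unexplained = [
            p for p in unexplained
            if not any(explains(ask, h, p) for h in hypotheses)
        ]
    return hypotheses
```

The design choice mirrored here is the "search deeper" step: each round only proposes a hypothesis for pairs the current set cannot explain, so later hypotheses capture increasingly tacit factors.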

Key Contributions

Proposes a conceptual framework to leverage LLMs for making society's 'unwritten code' (implicit biases, heuristics) visible and accessible for critique. Using a case study in science peer review, it demonstrates how LLMs can generate hypotheses about hidden factors influencing reviewer scores, thereby surfacing implicit norms.
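
To make the reported correlations concrete, here is a hypothetical sketch of the alignment check: comparing how strongly each surfaced factor predicts reviewer scores with how often reviewers mention it explicitly. The factor names and numbers are toy placeholders, not data from the paper.

```python
# Hypothetical alignment check; all values below are illustrative.
from scipy.stats import spearmanr


def explicit_implicit_alignment(
    predictive_weight: dict[str, float],  # factor -> weight in predicting scores
    mention_rate: dict[str, float],       # factor -> share of reviews naming it
) -> float:
    """Spearman correlation between implicit reward and explicit mention."""
    factors = sorted(predictive_weight)
    rho, _ = spearmanr(
        [predictive_weight[f] for f in factors],
        [mention_rate[f] for f in factors],
    )
    return rho


# Toy usage: storytelling/positioning posteriors predict scores strongly but are
# rarely named explicitly, so the correlation comes out negative.
print(explicit_implicit_alignment(
    {"storytelling": 0.8, "positioning": 0.7, "theoretical_rigor": 0.6},
    {"storytelling": 0.05, "positioning": 0.10, "theoretical_rigor": 0.60},
))
```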

Business Value

This research offers a novel perspective on using AI not just for task completion but as a tool for societal introspection and critique. It can inform the development of AI systems that help identify and mitigate biases in various professional and social contexts.