g-DPO: Scalable Preference Optimization for Protein Language Models

Abstract

Direct Preference Optimization (DPO) is an effective approach for aligning protein language models with experimental design goals. However, DPO faces a scalability bottleneck: the number of possible training pairs grows quadratically with the number of labeled sequences, leading to prohibitive training times even for modestly sized datasets. We introduce g-DPO, a framework that (i) uses sequence space clustering to prune redundant pairs while preserving training signal, and (ii) amortizes likelihood computations with group-based approximations. Across three protein engineering tasks, g-DPO maintains in-silico and in-vitro performance that is statistically indistinguishable from standard DPO, while converging 1.8 to 3.7 times faster, with greater gains expected as the size of the dataset increases.
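The quadratic blow-up comes from enumerating every (winner, loser) combination of labeled sequences as a DPO training pair. The sketch below illustrates how clustering in sequence space can cut that pair count while keeping comparisons that carry signal; it is not the authors' code, and the k-mer featurization, KMeans clustering, and representative-based pairing rule are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): prune the
# O(n^2) space of preference pairs by clustering sequences and pairing only
# cluster representatives. Featurization and pairing rule are assumptions.
from itertools import combinations
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

def kmer_features(seq, k=3, dim=256):
    """Hash k-mer counts of a protein sequence into a fixed-size vector."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    vec = np.zeros(dim)
    for kmer, c in counts.items():
        vec[hash(kmer) % dim] += c
    return vec

def build_pruned_pairs(sequences, scores, n_clusters=8):
    """Return (winner, loser) pairs between cluster representatives only.

    Standard DPO would enumerate all ~n^2/2 pairs of labeled sequences; here
    the pair count scales with the number of clusters instead, because each
    cluster contributes a single best-scoring representative.
    """
    X = np.stack([kmer_features(s) for s in sequences])
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

    # Best-scoring sequence in each cluster serves as its representative.
    reps = {}
    for i, cid in enumerate(cluster_ids):
        if cid not in reps or scores[i] > scores[reps[cid]]:
            reps[cid] = i

    pairs = []
    for a, b in combinations(reps.values(), 2):
        winner, loser = (a, b) if scores[a] > scores[b] else (b, a)
        pairs.append((sequences[winner], sequences[loser]))
    return pairs
```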
Authors (6)
Constance Ferragu
Jonathan D. Ziegler
Nicolas Deutschmann
Arthur Lindoulsi
Eli Bixby
Cradle ML Team
Submitted
October 22, 2025
arXiv Category
cs.LG

Key Contributions

g-DPO addresses the scalability bottleneck of Direct Preference Optimization (DPO) for protein language models by pruning redundant training pairs with sequence space clustering and amortizing likelihood computations with group-based approximations. The framework matches standard DPO performance while converging 1.8 to 3.7 times faster.
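The second ingredient, amortized likelihood computation, can be pictured as scoring each unique sequence once and reusing the cached log-likelihoods across every pair it appears in, rather than re-running the model pair by pair. The sketch below is a minimal illustration under that assumption, plugging cached per-sequence log-probabilities into the standard DPO loss; `dpo_loss_from_cached_logps` and its arguments are hypothetical names, not the paper's API.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# each sequence is scored once by the policy and the frozen reference model,
# and those cached log-probs are reused across all pairs containing it.
import torch
import torch.nn.functional as F

def dpo_loss_from_cached_logps(pairs, policy_logp, ref_logp, beta=0.1):
    """DPO loss over (winner, loser) index pairs using cached log-probs.

    policy_logp / ref_logp: 1-D tensors with the log-likelihood of each unique
    sequence under the policy and the reference model. Because every sequence
    is evaluated once, likelihood cost scales with the number of sequences,
    not the (much larger) number of pairs.
    """
    w = torch.tensor([p[0] for p in pairs])
    l = torch.tensor([p[1] for p in pairs])
    # Standard DPO logits: beta * [(pi_w - ref_w) - (pi_l - ref_l)]
    logits = beta * ((policy_logp[w] - ref_logp[w]) - (policy_logp[l] - ref_logp[l]))
    return -F.logsigmoid(logits).mean()
```

With n labeled sequences and m training pairs, forward passes then scale with n rather than m, which is where the speed-up grows as the dataset does.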

Business Value

Accelerates the design and optimization of proteins for various applications (e.g., therapeutics, enzymes), reducing R&D costs and time-to-market for new biologics and biomaterials.