Contrastive Knowledge Transfer and Robust Optimization for Secure Alignment of Large Language Models

Abstract

This paper addresses the limitations of large-scale language models in safety alignment and robustness by proposing a fine-tuning method that combines contrastive distillation with noise-robust training. The method freezes the backbone model and transfers the knowledge boundaries of the teacher model to the student model through distillation, thereby improving semantic consistency and alignment accuracy. At the same time, noise perturbations and robust optimization constraints are introduced during training so that the model maintains stable predictions under noisy and uncertain inputs. The overall framework consists of a distillation loss, a robustness loss, and a regularization term, forming a unified optimization objective that balances alignment ability with resistance to interference. To systematically validate its effectiveness, the study designs experiments from multiple perspectives, including distillation weight sensitivity, stability analysis under computation budgets and mixed-precision environments, and the impact of data noise and distribution shifts on model performance. Results show that the method significantly outperforms existing baselines in knowledge transfer, robustness, and overall safety, achieving the best performance across several key metrics. This work not only extends the theory of parameter-efficient fine-tuning but also offers a new approach to building safer and more trustworthy alignment mechanisms.
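The abstract describes a unified objective composed of a distillation loss, a robustness loss, and a regularization term, but does not specify the exact losses. A minimal numpy sketch, assuming KL divergence for distillation (with soft targets), a KL consistency term between clean and noise-perturbed student outputs for robustness, and an L2 penalty on the trainable (non-frozen) parameters; the weights `lam`, `beta`, and the `temperature` are hypothetical, not values from the paper:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax, numerically stabilized
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean())

def unified_objective(student_logits, teacher_logits, student_logits_noisy,
                      trainable_params, lam=0.5, beta=1e-4, temperature=2.0):
    # Distillation term: align student output with the teacher's soft targets
    l_distill = kl_div(softmax(teacher_logits, temperature),
                       softmax(student_logits, temperature))
    # Robustness term: consistency between clean and noise-perturbed outputs
    l_robust = kl_div(softmax(student_logits), softmax(student_logits_noisy))
    # Regularization: L2 penalty on the trainable (non-frozen) parameters
    l_reg = sum(float(np.sum(p ** 2)) for p in trainable_params)
    return l_distill + lam * l_robust + beta * l_reg
```

With identical clean and perturbed logits, a matching teacher, and zero-valued trainable parameters, all three terms vanish, which is the intended fixed point of the objective.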
Authors (5)
Jiasen Zheng
Huajun Zhang
Xu Yan
Ran Hao
Chong Peng
Submitted
October 31, 2025
arXiv Category
cs.CL

Key Contributions

This paper proposes a fine-tuning method that combines contrastive distillation with noise-robust training to enhance LLM safety alignment and robustness. The backbone model is frozen while knowledge is transferred from a teacher to a student model, and noise perturbations introduced during training keep outputs stable under noisy inputs, balancing alignment ability with resistance to interference.
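The teacher-student transfer with input noise can be sketched as a toy gradient step. This is an illustrative numpy stand-in, not the paper's method: a frozen linear "teacher" head, a trainable linear "student" head, Gaussian input noise, and MSE surrogates for the distillation and robustness terms; `lr`, `noise_std`, and `lam` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W, x):
    # Toy linear head standing in for a model's output layer
    return x @ W

def distill_step(W_student, W_teacher, x, lr=0.1, noise_std=0.05, lam=0.5):
    # Teacher is frozen: only W_student is updated.
    t_out = forward(W_teacher, x)
    x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    s_clean = forward(W_student, x)
    s_noisy = forward(W_student, x_noisy)
    # Gradient of 0.5 * ||s - t||^2 w.r.t. W is x^T (s - t);
    # combine the clean (distillation) and noisy (robustness) terms.
    grad = x.T @ (s_clean - t_out) + lam * x_noisy.T @ (s_noisy - t_out)
    return W_student - lr * grad / len(x)
```

Iterating `distill_step` drives the student's outputs toward the teacher's on both clean and perturbed inputs, which is the intuition behind combining the two terms in one update.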

Business Value

Improving the safety and reliability of LLMs is critical for their adoption in sensitive applications, reducing risks associated with misinformation, bias, or unpredictable behavior.