C-LEAD: Contrastive Learning for Enhanced Adversarial Defense

Abstract

Deep neural networks (DNNs) have achieved remarkable success in computer vision tasks such as image classification, segmentation, and object detection. However, they are vulnerable to adversarial attacks, in which small perturbations to input images can cause incorrect predictions. Addressing this issue is crucial for deploying robust deep-learning systems. This paper presents a novel approach that utilizes contrastive learning for adversarial defense, a previously unexplored area. Our method leverages the contrastive loss function to enhance the robustness of classification models by training them with both clean and adversarially perturbed images. By optimizing the model's parameters alongside the perturbations, our approach enables the network to learn robust representations that are less susceptible to adversarial attacks. Experimental results show significant improvements in the model's robustness against various types of adversarial perturbations, suggesting that the contrastive loss helps extract more informative and resilient features, contributing to the field of adversarial robustness in deep learning.
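
The abstract describes optimizing the model's parameters alongside the adversarial perturbations. One common way to write such an objective, given here only as a sketch that assumes a standard min-max adversarial training formulation with an added contrastive term (not necessarily the paper's exact loss), is:

$$
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\|\delta\|_\infty \le \epsilon} \mathcal{L}_{\mathrm{CE}}\big(f_\theta(x+\delta),\, y\big) + \lambda\, \mathcal{L}_{\mathrm{con}}\big(g_\theta(x),\, g_\theta(x+\delta)\big) \Big]
$$

Here $f_\theta$ is the classifier, $g_\theta$ a representation (projection) head, $\delta$ a perturbation bounded by $\epsilon$, $\mathcal{L}_{\mathrm{con}}$ a contrastive loss that pulls the clean and perturbed views of the same image together while pushing other images apart, and $\lambda$ a weighting hyperparameter; these symbols are illustrative assumptions beyond what the abstract states.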
Authors: Suklav Ghosh, Sonal Kumar, Arijit Sur
Submitted: October 31, 2025
arXiv Category: cs.CV

Key Contributions

This paper introduces C-LEAD, an adversarial defense that applies contrastive learning, a direction the authors describe as previously unexplored. Training DNNs on both clean and adversarially perturbed images with a contrastive loss encourages the model to learn representations that are less susceptible to attacks, yielding significant robustness gains against various types of perturbations.
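
To make the training recipe concrete, the following is a minimal PyTorch sketch of contrastive adversarial training in the spirit described above. It assumes the model returns both classification logits and a representation vector, and that images are scaled to [0, 1]; the PGD attack settings, the NT-Xent-style contrastive loss, and the weighting `lam` are illustrative assumptions, not the paper's exact C-LEAD objective.

```python
# Sketch of contrastive adversarial training (assumptions noted above).
import torch
import torch.nn.functional as F


def pgd_perturb(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft L-inf bounded adversarial examples with PGD (a standard attack, assumed here)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        logits, _ = model(x + delta)                   # model returns (logits, embedding)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()        # keep images in the valid range


def contrastive_loss(z_clean, z_adv, temperature=0.5):
    """NT-Xent-style loss: the adversarial view of an image is the positive for its clean view."""
    z = F.normalize(torch.cat([z_clean, z_adv], dim=0), dim=1)
    sim = z @ z.t() / temperature                      # pairwise cosine similarities
    n = z_clean.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), -1e9)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)               # positives sit at the shifted indices


def training_step(model, optimizer, x, y, lam=1.0):
    """One update: cross-entropy on adversarial images plus a contrastive term tying clean/adv views."""
    x_adv = pgd_perturb(model, x, y)
    logits_adv, z_adv = model(x_adv)
    _, z_clean = model(x)
    loss = F.cross_entropy(logits_adv, y) + lam * contrastive_loss(z_clean, z_adv)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the contrastive term keeps the embedding of each adversarial image close to the embedding of its clean counterpart while separating it from other images, which is one way to realize the robust representations the summary refers to.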

Business Value

Enhances the security and reliability of AI systems deployed in sensitive applications like autonomous driving, medical diagnosis, and security systems, where adversarial attacks pose a significant risk.