Introduces a contrastive learning framework designed to mitigate bias and learn fair representations from tabular data. The framework uses customized augmentation methods that promote fairness while preserving the information needed for downstream prediction, addressing the underexplored problem of fairness in learned representations for tabular data (a rough sketch of this kind of pipeline appears below).
Enables more equitable AI systems by reducing discriminatory outcomes in sensitive applications such as credit scoring, hiring, and medical diagnosis, thereby improving trust and regulatory compliance.
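The sketch below is not the paper's exact method; it is a minimal illustration, under assumed design choices, of how contrastive learning with tabular augmentations typically works: rows are corrupted to produce two views (here, a hypothetical SCARF-style column-shuffling `corrupt` function), an MLP encoder maps them to embeddings, and a standard NT-Xent loss pulls views of the same row together while pushing other rows apart. All names (`TabularEncoder`, `corrupt`, `nt_xent_loss`) and hyperparameters are illustrative assumptions, and the paper's fairness-specific augmentations would replace or extend the simple corruption shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TabularEncoder(nn.Module):
    """Hypothetical MLP encoder mapping tabular rows to unit-norm embeddings."""
    def __init__(self, n_features, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def corrupt(x, corruption_rate=0.3):
    """Assumed augmentation: overwrite a random subset of feature values
    with values taken from other rows in the batch, leaving the rest of
    the row intact. A fairness-oriented variant could target or exclude
    sensitive attributes here."""
    mask = torch.rand_like(x) < corruption_rate
    shuffled = x[torch.randperm(x.size(0))]
    return torch.where(mask, shuffled, x)


def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss: the two augmented views of a row
    are positives; all other rows in the batch serve as negatives."""
    z = torch.cat([z1, z2], dim=0)            # (2N, d), already normalized
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


# Toy training step on synthetic data (256 rows, 10 numeric features).
x = torch.randn(256, 10)
encoder = TabularEncoder(n_features=10)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

z1, z2 = encoder(corrupt(x)), encoder(corrupt(x))
loss = nt_xent_loss(z1, z2)
opt.zero_grad()
loss.backward()
opt.step()
```

In a fairness-aware setup, the learned embeddings would then feed a downstream classifier, with the augmentation and/or additional loss terms chosen so that the representation carries little information about the protected attribute while retaining task-relevant signal.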