
Strategic Classification with Non-Linear Classifiers

Abstract

In strategic classification, the standard supervised learning setting is extended to support the notion of strategic user behavior in the form of costly feature manipulations made in response to a classifier. While standard learning supports a broad range of model classes, the study of strategic classification has, so far, been dedicated mostly to linear classifiers. This work aims to expand the horizon by exploring how strategic behavior manifests under non-linear classifiers and what this implies for learning. We take a bottom-up approach showing how non-linearity affects decision boundary points, classifier expressivity, and model class complexity. Our results show how, unlike the linear case, strategic behavior may either increase or decrease effective class complexity, and that the complexity decrease may be arbitrarily large. Another key finding is that universal approximators (e.g., neural nets) are no longer universal once the environment is strategic. We demonstrate empirically how this can create performance gaps even on an unrestricted model class.
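To make the setting concrete, here is a minimal Python sketch of strategic best response against a non-linear classifier. The circular decision boundary, quadratic cost with a `cost_scale` parameter, and brute-force candidate search are all illustrative assumptions, not the paper's formalism: a negatively classified user moves to a positively classified point only when the gain from the label outweighs the manipulation cost.

```python
import numpy as np

def classifier(x):
    """A non-linear (circular) decision boundary: positive inside radius 1.
    Purely illustrative; any non-linear h would do."""
    return 1 if np.linalg.norm(x) <= 1.0 else -1

def best_response(x, cost_scale=2.0, n_candidates=5000, seed=0):
    """Approximate argmax over x' of: [h(x') = +1] - cost(x, x').

    Brute-force search over random candidate moves. Analyses of specific
    cost functions typically solve this step in closed form instead.
    """
    rng = np.random.default_rng(seed)
    if classifier(x) == 1:
        return x  # already classified positive: no reason to pay any cost
    candidates = x + rng.normal(scale=1.0, size=(n_candidates, 2))
    best, best_payoff = x, 0.0  # staying put yields payoff 0
    for z in candidates:
        cost = cost_scale * np.sum((z - x) ** 2)  # quadratic movement cost
        payoff = (1.0 if classifier(z) == 1 else 0.0) - cost
        if payoff > best_payoff:
            best, best_payoff = z, payoff
    return best

x = np.array([1.3, 0.0])             # user just outside the boundary
z = best_response(x)
print(classifier(x), classifier(z))  # expected: -1 1 (the user games the boundary)
```

Against a linear boundary every user's best response points in the same direction (perpendicular to the hyperplane); with a non-linear boundary like the one above, the direction and cost of the best response vary with the user's position, which gives one intuition for why non-linearity changes effective expressivity and complexity.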

Key Contributions

This work expands the study of strategic classification to non-linear classifiers, showing that, unlike in the linear case, strategic behavior can either increase or decrease effective class complexity, and that the decrease can be arbitrarily large. A key finding is that universal approximators (e.g., neural nets) are no longer universal once the environment is strategic, with direct implications for model robustness and learning.

Business Value

Understanding how models behave under strategic feature manipulation is crucial for building robust and secure AI systems that resist gaming of deployed classifiers and support fair outcomes.