Abstract
Policy Cards are introduced as a machine-readable, deployment-layer standard
for expressing operational, regulatory, and ethical constraints for AI agents.
A Policy Card accompanies the agent at deployment, specifying what it must and
must not do so that the required constraints can be followed at runtime; it is
thus an integral part of the deployed agent. Policy Cards extend
existing transparency artifacts such as Model, Data, and System Cards by
defining a normative layer that encodes allow/deny rules, obligations,
evidentiary requirements, and crosswalk mappings to assurance frameworks
including NIST AI RMF, ISO/IEC 42001, and the EU AI Act. Each Policy Card can
be validated automatically, version-controlled, and linked to runtime
enforcement or continuous-audit pipelines. The framework enables verifiable
compliance for autonomous agents, forming a foundation for distributed
assurance in multi-agent ecosystems. Policy Cards provide a practical mechanism
for integrating high-level governance with hands-on engineering practice and
enabling accountable autonomy at scale.
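The normative layer the abstract describes (allow/deny rules, obligations, and crosswalk mappings to assurance frameworks, evaluated at runtime) can be sketched as a minimal data structure with a first-match evaluator. All field names, rule shapes, and framework identifiers below are illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical sketch of a Policy Card's normative layer. The schema here
# (keys like "rules", "obligations", "crosswalk") is an assumption for
# illustration, not the standard defined in the paper.
POLICY_CARD = {
    "version": "1.0",
    "rules": [
        # First-match-wins: deny emailing when the context flags PII,
        # otherwise allow the action.
        {"effect": "deny", "action": "send_email", "when": {"contains_pii": True}},
        {"effect": "allow", "action": "send_email"},
    ],
    "obligations": ["log_all_tool_calls"],
    # Crosswalk mappings to assurance frameworks (identifiers illustrative).
    "crosswalk": {"NIST_AI_RMF": ["GOVERN 1.1"], "EU_AI_Act": ["Art. 9"]},
}

def evaluate(card: dict, action: str, context: dict) -> str:
    """Return the effect of the first rule matching the action and context."""
    for rule in card["rules"]:
        if rule["action"] != action:
            continue
        conditions = rule.get("when", {})
        if all(context.get(key) == value for key, value in conditions.items()):
            return rule["effect"]
    return "deny"  # default-deny when no rule matches

print(evaluate(POLICY_CARD, "send_email", {"contains_pii": True}))   # deny
print(evaluate(POLICY_CARD, "send_email", {"contains_pii": False}))  # allow
```

A default-deny fallback is one natural reading of "what it must and must not do": any action not explicitly permitted by the card is refused, which keeps the agent conservative when the card is incomplete.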
Submitted
October 28, 2025
Key Contributions
This paper introduces 'Policy Cards,' a machine-readable standard for expressing and enforcing operational, regulatory, and ethical constraints on AI agents at runtime. Policy Cards act as an integral part of deployed agents, defining allow/deny rules and obligations. They extend existing transparency artifacts (like Model Cards) by providing a normative layer that can be automatically validated, version-controlled, and linked to enforcement pipelines, enabling verifiable compliance for autonomous agents.
Business Value
Facilitates safer and more compliant deployment of AI systems by reducing the risk of regulatory violations and ethical breaches, which is particularly important for businesses in regulated sectors.