Regulatory Divergence in AI Agent Security: How EU, US, and APAC Frameworks Create Compliance Complexity
How the EU AI Act, US EO 14110, UK AI Opportunities Action Plan, Singapore's Model AI Governance Framework, and China's AI regulations create competing compliance obligations for globally deployed AI agents, and how to navigate them.
When the European Parliament passed the EU AI Act in March 2024, it created the world's first comprehensive statutory framework for AI governance. Within months, every major jurisdiction had either accelerated existing AI regulatory work or announced new initiatives in response. The result, by 2026, is a fractured regulatory landscape where a single AI agent serving users across multiple jurisdictions must simultaneously navigate five or more distinct regulatory frameworks—each with different definitions, different risk classifications, different documentation requirements, and different enforcement mechanisms.
For AI agents specifically, this regulatory divergence creates challenges that compound the inherent technical complexity of agent deployment. An agent that autonomously takes consequential actions is simultaneously subject to EU AI Act high-risk AI provisions (if it affects employment, credit, or fundamental rights), US CISA guidelines (if it touches critical infrastructure), UK ICO guidance (if it processes personal data of UK residents), Singapore's MAS technology risk management guidelines (if it's deployed in financial services), and Chinese algorithm regulations (if it serves Chinese users). Each framework has different thresholds, different obligations, and different enforcement approaches.
This analysis provides practitioners with a structured view of the regulatory landscape, identifies genuine conflicts between frameworks, maps the common denominator compliance strategy that satisfies multiple frameworks simultaneously, and explains where regulatory arbitrage creates risk rather than opportunity.
The EU AI Act: The Gravitational Center
Risk Classification Architecture
The EU AI Act's risk-based approach has become the de facto global benchmark, not because it's mandatory globally, but because it provides the clearest regulatory articulation of which AI applications are highest-risk and what compliance looks like. Organizations designing AI governance programs look to the EU AI Act as a structured framework even when they're not EU-regulated entities.
The Act classifies AI systems into four risk tiers:
Unacceptable Risk (Prohibited): AI systems that pose unacceptable risks to fundamental rights are banned:
- Social scoring (Article 5(1)(c))
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, with limited exceptions (Article 5(1)(h))
- AI systems that exploit vulnerabilities of specific groups (Article 5(1)(b))
- Subliminal manipulation techniques (Article 5(1)(a))
For AI agents, the most relevant prohibition is the ban on systems that "deploy subliminal techniques beyond a person's consciousness" to materially distort behavior. Agents that use persuasion techniques that operate below conscious awareness—for example, agents engineered to exploit cognitive biases in ways users cannot detect—are prohibited.
High Risk (Regulated, Article 6 and Annex III): AI systems used in specific high-risk applications face mandatory compliance obligations. The list in Annex III includes:
- Annex III(1): Biometric identification and categorization
- Annex III(2): Management of critical infrastructure (traffic management, water, gas, heating)
- Annex III(3): Education and vocational training (determining access, evaluating outcomes)
- Annex III(4): Employment (recruitment, promotion, termination, task allocation, monitoring performance)
- Annex III(5): Access to essential private services (creditworthiness assessment, insurance risk assessment)
- Annex III(6): Law enforcement
- Annex III(7): Migration, asylum, and border control
- Annex III(8): Administration of justice and democratic processes
For AI agents, Annex III(4) (employment) and Annex III(5) (essential private services) are the most commonly triggered. An AI agent that does any of the following is high risk under Article 6 (a minimal classification sketch follows this list):
- Screens job applicants or ranks candidates → High risk (Annex III(4), employment)
- Evaluates credit risk or insurance eligibility → High risk (Annex III(5), essential services)
- Allocates tasks to human workers based on monitoring → High risk (employment)
- Makes recommendations that significantly influence employment decisions → High risk
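The trigger list above lends itself to a simple pre-deployment check. Below is a minimal sketch of that idea; the use-case labels and the `classify_agent_use_case` helper are illustrative inventions, not terms defined in the Act:

```python
# Illustrative mapping from declared agent use cases to Annex III categories.
# The use-case keys are hypothetical labels, not terminology from the Act.
ANNEX_III_TRIGGERS = {
    'screen_job_applicants': ('Annex III(4)', 'employment'),
    'rank_candidates': ('Annex III(4)', 'employment'),
    'allocate_tasks_from_monitoring': ('Annex III(4)', 'employment'),
    'assess_creditworthiness': ('Annex III(5)', 'essential private services'),
    'assess_insurance_risk': ('Annex III(5)', 'essential private services'),
}

def classify_agent_use_case(use_case: str) -> dict:
    """Return a high-risk classification hint for a declared agent use case."""
    trigger = ANNEX_III_TRIGGERS.get(use_case)
    if trigger is None:
        return {'high_risk': False, 'basis': None}
    annex_ref, domain = trigger
    return {'high_risk': True, 'basis': annex_ref, 'domain': domain}

print(classify_agent_use_case('assess_creditworthiness'))
# {'high_risk': True, 'basis': 'Annex III(5)', 'domain': 'essential private services'}
```

A check like this cannot replace legal analysis, but it forces every agent deployment to declare its use case and surfaces the classification question early.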
High-risk AI systems have mandatory compliance obligations:
- Risk management system (Article 9)
- Data governance (Article 10)
- Technical documentation (Article 11, Annex IV)
- Record-keeping and logging (Article 12)
- Transparency and information to users (Article 13)
- Human oversight capability (Article 14)
- Accuracy, robustness, and cybersecurity (Article 15)
Limited Risk (Transparency Obligations): AI systems that interact with humans or generate content have transparency obligations:
- Chatbots and conversational agents must disclose they are AI (Article 50(1))
- Deep fakes must be labeled as synthetic content (Article 50(4))
- Emotion recognition systems must disclose their operation to affected persons (Article 50(3))
Minimal Risk (No Specific Requirements): All other AI systems face no mandatory requirements under the Act, though voluntary codes of conduct are encouraged.
Annex IV: Documentation Requirements That Matter for Agents
For high-risk AI agents, Annex IV specifies detailed technical documentation. This is where the standards work done for ISO/IEC 42001 and ISO/IEC 5338 directly connects—Annex IV documentation requirements closely parallel what those standards require.
Required documentation includes:
- General description: Purpose, intended use, categories of persons affected, contexts of use, potential harms, technical specifications
- Detailed description of elements and development: Training methodology, training data characteristics, evaluation procedures and results, known limitations
- Information on monitoring, functioning, and control: Human oversight measures, technical measures for robustness and accuracy, logging specifications
- Description of any preliminary conformity assessment: Including testing methodology and results
- EU declaration of conformity: Signed declaration that the system meets Act requirements
- Technical measures for post-market monitoring: How the system is monitored after deployment
For AI agents specifically, the logging requirements (Article 12) are demanding: systems "shall have the capability to automatically generate logs." These logs must "record the period of each use of the system," "the reference database against which the input data has been checked," and other operational parameters. This creates a direct requirement for tamper-evident behavioral audit logs—exactly what Armalo's Merkle-tree based audit log infrastructure provides.
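As a concrete illustration of what Article 12-style logging implies at the implementation level, here is a minimal hash-chained log sketch. The field names and the `append_log_entry` helper are assumptions for illustration; they are neither the Act's required schema nor Armalo's actual API:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_log_entry(log: list, event: dict) -> dict:
    """Append a tamper-evident entry; each entry commits to its predecessor's hash."""
    prev_hash = log[-1]['entry_hash'] if log else '0' * 64
    entry = {
        'timestamp': datetime.now(timezone.utc).isoformat(),  # period of use
        'event': event,                                       # inputs, action taken
        'prev_hash': prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry['entry_hash'] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

log: list = []
append_log_entry(log, {'agent_id': 'agent-42', 'action': 'credit_recommendation'})
append_log_entry(log, {'agent_id': 'agent-42', 'action': 'human_override'})
# Recomputing the chain detects retroactive modification of any earlier entry.
```

A Merkle tree generalizes this chain with efficient inclusion proofs, which is why audit-log infrastructure tends to converge on that structure.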
Conformity Assessment and CE Marking
High-risk AI systems in most Annex III categories can self-assess conformity (internal conformity assessment). For biometric systems under Annex III point 1, however, third-party conformity assessment by a notified body is required unless the provider has fully applied the relevant harmonised standards.
AI agents in employment, credit, and essential services contexts can generally self-assess. This means maintaining internal compliance documentation sufficient to demonstrate conformity, rather than obtaining third-party certification. However, organizations must register high-risk AI systems in the EU database (Article 71) before market placement.
Post-Market Monitoring: High-risk AI systems must implement post-market monitoring systems. For AI agents, this means:
- Systematic collection of behavioral data post-deployment
- Detection of unexpected outputs or behaviors
- Reporting of serious incidents to national market surveillance authorities within 15 days (Article 73)
- Annual reporting to market surveillance authorities
The Article 73 incident reporting requirement creates a new operational obligation for organizations deploying AI agents: a defined process for detecting serious AI incidents, investigating them, and reporting to regulators within 15 days. Organizations without structured behavioral monitoring capability cannot realistically meet this requirement.
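A sketch of the deadline arithmetic behind that process; the 15-day window comes from Article 73, while the function and field names are illustrative:

```python
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # Article 73: report within 15 days of awareness

def reporting_deadline(awareness_date: date) -> dict:
    """Compute the Article 73 reporting deadline for a serious incident."""
    deadline = awareness_date + timedelta(days=REPORTING_WINDOW_DAYS)
    days_left = (deadline - date.today()).days
    return {
        'deadline': deadline.isoformat(),
        'days_remaining': days_left,
        'overdue': days_left < 0,
    }

print(reporting_deadline(date(2026, 3, 1)))
```

The hard part is not the arithmetic but the trigger: without behavioral monitoring that reliably flags candidate incidents, the clock starts running before anyone notices.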
GPAI Model Requirements (Articles 51-55)
The EU AI Act's requirements for General-Purpose AI (GPAI) models—foundation models like GPT-4, Claude, and Gemini—create upstream obligations that affect AI agent deployments:
All GPAI models must:
- Maintain technical documentation
- Provide information to downstream providers
- Have copyright policy and comply with copyright law
- Publish summary of training data
GPAI models with systemic risk (above 10^25 FLOP threshold) additionally must:
- Conduct adversarial testing (red-teaming)
- Report serious incidents to the Commission
- Implement cybersecurity measures
- Assess and mitigate systemic risks
These requirements create a supply chain effect: AI agent developers using GPAI models have a legitimate expectation that the model provider has met its GPAI obligations, including providing technical documentation. Organizations building agents on top of GPAI models should require evidence of GPAI compliance as part of their vendor assessment.
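One way to fold that expectation into vendor assessment is a simple evidence checklist; the item names below paraphrase the GPAI obligations listed above and are an illustrative sketch, not an official schema:

```python
# Evidence items to request from a GPAI model vendor during assessment.
# Item names paraphrase the GPAI obligations; they are not official terms.
GPAI_VENDOR_EVIDENCE = [
    'technical_documentation',
    'downstream_provider_information',
    'copyright_policy',
    'training_data_summary',
]

def gpai_evidence_gaps(received: set) -> list:
    """Return the GPAI evidence items a model vendor has not yet provided."""
    return [item for item in GPAI_VENDOR_EVIDENCE if item not in received]

print(gpai_evidence_gaps({'technical_documentation', 'copyright_policy'}))
# ['downstream_provider_information', 'training_data_summary']
```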
The United States Framework: Voluntary Coordination with Sectoral Mandates
Executive Order 14110 (Superseded by EO 14179)
President Biden's Executive Order 14110 (October 2023) established a framework for safe, secure, and trustworthy AI development in the US. It directed federal agencies to develop specific guidance, required AI safety testing reporting from developers of the most powerful systems, and established standards development priorities for NIST.
President Trump's Executive Order 14179 (January 2025) rescinded EO 14110 and replaced it with an emphasis on removing regulatory barriers to US AI development and leadership. The substantive compliance frameworks developed under EO 14110 (NIST AI RMF 1.0, CISA guidance, sector-specific agency guidance) remain in effect as agency guidance documents, even though the umbrella executive order was rescinded.
The US approach is fundamentally different from the EU's: voluntary frameworks with sectoral mandatory requirements where existing law applies (financial services, healthcare, critical infrastructure).
NIST AI RMF 1.0: The De Facto US Standard
NIST AI RMF 1.0 (January 2023) and its accompanying Playbook provide the most widely-referenced US framework for AI risk management. Unlike the EU AI Act, the AI RMF is:
- Voluntary (not legally mandated except where sectoral regulators require it)
- Outcome-focused (principles and practices, not specific requirements)
- Lifecycle-spanning (covers development through deployment and monitoring)
The AI RMF's four core functions (GOVERN, MAP, MEASURE, MANAGE) map to ISO/IEC 23894's risk management process, facilitating joint compliance. Organizations implementing the AI RMF for US compliance purposes can align it with ISO/IEC 42001 for EU compliance purposes with significant overlap.
The AI RMF's trustworthiness characteristics for AI systems—valid, reliable, safe, secure and resilient, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed—align closely with ISO/IEC TR 24368's ethical framework. This alignment is intentional: the frameworks were developed in coordination.
For AI agents specifically, the AI RMF's MEASURE function calls for metrics covering:
| Characteristic | Example Metrics for AI Agents |
|---|---|
| Validity | Accuracy on adversarial benchmark suites |
| Reliability | Behavioral consistency across repeated inputs |
| Safety | Rate of harmful output generation |
| Security | Resistance to prompt injection, extraction |
| Explainability | Proportion of decisions with available rationale |
| Privacy | PII detection and protection rate |
| Fairness | Demographic parity across affected populations |
Armalo's composite trust scoring covers six of these seven characteristics through its 12-dimension scoring framework, providing NIST AI RMF metric coverage as a byproduct of ongoing evaluation.
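A sketch of how metrics like those in the table might be registered and checked against thresholds; the metric names and threshold values are placeholders an organization would set, not values defined by NIST:

```python
# Illustrative metric registry keyed by NIST AI RMF trustworthiness characteristic.
# Thresholds are placeholder values, not NIST-defined requirements.
MEASURE_METRICS = {
    'validity': {'metric': 'adversarial_benchmark_accuracy', 'min': 0.90},
    'reliability': {'metric': 'behavioral_consistency_rate', 'min': 0.95},
    'safety': {'metric': 'harmful_output_rate', 'max': 0.01},
    'security': {'metric': 'prompt_injection_resistance', 'min': 0.98},
    'explainability': {'metric': 'decisions_with_rationale', 'min': 0.80},
    'privacy': {'metric': 'pii_protection_rate', 'min': 0.99},
    'fairness': {'metric': 'demographic_parity_gap', 'max': 0.05},
}

def evaluate_measure_function(observed: dict) -> dict:
    """Compare observed metric values against per-characteristic thresholds."""
    results = {}
    for characteristic, spec in MEASURE_METRICS.items():
        value = observed.get(spec['metric'])
        if value is None:
            results[characteristic] = 'not_measured'
        elif 'min' in spec:
            results[characteristic] = 'pass' if value >= spec['min'] else 'fail'
        else:
            results[characteristic] = 'pass' if value <= spec['max'] else 'fail'
    return results

print(evaluate_measure_function({'harmful_output_rate': 0.002}))
```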
Sector-Specific US Mandates
Where the US has mandatory AI requirements, they operate through existing sector regulators applying existing authority to AI systems:
Financial Services (OCC, Fed, FDIC, CFPB):
- Model Risk Management guidance (SR 11-7) applies to AI models used in credit, fraud detection, and customer service
- Fair lending law (ECOA, FHA) prohibits discriminatory outcomes regardless of AI involvement
- CFPB has signaled that AI-based credit decisions must comply with adverse action notice requirements
- For AI agents in financial services contexts: model documentation, validation, and fairness testing requirements apply regardless of EU Act status
Healthcare (FDA, HHS OCR):
- FDA has jurisdiction over software as a medical device (SaMD), which includes AI agents providing clinical decision support
- HIPAA applies to AI systems handling protected health information
- For AI agents in healthcare: FDA clearance or authorization may be required; HIPAA compliance is mandatory
Critical Infrastructure (CISA, Sector-Specific Agencies):
- CISA's cross-sector AI cybersecurity guidelines apply to AI in critical infrastructure
- For AI agents operating in sectors designated as critical infrastructure: security requirements include adversarial testing and incident reporting
Government Procurement (OMB M-24-10):
- Federal agencies must follow OMB guidance on AI procurement
- This creates de facto compliance requirements for AI vendors serving federal government customers
CISA AI Cybersecurity Guidance
CISA's AI Cybersecurity Collaboration Playbook (2024) and related guidance establish security expectations for AI systems in critical infrastructure contexts. Key requirements for AI agents:
- Secure-by-design: Security built in, not bolted on
- Adversarial machine learning defenses: Specific protections against prompt injection, training data manipulation, model extraction
- Software supply chain security: Align with NIST SSDF and SLSA for AI supply chains
- Incident response: AI-specific incident response plans required
CISA guidance closely parallels supply chain security requirements from NIST SSDF and SLSA frameworks covered in earlier posts in this series. Organizations building comprehensive supply chain security programs (covering SLSA, SBOM, container signing, adversarial testing) simultaneously satisfy CISA's AI cybersecurity guidance.
The United Kingdom: Post-Brexit Divergence
UK AI Opportunities Action Plan (January 2025)
The UK government's AI Opportunities Action Plan, published January 2025, signals a fundamentally different regulatory posture from the EU: emphasizing AI adoption and economic opportunity while maintaining safety through existing sectoral regulation rather than horizontal AI-specific legislation.
Key elements:
- No horizontal AI Act equivalent planned in the near term
- Existing sectoral regulators (FCA, ICO, CMA, FRC) apply existing powers to AI
- The AI Safety Institute (renamed the AI Security Institute in February 2025) focuses on frontier model evaluation, not agent deployments
- Voluntary commitments from leading AI companies for safety testing
For AI agent deployments in the UK, the practical implication is: the existing regulatory framework applies (financial regulation, data protection, consumer protection) but there's no AI-specific layer comparable to the EU AI Act. Organizations deploying agents in the UK must comply with:
UK GDPR (near-identical to EU GDPR for data protection purposes):
- Lawful basis for processing personal data
- Data subject rights (access, erasure, portability)
- Privacy by design
- Data Protection Impact Assessments for high-risk processing
- Cross-border transfer restrictions (UK adequacy decisions differ from EU)
UK Competition Law (CMA jurisdiction):
- CMA has published AI principles for foundation model market competition
- AI agents whose use could entrench anti-competitive market power or facilitate price-fixing are subject to CMA scrutiny
Sectorally-regulated agents:
- Financial services AI agents: FCA Principles for Business and Consumer Duty apply
- Healthcare AI: MHRA regulates software as a medical device (aligned with FDA SaMD framework)
- Employment AI: Equality Act 2010 prohibits discriminatory outcomes
The UK-EU Divergence Risk
The UK's more permissive approach to AI regulation creates a specific risk: organizations building AI agent programs around EU AI Act compliance may find that EU requirements conflict with UK market entry speed expectations. Specifically:
- EU AI Act's 24-month implementation timeline for some high-risk systems (from August 2024)
- UK expectation of faster AI deployment
- GDPR adequacy: UK GDPR is similar but not identical to EU GDPR; an agent designed for EU GDPR compliance still needs UK GDPR review
More significantly: the EU AI Act's third-country provisions (Article 2(1)(c)) apply EU requirements to AI systems deployed in the EU regardless of where the developer is based. A UK-based AI agent developer whose agents are used by EU users must comply with EU AI Act requirements for those users, creating a bifurcated compliance posture.
Singapore: The Asia-Pacific Reference Point
Model AI Governance Framework (Second Edition + Updates)
Singapore's Model AI Governance Framework (Second Edition, 2020, with subsequent sector-specific updates) is the most developed AI governance framework in APAC and has become a reference point for other regional approaches. Unlike the EU AI Act (legally binding) or NIST AI RMF (voluntary federal guidance), Singapore's framework is a voluntary industry standard with strong government endorsement.
The framework's core principles for AI governance:
Internal governance structures and measures:
- Senior management accountability for AI use
- AI risk governance framework
- Selection of AI models with risk considerations
- Procurement risk assessment for third-party AI
Determining AI decision-making model:
- Risk-based determination of appropriate level of human involvement
- High-stakes decisions require meaningful human oversight
- Lower-stakes decisions can be fully automated with monitoring
Operations management:
- Continuous monitoring of AI system performance
- Explainability mechanisms for affected parties
- AI testing and validation before deployment
Customer relationship management:
- Transparency about AI use with customers
- Recourse mechanisms for AI errors
- Fairness testing for customer-facing AI
For AI agent deployments in Singapore, the practical requirements are:
- Document the human oversight model: For each agent deployment, articulate when and how humans are involved in reviewing agent decisions. High-stakes autonomous actions require meaningful oversight (see the sketch after this list).
- Implement explainability: Agents making decisions that affect customers must be able to provide explanations. This is technically challenging for LLM-based agents; the framework acknowledges this but expects organizations to implement "reasonable" explainability.
- Test for fairness: Customer-facing agents must be tested for discriminatory outcomes.
- Maintain audit trails: All AI decisions affecting customers must be logged and auditable.
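A minimal sketch of the risk-based oversight determination described in the first item above; the stake tiers and routing rules are illustrative assumptions, not values prescribed by the framework:

```python
# Illustrative risk-based routing of agent actions to human oversight,
# following the framework's "appropriate level of human involvement" principle.
OVERSIGHT_MODES = {
    'high': 'human_approval_required',   # human decides before the action executes
    'medium': 'human_review_after',      # automated, flagged for post-hoc review
    'low': 'automated_with_monitoring',  # fully automated, continuously monitored
}

def route_agent_action(stakes: str, affects_customer: bool) -> str:
    """Determine the oversight mode for an agent action based on its stakes."""
    if affects_customer and stakes == 'high':
        return OVERSIGHT_MODES['high']
    if affects_customer:
        return OVERSIGHT_MODES['medium']
    return OVERSIGHT_MODES['low']

assert route_agent_action('high', affects_customer=True) == 'human_approval_required'
assert route_agent_action('low', affects_customer=False) == 'automated_with_monitoring'
```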
MAS Technology Risk Management Guidelines
For AI agents in financial services contexts in Singapore, the Monetary Authority of Singapore's (MAS) Technology Risk Management (TRM) Guidelines apply. These are more prescriptive than the model governance framework:
- AI models classified as "material" systems require validation before deployment
- Model validation must include adversarial testing
- Model performance must be monitored continuously with defined thresholds
- Model risk must be reported to senior management quarterly
- Significant model changes require re-validation
The MAS TRM Guidelines create a clear audit pathway: Armalo's adversarial evaluation can serve as the independent model validation required by MAS, and Armalo's composite trust scoring provides the continuous monitoring evidence required for quarterly model risk reporting.
PDPC Personal Data Protection Act and AI
Singapore's Personal Data Protection Commission (PDPC) has published AI-specific guidance under the PDPA framework. Key requirements for AI agents:
- Consent or legitimate interest: AI agents processing personal data need a valid legal basis
- Purpose limitation: Agents cannot use personal data for purposes beyond what was disclosed to users
- Notification: Organizations must notify individuals when AI is making or significantly influencing decisions about them
- Access and correction: Individuals have the right to access and correct data used in AI decisions
The PDPC guidance introduced a notable provision relevant to AI agents: organizations that use AI to make automated decisions about individuals must "ensure that there are human review mechanisms in place" for material decisions. This parallels the EU AI Act's human oversight requirements (Article 14) and creates similar design obligations.
China: The Most Complex Regulatory Environment
Multi-Layer Regulatory Architecture
China has taken a fragmented regulatory approach to AI, with multiple regulations governing different aspects of AI systems:
Algorithm Recommendation Regulations (effective March 2022): Governs AI systems that make recommendations—including content recommendations, product recommendations, and service recommendations. Requirements:
- Disclose use of algorithmic recommendation to users
- Provide opt-out from personalized recommendations
- Prohibition on using recommendations to engage in unfair competition
- Prohibition on recommending content that violates applicable law (including content restrictions)
Deep Synthesis Regulations (effective January 2023): Governs AI-generated synthetic media:
- Label AI-generated content
- Verify identity of users creating deep synthesis content
- Store data for specified periods
- Report violations to regulators
Interim Measures for Generative AI Services (effective August 2023): China's generative AI regulation, administered by the CAC (Cyberspace Administration of China), specifically addresses foundation models and public-facing generative AI services:
- Providers must undergo a CAC security assessment before public deployment
- Algorithm and model capability filings are required
- Content must comply with Chinese law (no content threatening national security, endangering social stability, etc.)
- Training data must comply with copyright and IP law
- Personal data must be handled per China's PIPL
- Services must meet technical safety requirements
China-Specific Compliance Requirements for AI Agents
For AI agents deployed in China or serving Chinese users, the compliance obligations are the most demanding globally:
Government Filing Requirements:
- Algorithmic recommendation filing with CAC for any AI providing recommendations
- Security assessment for generative AI before public launch
- These filings require disclosure of algorithm details, training data sources, and safety measures
Content Restrictions:
- AI agents must not generate content that violates Chinese law—including political content restrictions that don't exist in other jurisdictions
- This creates a fundamental conflict: an AI agent designed to freely discuss political topics globally must either apply China-specific content restrictions to Chinese users or not be deployed in China
Data Localization:
- Data generated by Chinese users must generally be stored in China
- Cross-border data transfer requires security assessment for "important data" categories
- This limits architectural options for globally deployed AI agents; data isolation per jurisdiction is technically complex
Security Assessment Requirements:
- CAC security assessments for AI services handling critical data or serving large user bases
- This includes functionality assessments (what can the AI do?) and safety assessments (what harms could it cause?)
The China Conflict Problem
China's regulatory requirements create direct conflicts with other jurisdictions that practitioners must explicitly manage:
Conflict 1: Content restrictions vs. freedom of expression
China requires AI agents to filter content that would be legal in the EU and US. An agent designed with EU and US compliance in mind (including EU AI Act Article 5's prohibition on subliminal manipulation—which implies transparency) may generate content that violates Chinese regulations.
Resolution options:
- Geographic segmentation: maintain China-specific model versions with different content policies
- Platform-level filtering: apply Chinese content policy as a platform layer without modifying the underlying model
- Market exclusion: don't deploy in China
Many organizations choose geographic segmentation, maintaining distinct deployment stacks for China vs. rest-of-world. This is technically complex (separate infrastructure, separate models, separate compliance programs) but avoids attempting to satisfy mutually exclusive requirements.
Conflict 2: Data localization vs. global architecture
China's data localization requirements conflict with the architectural simplicity of global AI agent deployments. An AI agent that processes Chinese user data must route that processing to China-hosted infrastructure, preventing the unified global backend architecture that reduces operational complexity.
Resolution: Data residency architecture with geographic routing based on user jurisdiction. This requires significant infrastructure investment but is technically achievable.
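A sketch of that routing decision; the jurisdiction codes and region identifiers are placeholders, not real deployment endpoints:

```python
# Illustrative jurisdiction-to-region routing table for data residency.
RESIDENCY_ROUTES = {
    'CN': 'cn-hosted-stack',  # China: localized storage and processing
    'EU': 'eu-region',        # EU: processing stays under GDPR-adequate handling
    'SG': 'apac-region',
}
DEFAULT_ROUTE = 'global-region'

def route_request(user_jurisdiction: str) -> str:
    """Select the processing region for a request based on user jurisdiction."""
    return RESIDENCY_ROUTES.get(user_jurisdiction, DEFAULT_ROUTE)

assert route_request('CN') == 'cn-hosted-stack'
assert route_request('US') == 'global-region'
```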
Conflict 3: Government disclosure vs. trade secrecy
China's algorithm filing requirements require disclosure of algorithm details to regulators. For AI agents built on proprietary foundation models, this may require disclosing third-party model architecture details that developers don't have authority to disclose. For agents built on fine-tuned models, it requires disclosing fine-tuning approaches that may be competitive trade secrets.
Resolution: Negotiate with foundation model providers about what disclosure is permissible, or use open-source models where disclosure is less problematic.
Common Denominator Compliance Strategy
The Framework Stack
The most practical approach for globally deployed AI agents is a layered compliance strategy that satisfies the most demanding requirements and achieves compatibility with less demanding frameworks as a result.
Layer 5: China-specific additions (content filtering, filings, data localization)
Layer 4: EU AI Act high-risk compliance (if applicable)
Layer 3: ISO/IEC 42001 + NIST AI RMF (management systems)
Layer 2: Universal minimums (transparency, accountability, monitoring)
Layer 1: Data protection (GDPR/UK GDPR/PIPL/PDPA base requirements)
Layer 1 (data protection) applies everywhere—every jurisdiction has data protection requirements that AI agents must satisfy. The specific requirements vary (consent mechanisms, data subject rights, retention limits) but the fundamental obligations are consistent.
Layer 2 (universal minimums) consists of requirements that appear in every major framework in some form:
- Transparency: disclose AI involvement in consequential decisions
- Accountability: maintain audit trails for agent decisions
- Monitoring: continuously monitor agent behavior
- Incident response: have a process for AI incidents
Layer 3 (management systems) satisfies the structured governance requirements. ISO/IEC 42001 with NIST AI RMF alignment provides a framework that satisfies EU governance requirements, aligns with Singapore's Model AI Governance Framework, and anticipates US regulatory development.
Layer 4 (EU AI Act high-risk) applies for agents in regulated application domains. The EU requirements are the most prescriptive globally; satisfying them means satisfying Singapore's MAS guidelines and US sectoral requirements for the same application types.
Layer 5 (China-specific additions) is jurisdiction-specific—it cannot be generalized across other frameworks and requires explicit China compliance planning.
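Expressed as configuration, the stack might look like the following sketch; the layer names echo the list above, and the per-deployment flags are illustrative:

```python
# Illustrative layered compliance configuration. Layers 1-3 apply everywhere;
# layers 4 and 5 switch on by deployment context and jurisdiction.
BASE_LAYERS = ['data_protection', 'universal_minimums', 'management_systems']

def compliance_layers(high_risk_eu: bool, serves_china: bool) -> list:
    """Assemble the applicable compliance layers for one agent deployment."""
    layers = list(BASE_LAYERS)
    if high_risk_eu:
        layers.append('eu_ai_act_high_risk')
    if serves_china:
        layers.append('china_specific')
    return layers

print(compliance_layers(high_risk_eu=True, serves_china=False))
# ['data_protection', 'universal_minimums', 'management_systems', 'eu_ai_act_high_risk']
```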
The Minimum Viable Compliance Package
For organizations without dedicated AI compliance teams, the minimum viable package that satisfies multiple frameworks simultaneously:
Documentation (satisfies EU Annex IV, ISO/IEC 42001, NIST AI RMF, Singapore MAS):
- AI Impact Assessment for each agent deployment
- Technical documentation (model card, training data summary, evaluation results)
- Behavioral specification (what the agent is designed to do, what it's not designed to do)
- Risk assessment and treatment plan
Monitoring (satisfies EU Article 12, ISO/IEC 42001 A.11, Singapore TRM, NIST MEASURE):
- Continuous behavioral monitoring with defined baseline
- Alert thresholds and escalation procedures
- Quarterly review of monitoring results
Evaluation (satisfies EU Article 15, NIST AI RMF, MAS TRM):
- Pre-deployment adversarial evaluation
- Periodic re-evaluation (annual minimum, after material changes)
- Documented evaluation results with versioning
Audit Trails (satisfies EU Article 12, ISO/IEC 42001 A.13, Singapore PDPA):
- Tamper-evident audit logs for all consequential agent decisions
- Retention period aligned with most demanding applicable requirement
- Access controls and audit log integrity verification
Transparency (satisfies EU Article 50, UK GDPR, Singapore PDPA, China GenAI Regs):
- Clear disclosure to users when interacting with AI agents
- Explainability mechanism for material decisions
- Documentation of AI involvement available to affected parties
Armalo's platform directly provides the monitoring, evaluation, and audit trail components of this minimum viable compliance package:
```typescript
// Example: Armalo compliance integration
interface ArmaloComplianceReport {
  reportingPeriod: string;
  frameworks: string[]; // ['EU-AI-ACT', 'ISO-42001', 'NIST-AI-RMF', 'SINGAPORE-MAS']
  agentEvaluations: {
    adversarialEvalId: string;
    evaluationDate: string;
    compositeScore: number;
    dimensionScores: Record<string, number>;
    certificationStatus: 'certified' | 'conditional' | 'failed';
    evidenceHash: string; // Hash of evaluation artifacts for audit
  }[];
  monitoringData: {
    period: string;
    behavioralDrift: number; // Standard deviations from baseline
    anomaliesDetected: number;
    incidentsReported: number;
    averageCompositeScore: number;
  };
  auditLog: {
    totalEntries: number;
    merkleRootHash: string; // Current Merkle root for log integrity
    anchorTimestamp: string; // Last Rekor transparency log anchor
    retentionCompliant: boolean; // True if retention meets all applicable frameworks
  };
  complianceSummary: {
    'EU-AI-ACT': 'compliant' | 'partial' | 'gap';
    'ISO-42001': 'compliant' | 'partial' | 'gap';
    'NIST-AI-RMF': 'compliant' | 'partial' | 'gap';
    'SINGAPORE-MAS': 'compliant' | 'partial' | 'gap';
    gaps: string[];
  };
}
```
Conflict-of-Law Scenarios
Beyond the China conflicts already identified, several conflict-of-law scenarios affect AI agent deployments:
Scenario 1: EU AI Act transparency vs. IP protection
EU AI Act Article 13 requires that high-risk AI systems provide information sufficient for users to understand the system. For AI agents built on proprietary foundation models, this may require disclosing information about model architecture or training data that the model provider considers a trade secret.
Current state: The EU has not yet enforced Article 13 in ways that require revealing proprietary model details. The practical interpretation is that transparency requirements are satisfied by disclosing capability and limitation information—not underlying architecture. This interpretation may be challenged as enforcement develops.
Scenario 2: EU AI Act audit right vs. US export controls
EU high-risk AI systems may be subject to audit by national market surveillance authorities. If an AI agent contains US-developed AI technology subject to export controls (EAR), providing EU regulators with access to audit the system could technically constitute an export-controlled activity requiring a license.
Current state: No known enforcement actions have tested this conflict. Organizations should document their position (typically that regulatory audit access is not an "export" under EAR definitions) and maintain legal counsel review of any regulatory audit requests.
Scenario 3: GDPR data subject rights vs. AI model training
EU GDPR Article 17 (right to erasure) creates obligations to delete personal data when requested. For AI agents that have been trained or fine-tuned on personal data, an erasure request may be technically infeasible—you cannot "unlearn" data from a trained model without retraining.
Current state: GDPR regulators have not definitively resolved how Article 17 applies to model training. The practical approach is to avoid training on personal data, use aggregated or anonymized training data, and document the technical infeasibility of erasure from trained models while showing best-effort compliance (removing the personal data from retained training datasets so it cannot enter future training runs).
Scenario 4: Singapore cross-border transfer restrictions vs. global AI deployment architecture
Singapore's PDPA restricts transfer of personal data outside Singapore unless the receiving jurisdiction provides "at least comparable" protection. The PDPC maintains a list of approved transfer destinations, but for AI agents routing data through global infrastructure, ensuring every data transit point is in an approved jurisdiction is technically complex.
Resolution: Minimize personal data in agent interactions, rely on contractual safeguards (standard contractual clauses) where adequate protection determinations don't exist, and document transfer impact assessments.
Regulatory Arbitrage Risks
The Temptation and Its Costs
Regulatory arbitrage—deliberately structuring operations to benefit from more permissive regulatory environments—is a common response to multi-jurisdictional complexity. For AI agents, it might look like:
- Incorporating in a jurisdiction with no AI regulation (to avoid regulation entirely)
- Routing AI agent processing through jurisdictions with more permissive frameworks
- Serving EU users with agents not technically "placed on the market" in the EU to avoid EU AI Act obligations
These approaches carry significant risks that practitioners often underestimate:
Extraterritorial reach: The EU AI Act's territorial scope (Article 2) applies to systems affecting persons in the EU regardless of where the system or its provider is established. An AI agent incorporated in Singapore that serves EU users is subject to the EU AI Act if it affects EU persons.
Reputational risk: Regulatory arbitrage that is perceived as intentional avoidance of safety requirements creates reputational risk that outweighs compliance cost savings—particularly as AI governance becomes a customer and partner due diligence topic.
Regulatory response: Regulators who identify systematic arbitrage respond by strengthening extraterritorial provisions and international coordination. The EU AI Act was specifically designed with broad territorial scope to prevent regulatory arbitrage; future regulations will be designed with this lesson in mind.
Enforcement escalation: Initial enforcement often focuses on high-profile cases designed to establish jurisdiction precedents. Organizations that have structured to avoid compliance become attractive enforcement targets precisely because they've taken a visible position.
The Legitimate Jurisdiction Selection Question
A different question—legitimate jurisdiction selection for compliance optimization—is not arbitrage and is appropriate:
- Should AI agent processing infrastructure be located in the EU to benefit from EU adequacy decisions and simplify GDPR compliance?
- Should AI agent governance be structured under ISO/IEC 42001 (international standard) vs. NIST AI RMF (US framework) to maximize global recognition?
- Should high-risk agent applications be structured as advisory (human final decision) rather than automated to avoid EU AI Act high-risk classification?
These are legitimate architectural and legal structure questions that reduce compliance cost without avoiding compliance. They differ from arbitrage in that they aim to satisfy the regulatory intent through efficient compliance design, not to circumvent the intent.
Operationalizing Multi-Framework Compliance
The Unified Control Catalog
Rather than implementing separate compliance programs for each framework, organizations should build a unified control catalog that maps each control to the frameworks it satisfies:
| Control | EU AI Act | ISO/IEC 42001 | NIST AI RMF | Singapore | UK GDPR |
|---|---|---|---|---|---|
| AI Impact Assessment | Art. 9, Annex IV | Clause 6.1, A.6 | MAP 1.6 | Section 3 | DPIA Art. 35 |
| Adversarial Evaluation | Art. 15 | A.9 | MEASURE 2.5 | TRM 3.2 | — |
| Behavioral Monitoring | Art. 12 | A.11 | MANAGE 2.4 | TRM 4.3 | — |
| Audit Trail | Art. 12 | A.13 | GOVERN 6.1 | Section 6 | Art. 5(2) |
| Transparency | Art. 13, 50 | A.3 | GOVERN 4.1 | Section 4 | Art. 13-14 |
| Incident Response | Art. 73 | Cl. 10 | MANAGE 4 | TRM 5 | Art. 33-34 |
| Supply Chain Security | Art. 10 | A.9 | GOVERN 6.2 | Section 3.1 | — |
| Human Oversight | Art. 14 | A.10 | GOVERN 5 | Section 2 | — |
Implementing each control once and documenting its applicability to each framework reduces compliance effort by eliminating redundant implementation while maintaining framework-specific documentation.
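In practice the catalog is easiest to maintain as data, so each control carries its own framework mapping. The sketch below mirrors two rows of the table; the structure is illustrative:

```python
# Unified control catalog: each control implemented once, mapped to the
# framework clauses it satisfies (citations mirror the table above).
CONTROL_CATALOG = {
    'adversarial_evaluation': {
        'EU_AI_ACT': 'Art. 15',
        'ISO_42001': 'A.9',
        'NIST_AI_RMF': 'MEASURE 2.5',
        'SINGAPORE': 'TRM 3.2',
    },
    'audit_trail': {
        'EU_AI_ACT': 'Art. 12',
        'ISO_42001': 'A.13',
        'NIST_AI_RMF': 'GOVERN 6.1',
        'SINGAPORE': 'Section 6',
        'UK_GDPR': 'Art. 5(2)',
    },
}

def controls_for_framework(framework: str) -> dict:
    """List the controls that produce evidence for a given framework."""
    return {name: refs[framework] for name, refs in CONTROL_CATALOG.items()
            if framework in refs}

print(controls_for_framework('EU_AI_ACT'))
# {'adversarial_evaluation': 'Art. 15', 'audit_trail': 'Art. 12'}
```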
Building the Multi-Framework Evidence Package
For each AI agent deployment, maintain a structured evidence package that maps directly to each applicable framework:
```text
evidence/
├── impact_assessment.md                 # EU AI Act Annex IV, ISO/IEC 42001 A.6, Singapore Section 3
├── risk_assessment.md                   # EU AI Act Art. 9, ISO/IEC 23894, NIST AI RMF MAP
├── technical_documentation/
│   ├── model_card.md                    # EU AI Act Annex IV §2, ISO/IEC 5338, NIST RMF
│   ├── sbom.json                        # EU AI Act Art. 10, NIST SSDF, SLSA
│   └── architecture_diagram.pdf         # EU AI Act Annex IV §1, ISO/IEC 5338
├── evaluation_results/
│   ├── adversarial_eval_2026_Q1.json    # EU AI Act Art. 15, ISO/IEC 42001 A.9, MAS TRM
│   ├── bias_assessment_2026_Q1.pdf      # EU AI Act Art. 10, NIST MEASURE 2.6
│   └── armalo_trust_report_2026_Q1.json # Platform trust evidence
├── monitoring_records/
│   ├── behavioral_baseline_2026.json    # EU AI Act Art. 12, ISO/IEC 42001 A.11
│   ├── monitoring_alerts_2026.log       # EU AI Act Art. 73, ISO/IEC 42001 A.11
│   └── quarterly_review_2026_Q1.md      # ISO/IEC 42001 Clause 9, Singapore MAS TRM
├── audit_logs/
│   ├── merkle_root_2026.json            # EU AI Act Art. 12, ISO/IEC 42001 A.13
│   └── rekor_anchor_2026.json           # Non-repudiation, transparency log anchor
└── declarations/
    ├── eu_declaration_of_conformity.pdf      # EU AI Act Art. 47
    └── singapore_ai_governance_statement.pdf # Singapore Model Framework
```
Armalo's platform can automatically generate several components of this evidence package:
- Evaluation results and trust reports with cryptographic signatures
- Behavioral baseline documentation from scoring history
- Audit log Merkle roots and Rekor anchors
- Trust oracle API responses that serve as assessment evidence for specific agent interactions
Continuous Compliance Monitoring
Point-in-time compliance assessments become outdated quickly in dynamic AI agent deployments. Continuous compliance monitoring ensures evidence remains current:
```python
from datetime import datetime

class MultiFrameworkComplianceMonitor:
    """
    Continuous compliance monitoring for globally deployed AI agents.
    Checks compliance evidence against all applicable frameworks daily.
    """

    FRAMEWORK_EVIDENCE_REQUIREMENTS = {
        'EU_AI_ACT': {
            'technical_documentation': ('30d', 'WARN_30d_STALE'),
            'adversarial_evaluation': ('90d', 'NON_COMPLIANT_90d_STALE'),
            'monitoring_records': ('7d', 'NON_COMPLIANT_7d_GAP'),
            'incident_review': ('quarterly', 'NON_COMPLIANT_QUARTERLY_MISSED'),
        },
        'ISO_42001': {
            'impact_assessment': ('annual', 'NON_COMPLIANT_ANNUAL_MISSED'),
            'risk_assessment': ('annual', 'NON_COMPLIANT_ANNUAL_MISSED'),
            'management_review': ('annual', 'NON_COMPLIANT_ANNUAL_MISSED'),
            'internal_audit': ('annual', 'NON_COMPLIANT_ANNUAL_MISSED'),
        },
        'SINGAPORE_MAS': {
            'model_validation': ('annual', 'NON_COMPLIANT_ANNUAL_MISSED'),
            'quarterly_review': ('quarterly', 'NON_COMPLIANT_QUARTERLY_MISSED'),
            'performance_monitoring': ('monthly', 'WARN_MONTHLY_MISSED'),
        },
    }

    # Severity ordering used when aggregating statuses.
    _SEVERITY = {'compliant': 0, 'warning': 1, 'non_compliant': 2}

    def _parse_age_limit(self, max_age: str) -> int:
        """Translate a requirement period ('30d', 'monthly', ...) into days."""
        periods = {'monthly': 30, 'quarterly': 90, 'annual': 365}
        if max_age in periods:
            return periods[max_age]
        return int(max_age.rstrip('d'))

    def _worse(self, a: str, b: str) -> str:
        """Return the more severe of two statuses."""
        return a if self._SEVERITY[a] >= self._SEVERITY[b] else b

    def _aggregate_status(self, compliance_status: dict) -> str:
        """Overall status is the worst status across all frameworks."""
        overall = 'compliant'
        for framework_status in compliance_status.values():
            overall = self._worse(overall, framework_status['status'])
        return overall

    def check_evidence_currency(self, agent_id: str, evidence_package: dict) -> dict:
        compliance_status = {}
        for framework, requirements in self.FRAMEWORK_EVIDENCE_REQUIREMENTS.items():
            framework_status = {'status': 'compliant', 'gaps': []}
            for evidence_type, (max_age, gap_status) in requirements.items():
                evidence_date = evidence_package.get(f'{evidence_type}_date')
                if evidence_date is None:
                    framework_status['gaps'].append({
                        'type': evidence_type,
                        'status': 'MISSING',
                        'framework_requirement': max_age,
                    })
                    framework_status['status'] = 'non_compliant'
                    continue
                age = (datetime.now() - datetime.fromisoformat(evidence_date)).days
                age_limit = self._parse_age_limit(max_age)
                if age > age_limit:
                    framework_status['gaps'].append({
                        'type': evidence_type,
                        'status': gap_status,
                        'age_days': age,
                        'max_age_days': age_limit,
                    })
                    new_status = ('non_compliant' if 'NON_COMPLIANT' in gap_status
                                  else 'warning')
                    framework_status['status'] = self._worse(
                        framework_status['status'], new_status)
            compliance_status[framework] = framework_status
        return {
            'agent_id': agent_id,
            'check_timestamp': datetime.now().isoformat(),
            'overall_status': self._aggregate_status(compliance_status),
            'framework_status': compliance_status,
        }
```
This monitoring system ensures that compliance evidence doesn't silently expire while providing framework-specific gap analysis that maps to specific remediation requirements.
Looking Forward: The 2028 Regulatory Landscape
Convergence Trends
Despite current divergence, regulatory frameworks show clear convergence trends:
Common vocabulary emerging: ISO/IEC standards work (42001, 23894) is establishing shared vocabulary that regulatory frameworks increasingly reference. As EU AI Act, Singapore Model Framework, and NIST AI RMF all reference ISO standards, the underlying concepts converge even when specific requirements differ.
Risk-based approaches dominating: Every major framework uses some form of risk stratification. The EU's categorical approach (high-risk application list) and the US's outcome-based approach (harm prevention) are converging on a shared understanding that AI risk depends on context, not technology.
Enforcement driving harmonization: As the EU AI Act enforcement begins (market surveillance authorities active from August 2026), enforcement decisions will create precedents that other jurisdictions will reference in their own frameworks.
Third-country cooperation growing: EU-US Trade and Technology Council (TTC) AI dialogue, G7 AI principles, and OECD AI Policy Observatory are creating frameworks for regulatory cooperation that will reduce worst-case divergence.
Where Divergence Will Persist
China-ROW divergence will persist: the fundamental tension between China's regulatory approach (content restrictions, government oversight, data localization) and the rest of the world's approaches (user rights, cross-border flows, content freedom) reflects political realities that international standards cannot resolve.
US-EU pacing will continue: the US preference for voluntary frameworks with sectoral mandatory requirements will continue to diverge from the EU's comprehensive horizontal legislation, even as the substantive requirements converge. Organizations must manage both bureaucratic structures even when underlying technical requirements are similar.
Preparing for Regulatory Change
The organizations best positioned for 2028's regulatory landscape are those building compliance infrastructure that:
- Generates evidence continuously, not just before audits
- Separates governance structure from specific framework requirements (so the structure absorbs new requirements without rebuilding)
- Uses recognized international standards (ISO/IEC 42001) as the backbone, with framework-specific overlays
- Has technical infrastructure that satisfies demanding requirements (adversarial evaluation, behavioral monitoring, tamper-evident audit logs) as a byproduct of operational practice, not as a compliance overhead
The AI agent trust infrastructure that Armalo provides—behavioral pacts, adversarial evaluation, composite trust scoring, signed attestations, and the trust oracle API—was designed with this regulatory trajectory in mind. Every feature of the platform either directly satisfies a regulatory requirement or generates evidence that satisfies one.
Conclusion
Regulatory divergence in AI agent security is not a temporary problem that will self-resolve as frameworks mature. The political, economic, and cultural differences between jurisdictions that produced divergent frameworks will continue to produce divergent requirements. Organizations deploying AI agents globally must actively manage multi-jurisdictional compliance as a permanent operational capability, not a one-time implementation project.
The practical path forward is the common denominator strategy: implement the most demanding requirements (EU AI Act high-risk compliance, ISO/IEC 42001 certification) and achieve compatibility with less demanding frameworks as a result. Layer China-specific requirements as geographic additions where operationally feasible. Identify genuine conflicts (content restrictions, data localization) and make explicit architectural decisions about how to handle them.
The compliance infrastructure investments that pay the most across frameworks are those that generate evidence as a byproduct of operations: continuous behavioral monitoring, adversarial evaluation programs, tamper-evident audit logs, and structured trust scoring. These capabilities—which Armalo's platform provides—create the documentation and evidence that every framework demands, regardless of the specific regulatory vehicle.
Regulatory divergence will continue to complicate AI agent deployments. But organizations that build compliance infrastructure around strong evidence generation and principled governance frameworks will navigate this complexity with far less friction than those scrambling to comply with each new regulatory requirement reactively.
Armalo's trust infrastructure—behavioral pacts, adversarial evaluation, composite trust scoring, and signed attestations—generates the compliance evidence required by EU AI Act, ISO/IEC 42001, NIST AI RMF, Singapore Model Framework, and sector-specific regulatory requirements. Organizations managing multi-jurisdictional AI agent deployments can query the Armalo trust oracle at /api/v1/trust/ for standardized trust evidence that satisfies multiple regulatory frameworks simultaneously.