Federated Trust Graphs for Multi-Organization AI Agent Networks
When multiple organizations' agents collaborate, trust must federate across organizational boundaries. A deep technical guide to federated trust architectures, cross-org identity federation, trust score portability, bilateral vs. multilateral trust agreements, data sovereignty, and W3C DID-based federation.
The internet itself is a federated trust system. No single organization controls it — yet email sent from a Google-hosted account arrives at a Microsoft-hosted inbox with sufficient confidence that it is not forged. This works because of federated protocols: SMTP, DKIM, DMARC, and SPF collectively establish a chain of trust that spans organizational boundaries without requiring a central authority to vet every message. The trust is federated across many organizations' infrastructure, each of which vouches for the identities under its domain.
As AI agent networks evolve from single-organization deployments to multi-organization collaborative systems, they will need analogous federated trust infrastructure. A supply chain AI agent operated by a manufacturer needs to exchange information with an AI agent operated by a logistics provider, which exchanges with an agent operated by a retailer, which exchanges with an agent operated by a financial institution. Each of these agents operates under its deploying organization's security controls, behavioral specifications, and governance requirements. None of them can unilaterally vouch for the others. But they need to collaborate efficiently — and that collaboration requires trust that can be established and verified across organizational boundaries.
This document provides a comprehensive technical treatment of federated trust graphs for multi-organization AI agent networks: the architectures, the protocols, the governance models, and the implementation patterns that enable cross-organizational agent collaboration with verifiable trust.
TL;DR
- Federated trust for AI agents requires infrastructure analogous to email (DKIM/DMARC) and web PKI — a distributed trust system where no single party holds all trust, but trust decisions are verifiable by all.
- Two primary federation architectures: hub-and-spoke (a central trust authority federates organization-specific registries) and mesh (direct bilateral trust relationships, no central hub). Each has distinct security properties.
- W3C Decentralized Identifiers (DIDs) provide the identity substrate for agent-level federation — each agent can have a DID that resolves to its trust document without requiring a central registry.
- Cross-organization identity federation using SAML and OIDC enables organizational-level trust; DID-based federation extends this to agent-level trust with cryptographic verifiability.
- Trust score portability — the ability for an agent's trust score in one organization's registry to be recognized by another organization — requires standardized score formats, signed attestations, and agreed score equivalency mappings.
- Data sovereignty requirements create federation constraints: EU-resident agents may not be able to share behavioral telemetry with US-based trust registries under GDPR Article 44.
- Armalo's trust oracle supports federated trust through a signed behavioral attestation format that can be verified by any organization without requiring direct access to Armalo's database.
The Multi-Organization Agent Trust Problem
Consider a procurement scenario in which a pharmaceutical company deploys an AI agent to negotiate supply agreements. The pharmaceutical company's agent needs to interact with a chemical supplier's AI agent, a logistics provider's agent, and a regulatory consultancy's agent — all from different organizations with different AI governance policies, different model providers, different trust frameworks, and potentially different jurisdictions.
For this collaboration to work securely, each agent needs to be able to answer questions about the agents it interacts with:
- Is this agent who it claims to be? (Identity authentication)
- Is this agent authorized to make the claims it is making? (Authorization verification)
- Has this agent behaved reliably in the past? (Reputation)
- Does this agent operate under governance policies compatible with my organization's requirements? (Policy compatibility)
- Can I verify any of this without asking the agent's deploying organization to confirm every claim? (Cryptographic verifiability)
These questions cannot be answered through a simple API call to the agent — an agent that is lying about its identity, authorization, or behavioral history will lie to an API query as readily as to any other query. They require externally verifiable trust infrastructure.
Architecture Option 1: Hub-and-Spoke Federation
In hub-and-spoke federation, a central trust hub (operated by a neutral third party, an industry consortium, or the dominant platform in a sector) maintains trust records for all participating organizations and agents. Organizations join the hub by registering their agents and providing behavioral attestations. When an agent from Organization A wants to verify the trustworthiness of an agent from Organization B, it queries the central hub.
┌────────────────────────────────┐
│ Central Trust Hub │
│ (Trust Attestations, │
│ Score Registry, │
│ Identity Federation) │
└───────┬──────────┬─────────────┘
│ │
┌─────────▼──┐ ┌──▼──────────┐
│ Org A │ │ Org B │
│ Agents │ │ Agents │
└────────────┘ └─────────────┘
Advantages of Hub-and-Spoke:
- Simpler to implement: organizations connect to one entity, not to each other
- Central hub can perform consistency checks across organizations
- Single source of truth for trust records
- Easier for small organizations (they don't need to implement bilateral federation with many partners)
Disadvantages of Hub-and-Spoke:
- Central point of failure: hub compromise affects all participants
- Central point of control: hub operator can arbitrarily deny trust to organizations
- Latency: all trust queries route through the hub
- Privacy concerns: hub operator sees all trust queries, potentially revealing sensitive business relationships
- Single-jurisdiction risk: hub operator's jurisdiction may conflict with some participants' regulatory requirements
Hub-and-Spoke Security Requirements
For hub-and-spoke federation to be trustworthy:
Hub Availability: The hub must have near-perfect availability, as a hub outage prevents any cross-organizational trust verification. A 99.99% uptime SLA (roughly 52 minutes of downtime per year) is the minimum for production deployments.
Hub Compromise Recovery: The hub must have a documented compromise recovery procedure. This includes: key rotation and revocation, attestation re-issuance with new keys, and notification of all registered organizations.
Non-Censorship Guarantees: The hub's governance must include protections against arbitrary denial of trust registration or verification. An operator-controlled hub that can arbitrarily refuse to register an organization — or quietly return low trust scores — is not trustworthy from any individual organization's perspective.
Transparency Logging: All hub operations (registrations, score updates, deregistrations) should be logged to a public append-only transparency log, similar to Certificate Transparency. This enables detection of unauthorized changes to trust records.
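The tamper-evidence property can be illustrated with a minimal hash-chained append-only log. This is a sketch only: it uses a simple hash chain, whereas Certificate Transparency uses a Merkle tree, which additionally supports efficient inclusion and consistency proofs. Class and field names are illustrative, not part of any standard:

```python
import hashlib
import json

class TransparencyLog:
    """Append-only log with hash chaining. Each entry's head hash commits
    to the previous head, so any retroactive modification of a record
    changes every subsequent head and is detectable by auditors."""

    def __init__(self):
        self.entries = []   # the raw records (registrations, score updates, ...)
        self.heads = []     # head hash after each append

    def append(self, record: dict) -> str:
        prev = self.heads[-1] if self.heads else "0" * 64
        payload = json.dumps(record, sort_keys=True).encode()
        head = hashlib.sha256(prev.encode() + payload).hexdigest()
        self.entries.append(record)
        self.heads.append(head)
        return head

    def verify(self) -> bool:
        """Recompute the chain from the start; False if any record was altered."""
        prev = "0" * 64
        for record, head in zip(self.entries, self.heads):
            payload = json.dumps(record, sort_keys=True).encode()
            if hashlib.sha256(prev.encode() + payload).hexdigest() != head:
                return False
            prev = head
        return True
```

In practice the heads would be published (or gossiped to monitors) so the hub cannot silently fork the log.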
Architecture Option 2: Mesh Federation
In mesh federation, trust relationships are established bilaterally or multilaterally between organizations, without a central hub. Each organization maintains its own trust registry and directly federates with partner organizations.
        Org A
       ↙     ↘
  Org B ←───→ Org C

(each edge is a bilateral trust agreement: A↔B, A↔C, B↔C)
Advantages of Mesh:
- No central point of failure
- No central point of control (organizations remain sovereign)
- Trust relationships reflect actual business relationships
- Better data sovereignty compliance (each organization's trust data stays under its jurisdiction)
Disadvantages of Mesh:
- Complexity scales with the number of participants: N organizations require N*(N-1)/2 bilateral agreements for full mesh
- No standard for trust score formats makes cross-organization comparison difficult
- Difficult for new participants to establish trust quickly (no central hub to connect to)
- Inconsistent security requirements across bilateral agreements
Mesh Federation Using W3C DIDs
The W3C Decentralized Identifiers (DIDs) standard (DID Core 1.0, a W3C Recommendation since July 2022) provides the cryptographic identity substrate for mesh federation. DIDs are:
- Globally unique identifiers that do not require a central registration authority
- Associated with DID Documents that contain public keys, service endpoints, and other metadata
- Resolvable through DID methods (mechanisms for creating, reading, updating, and deactivating DIDs) that can use blockchains, peer-to-peer systems, or domain-based resolution
Agent DID Example:
{
  "@context": [
    "https://www.w3.org/ns/did/v1",
    "https://w3id.org/security/suites/ed25519-2020/v1",
    "https://schema.armalo.ai/ai-agent/v1"
  ],
  "id": "did:web:agents.acme-corp.com:agent:enterprise-assistant",
  "verificationMethod": [
    {
      "id": "did:web:agents.acme-corp.com:agent:enterprise-assistant#signing-key",
      "type": "Ed25519VerificationKey2020",
      "controller": "did:web:agents.acme-corp.com",
      "publicKeyMultibase": "z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH"
    }
  ],
  "service": [
    {
      "id": "#trust-oracle",
      "type": "ArmaloTrustOracle",
      "serviceEndpoint": "https://armalo.ai/api/v1/trust/?agent_id=did:web:agents.acme-corp.com:agent:enterprise-assistant"
    },
    {
      "id": "#behavioral-attestation",
      "type": "BehavioralAttestation",
      "serviceEndpoint": "https://agents.acme-corp.com/.well-known/behavioral-attestation"
    },
    {
      "id": "#agent-api",
      "type": "AgentAPI",
      "serviceEndpoint": "https://agents.acme-corp.com/api/enterprise-assistant"
    }
  ],
  "armaloAgent": {
    "trustScore": 8.7,
    "lastEvaluationDate": "2026-05-01T00:00:00Z",
    "pactCount": 14,
    "certificationLevel": "verified"
  }
}
This DID document is published at https://agents.acme-corp.com/agent/enterprise-assistant/did.json (the did:web method maps the colon-separated segments after the domain to a URL path; the /.well-known/did.json location applies only to bare-domain DIDs such as did:web:agents.acme-corp.com). Any agent wishing to verify the trust of the enterprise-assistant agent can:
- Resolve the DID to get the DID document
- Follow the trust oracle service endpoint to get the Armalo trust score
- Follow the behavioral attestation service endpoint to get the signed attestation
- Verify signatures using the public key in the DID document
All of this is possible without any communication with Acme Corp's security team — the verification is cryptographic, not relational.
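The first step, DID resolution, is purely mechanical for the did:web method: the identifier itself determines the HTTPS URL of the DID document. A sketch of that mapping, following the did:web method specification (error handling kept minimal):

```python
from urllib.parse import unquote

def did_web_to_url(did: str) -> str:
    """Map a did:web identifier to the HTTPS URL of its DID document.

    Per the did:web method: the first colon-separated segment is the
    domain (percent-decoded, so ports appear as %3A), remaining segments
    become the URL path; a bare domain uses /.well-known/did.json.
    """
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError("not a did:web identifier")
    parts = [unquote(p) for p in did[len(prefix):].split(":")]
    domain, path = parts[0], parts[1:]
    if not path:
        return f"https://{domain}/.well-known/did.json"
    return f"https://{domain}/{'/'.join(path)}/did.json"
```

A verifier would fetch this URL over HTTPS, then follow the service endpoints and check signatures as described above.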
Cross-Organization Identity Federation: SAML, OIDC, and Beyond
While DIDs handle agent-level identity, organizational-level identity for agents (which organization does this agent belong to, is that organization trusted by my organization) typically uses established enterprise identity federation standards.
OIDC-Based Organizational Trust Federation
OpenID Connect (OIDC) enables organizations to establish trust relationships where Organization B trusts Organization A's OIDC tokens as evidence that an agent belongs to Organization A. This is analogous to how Workload Identity Federation works in cloud IAM.
Federation Setup (Organization B trusts Organization A's agents):
{
  "trustedIssuers": [
    {
      "issuerUrl": "https://auth.acme-corp.com",
      "orgId": "org:acme-corp",
      "orgDisplayName": "Acme Corp",
      "trustLevel": "partner",
      "allowedAudiences": ["api.partner-platform.com"],
      "requiredClaims": {
        "agent_verified": true,
        "agent_trust_score_min": 7.0
      },
      "jwksUri": "https://auth.acme-corp.com/.well-known/jwks.json"
    }
  ]
}
When an Acme Corp agent presents a JWT to Organization B's platform:
{
  "iss": "https://auth.acme-corp.com",
  "sub": "did:web:agents.acme-corp.com:agent:enterprise-assistant",
  "aud": "api.partner-platform.com",
  "exp": 1715366400,
  "iat": 1715362800,
  "agent_verified": true,
  "agent_trust_score": 8.7,
  "agent_last_evaluation": "2026-05-01T00:00:00Z",
  "behavioral_attestation_uri": "https://armalo.ai/api/v1/trust/attestation/..."
}
Organization B's platform validates:
- JWT signature (using Acme Corp's JWKS)
- Issuer claim (matches trusted issuer configuration)
- Audience claim (matches the platform's expected audience)
- Expiry claim (token not expired)
- Required claims (agent_verified: true, trust_score >= 7.0)
This establishes that the agent is authenticated by Acme Corp and meets minimum trust requirements.
SAML-Based Federation for Legacy Enterprise Environments
Many enterprise environments still use SAML for identity federation. SAML 2.0 attributes can be extended to carry AI agent trust assertions:
<saml:AttributeStatement>
  <saml:Attribute Name="agentId">
    <saml:AttributeValue>did:web:agents.acme-corp.com:agent:enterprise-assistant</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="agentTrustScore">
    <saml:AttributeValue>8.7</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="agentBehavioralAttestationUri">
    <saml:AttributeValue>https://armalo.ai/api/v1/trust/attestation/...</saml:AttributeValue>
  </saml:Attribute>
  <saml:Attribute Name="agentPolicyCertification">
    <saml:AttributeValue>ISO-27001-compliant</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
Trust Score Portability
Trust score portability enables an agent's trust score established in one organization's registry to be recognized by partner organizations. The analogy is consumer credit: a FICO score computed from Equifax data is meaningful to a lender that primarily works with Experian, because the bureaus are different registries with a shared, well-understood scoring framework.
The Portability Problem
Trust scores are not inherently portable because:
- Different organizations use different scoring methodologies
- Different organizations weight scoring dimensions differently
- A score of 8.5 in Organization A's registry may not be equivalent to a score of 8.5 in Organization B's registry
- An organization's trust score is specific to the behavioral evaluation suite they used
Standardized Score Format for Portability
For trust scores to be portable, they must be expressed in a standardized format that includes:
- The score value
- The methodology used to compute the score
- The dimensions assessed and their weights
- The evaluation date and evaluating entity
- A cryptographic signature from the evaluating entity
Armalo's signed behavioral attestation format is designed for portability:
{
  "attestation": {
    "agentId": "did:web:agents.acme-corp.com:agent:enterprise-assistant",
    "evaluator": "Armalo AI",
    "evaluatorDid": "did:web:armalo.ai",
    "evaluationDate": "2026-05-01T00:00:00Z",
    "methodology": "Armalo Composite Trust Score v3.2",
    "methodologyDocument": "https://armalo.ai/docs/trust-methodology/v3.2",
    "trustScore": 8.7,
    "dimensions": {
      "accuracy": 0.91,
      "selfAudit": 0.88,
      "reliability": 0.90,
      "safety": 0.94,
      "security": 0.85,
      "bondCoverage": 0.82,
      "latency": 0.88,
      "scopeHonesty": 0.89,
      "costEfficiency": 0.87,
      "modelCompliance": 0.90,
      "runtimeCompliance": 0.85,
      "harnessStability": 0.91,
      "supplyChainIntegrity": 0.93
    },
    "signature": {
      "algorithm": "Ed25519",
      "publicKey": "did:web:armalo.ai#signing-key-2026",
      "signature": "base64signature..."
    }
  }
}
Any organization that trusts Armalo's evaluation methodology can verify this attestation by:
- Resolving did:web:armalo.ai to get Armalo's DID document
- Looking up the referenced signing key
- Verifying the signature over the attestation content
This enables trust score portability without requiring the consuming organization to have a direct relationship with Armalo — the signed attestation is self-contained and verifiable.
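Self-contained verification hinges on both sides deriving identical bytes from the attestation before checking the signature. The sketch below shows that sign-over-canonical-bytes pattern, with two stated substitutions: an HMAC stands in for the Ed25519 signature the attestation format actually specifies (so the sketch needs no third-party library), and json.dumps with sorted keys stands in for a canonicalization standard such as JCS (RFC 8785):

```python
import hashlib
import hmac
import json

def canonicalize(attestation: dict) -> bytes:
    """Deterministic byte encoding of the attestation body: sorted keys,
    no insignificant whitespace. Production systems should use a formal
    canonicalization scheme (e.g. RFC 8785 JCS) so that every verifier,
    in any language, derives identical bytes."""
    return json.dumps(attestation, sort_keys=True, separators=(",", ":")).encode()

def sign_attestation(attestation: dict, key: bytes) -> str:
    # Stand-in for Ed25519 signing with the evaluator's private key.
    return hmac.new(key, canonicalize(attestation), hashlib.sha256).hexdigest()

def verify_attestation(attestation: dict, signature: str, key: bytes) -> bool:
    # Stand-in for Ed25519 verification with the evaluator's public key
    # (resolved from the evaluator's DID document).
    return hmac.compare_digest(sign_attestation(attestation, key), signature)
```

Note that HMAC is symmetric, so unlike Ed25519 it cannot give third parties verification without the signing key; it is used here only to keep the canonicalize-then-sign flow runnable.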
Score Equivalency Mappings
Different trust frameworks will remain in use across different organizations and sectors. Score equivalency mappings enable organizations to map between their local trust framework and Armalo's standardized framework:
{
  "equivalencyMapping": {
    "sourceFramework": "Armalo Composite Trust Score v3.2",
    "targetFramework": "PartnerOrg Internal Agent Trust Framework v2",
    "mappingDate": "2026-01-15",
    "mappingAuthority": "Armalo + PartnerOrg Joint Trust Committee",
    "scoreMappings": [
      {"armaloMin": 9.0, "armaloMax": 10.0, "partnerLevel": "Gold", "partnerScore": "90-100"},
      {"armaloMin": 7.0, "armaloMax": 8.9, "partnerLevel": "Silver", "partnerScore": "70-89"},
      {"armaloMin": 5.0, "armaloMax": 6.9, "partnerLevel": "Bronze", "partnerScore": "50-69"},
      {"armaloMin": 0.0, "armaloMax": 4.9, "partnerLevel": "Unrated", "partnerScore": "0-49"}
    ]
  }
}
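Applying such a mapping at verification time is a range lookup. A minimal sketch (note that bands like the example's leave small gaps, e.g. between 8.9 and 9.0; a production mapping should make the bands contiguous, or a score of 8.95 has no translation):

```python
def map_score(armalo_score: float, mapping: dict) -> str:
    """Translate an Armalo trust score into the partner's level using a
    score equivalency mapping shaped like the example above."""
    for band in mapping["equivalencyMapping"]["scoreMappings"]:
        if band["armaloMin"] <= armalo_score <= band["armaloMax"]:
            return band["partnerLevel"]
    raise ValueError(f"score {armalo_score} falls outside the mapped ranges")
```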
Bilateral vs. Multilateral Trust Agreements
Bilateral Trust Agreements
A bilateral trust agreement is a direct trust relationship between two organizations. Each organization agrees to:
- Recognize the other's agent trust scores (within agreed score equivalency mappings)
- Share behavioral telemetry as agreed
- Notify each other of security incidents affecting agents that interact with each other's systems
- Comply with agreed data handling requirements
Bilateral agreements are appropriate for:
- Close supply chain partners with deep ongoing collaboration
- Relationships with high-value or high-risk agent interactions
- Situations where the organizations want fine-grained control over the trust relationship
Bilateral agreements scale poorly: N organizations require N*(N-1)/2 agreements for full coverage. An organization with 100 supply chain partners needs 4,950 bilateral agreements for full coverage — clearly unmanageable.
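The quadratic growth is easy to check directly:

```python
from math import comb

def bilateral_agreements(n_orgs: int) -> int:
    """Agreements required for a full mesh of n organizations: n*(n-1)/2,
    i.e. one agreement per unordered pair."""
    return comb(n_orgs, 2)
```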
Multilateral Trust Frameworks
Multilateral trust frameworks enable multiple organizations to join a common trust infrastructure through a single agreement rather than bilateral agreements with each participant.
Industry Consortium Models: Financial services (SWIFT, Clearing House), healthcare (Carequality, CommonWell), and supply chain (GS1) sectors have established multilateral trust frameworks for data exchange. Similar frameworks for AI agent trust are emerging. Organizations joining the framework agree to:
- Common trust standards (minimum scoring requirements, evaluation methodology)
- Common data handling requirements
- Common incident reporting requirements
- Audit rights for the consortium authority
Open Standard Federation: Rather than a closed consortium, some sectors are developing open standards that any organization can implement without joining a specific consortium. The W3C Verifiable Credentials standard, combined with DID-based identities, provides an open standard for agent trust attestations that can be implemented by any organization.
Data Sovereignty in Federated Trust Systems
Data sovereignty requirements — the legal obligation to keep certain data within specific jurisdictions — create constraints on federated trust architectures.
GDPR and Cross-Border Trust Data Flows
Under GDPR Article 44, personal data cannot be transferred to countries outside the EU/EEA unless the destination country provides an "adequate level of protection" or appropriate safeguards are in place. For AI agent trust federation:
What constitutes "personal data" in agent trust telemetry:
- Behavioral logs that can be linked to specific natural persons (employee agents, user agents)
- Agent interaction logs that include personally identifiable information about users
- Even "anonymized" behavioral data may be personal if it can be re-identified
Implications for federated trust architecture:
- EU-resident agents cannot share behavioral telemetry with US-based trust registries unless EU-US Data Privacy Framework adequacy applies
- Cross-border trust attestation must be designed to avoid transmitting personal data
- Aggregated, anonymized trust scores may be shareable; individual interaction logs typically are not
Sovereignty-Preserving Federation Design
A sovereignty-preserving federated trust design separates:
- Local behavioral evaluation: conducted within jurisdiction, using locally-resident data
- Trust attestation export: only the signed trust score (not the underlying behavioral data) is shared across borders
- Remote verification: consuming organization verifies the cryptographic attestation without receiving the underlying data
This design enables meaningful trust federation across jurisdictions while keeping personal data within its required jurisdiction.
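The export boundary in this design can be expressed as an allow-list over the local evaluation record. A sketch, with field names following the attestation example earlier in this document plus a hypothetical behavioralLogs field standing in for jurisdiction-bound telemetry:

```python
# Fields permitted to leave the jurisdiction: the signed score and its
# provenance metadata, never the underlying behavioral telemetry.
EXPORTABLE_FIELDS = {
    "agentId", "evaluator", "evaluatorDid", "evaluationDate",
    "methodology", "trustScore", "signature",
}

def export_attestation(local_evaluation: dict) -> dict:
    """Build the cross-border attestation from a locally held evaluation
    record. An allow-list (rather than a deny-list) means any newly added
    field, which might contain personal data, is excluded by default."""
    return {k: v for k, v in local_evaluation.items() if k in EXPORTABLE_FIELDS}
```

The allow-list-by-default choice matters for GDPR: adding a new telemetry field to the local record never silently widens the cross-border data flow.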
How Armalo Enables Multi-Organization Trust Federation
Armalo's trust oracle and behavioral attestation system are designed with federation as a core architectural requirement.
Armalo as a Federation Hub
For organizations that want hub-and-spoke simplicity, Armalo can serve as the central trust hub. Organizations register their agents with Armalo, which:
- Performs standardized behavioral evaluation
- Issues signed attestations verifiable by any organization
- Provides the trust oracle API for real-time trust queries
- Maintains transparency logs of all trust assessments
Armalo Attestations as Federation Primitives
For organizations that want mesh federation sovereignty, Armalo's signed attestations can be used as portable trust primitives without using Armalo as a hub. Any organization can:
- Have their agents evaluated by Armalo
- Receive signed behavioral attestations
- Publish those attestations in their own DID documents
- Enable partners to verify attestations without routing through Armalo
This "verify offline" capability means Armalo attestations can function as trust primitives even in airgapped or latency-sensitive environments.
Cross-Organizational Pact Networks
Armalo's behavioral pact system can span organizational boundaries. A multi-party pact might specify:
- Organization A's agent commits to providing X quality of service to Organization B's agents
- Organization B's agent commits to Y data handling requirements when processing data from Organization A's agent
- Both organizations agree to shared incident reporting requirements
- Both organizations agree that Armalo's evaluation results are authoritative for dispute resolution
Multi-party pacts formalize the governance layer of bilateral trust agreements and make the terms verifiable and monitorable, not just contractual.
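One way such a multi-party pact could be represented in code is sketched below. This is a hypothetical structure for illustration only; the document does not specify Armalo's actual pact schema, and all class and field names here are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PactCommitment:
    party: str       # committing organization's DID, e.g. "did:web:org-a.example"
    obligation: str  # human-readable description of the commitment
    metric: str      # measurable dimension the commitment is judged on
    threshold: float # level the metric must meet

@dataclass(frozen=True)
class MultiPartyPact:
    pact_id: str
    parties: tuple[str, ...]
    commitments: tuple[PactCommitment, ...]
    arbiter: str  # DID of the evaluator all parties accept for disputes

    def commitments_for(self, party: str) -> list[PactCommitment]:
        """All obligations a given party has taken on under this pact."""
        return [c for c in self.commitments if c.party == party]
```

Making commitments machine-readable (metric plus threshold, not just prose) is what turns the pact from a contract into something an evaluator can monitor continuously.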
Conclusion: Federated Trust as AI Agent Infrastructure
The AI agent economy cannot achieve its potential if every cross-organizational agent interaction requires manual trust establishment. Just as email federation enabled global communication without requiring bilateral agreements between every pair of email servers, federated AI agent trust infrastructure will enable global agent collaboration without bilateral trust negotiations for every partnership.
The technical components are available: DIDs for cryptographic identity, OIDC/SAML for organizational-level federation, signed attestations for portable trust scores, and transparency logs for auditability. The governance frameworks are emerging: multilateral industry consortia, open standards, and sovereignty-preserving designs that address jurisdictional constraints.
The organizations that invest in federated trust infrastructure today — rather than waiting for industry-wide standards to fully mature — will have first-mover advantages in the multi-organizational AI agent economy. They will be able to onboard agent partners faster, with less friction and greater confidence. They will be positioned to participate in agent marketplaces and supply chain networks that require verifiable trust. And they will be building the trust infrastructure that the AI agent economy needs.
The infrastructure for human economic trust — credit bureaus, identity verification, contract enforcement — was built over centuries. The equivalent infrastructure for AI agent economic trust is being built now, in years, by organizations that understand why it matters.