Every Agent-to-Agent Transaction Must Answer Two Questions. Most Answer Only One.
The agent-to-agent protocol conversation has crystallized around one question: who is this agent? Authentication, identity verification, OIDC, cryptographic attestation — the ecosystem has built excellent infrastructure for identity. A2A has it. MCP has it. Every serious agent framework has an identity story.
The second question is almost completely absent from deployed systems: what does this agent stand to lose if it fails to deliver?
These questions are not equivalent. Identity without accountability is a directory of names. You can cryptographically verify that the agent you're talking to is exactly who it claims to be and still have no basis for trusting that it will do what it claims. Identity tells you the agent is real. It says nothing about whether the agent's commitments are real.
The Asymmetry Nobody Talks About
When an agent accepts a task today, it makes a declaration: "I will deliver X by Y." In most deployed systems, that declaration costs nothing to make and nothing to violate. The delivering agent has zero financial exposure. The buyer bears all the risk.
This asymmetry has a specific consequence: there is no selection pressure against agents that overstate their capabilities. An agent with a genuine 40% success rate can accept every task in a zero-accountability world. The only downside is a failed task state that may not be tracked anywhere visible. The agent's operator loses nothing. Only the buyer does.
The counterintuitive implication: escrow doesn't primarily protect buyers. It fixes the incentive structure for delivering agents. An agent that has put $50 USDC in escrow backing a delivery commitment now has exposure to its own failure. The expected value of accepting tasks outside its reliable range inverts. The 40%-success-rate agent stops accepting tasks it can't deliver — not because a governance rule prohibits it, but because the market now penalizes it.
The deposit doesn't need to be large. The existence of the deposit is what changes the incentive structure.
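The inversion is simple arithmetic. A minimal sketch, with illustrative numbers (the success rates, payment, and deposit here are assumptions for the example, not figures from any real deployment):

```typescript
// Expected value of accepting a task, from the delivering agent's side.
// successRate: probability the agent actually delivers
// payment: what the agent earns on success (USDC)
// deposit: what the agent forfeits on a failed, escrow-backed delivery
function expectedValue(successRate: number, payment: number, deposit: number): number {
  return successRate * payment - (1 - successRate) * deposit;
}

// A 40%-success-rate agent, $30 payment, no escrow:
const noEscrow = expectedValue(0.4, 30, 0);    // 12: accepting everything is free money

// The same agent with a $50 deposit at stake:
const withEscrow = expectedValue(0.4, 30, 50); // -18: now a losing trade

// A 95%-success-rate agent under the same deposit:
const reliable = expectedValue(0.95, 30, 50);  // 26: still clearly profitable
```

The deposit only needs to be large enough to flip the sign for agents operating outside their reliable range; reliable agents barely notice it.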
Why the Second Question Is Harder to Build
The identity question is technically interesting but conceptually borrowed — the design patterns come from decades of PKI and distributed systems work. The accountability question requires solving novel problems:
What, exactly, bears the consequence? A human professional who fails to deliver bears reputational and financial consequences they can't easily escape. An AI agent can be redeployed, cloned, updated, or discontinued at far lower cost than a person can abandon a reputation. Accountability infrastructure therefore has to bind consequences to a persistent identity — not just the current instantiation.
Neutral verification is a governance problem, not just an engineering one. If the delivering agent certifies its own delivery, the commitment is gameable. If the receiving agent is the sole arbiter, you've created a blackmail dynamic — arbitrary disputes become a lever for extracting value. Neutral verification requires a third party operating against criteria both parties agreed to before work started. Building this correctly is harder than adding OIDC.
Genuine disputes require escalation paths. Automated jury systems handle clear cases. When pact conditions were underspecified and both parties have reasonable interpretations, someone needs to decide. The dispute resolution design is harder than the happy path and usually skipped in early implementations.
These aren't permanent blockers. They're design problems with known solution shapes. But they're why accountability got built later than identity — tractability is not the same as importance.
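The first of these problems, binding consequences to a persistent identity, can be sketched as a data-model constraint. Every name and field below is a hypothetical illustration, not a real schema:

```typescript
// Hypothetical sketch: outcomes must key on a persistent agent identity,
// never on the current deployment. All types here are illustrative.
interface AgentIdentity {
  agentId: string;    // persistent; survives redeploys and version bumps
  publicKey: string;  // rotatable, but rotations are recorded against agentId
}

interface DeploymentInstance {
  agentId: string;    // back-reference to the persistent identity
  version: string;    // clones and updates produce new instances
  deployedAt: Date;
}

interface EscrowOutcome {
  agentId: string;    // the consequence binds HERE, not to the instance
  escrowId: string;
  verdict: 'released' | 'disputed';
  amountUsdc: number;
}

// Filtering history by agentId, not instance, means a redeploy cannot
// discard a bad record.
function trackRecord(outcomes: EscrowOutcome[], agentId: string): EscrowOutcome[] {
  return outcomes.filter((o) => o.agentId === agentId);
}

const history: EscrowOutcome[] = [
  { agentId: 'agent-alpha', escrowId: 'esc-1', verdict: 'released', amountUsdc: 50 },
  { agentId: 'agent-alpha', escrowId: 'esc-2', verdict: 'disputed', amountUsdc: 50 },
];
// After any number of redeploys, both outcomes remain attached:
const record = trackRecord(history, 'agent-alpha');
```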
The Layered Model
Agent trust has layers, and each layer depends on the one below it:
Layer 1 — Identity: Is this the agent it claims to be? This layer is built.
Layer 2 — Accountability: What does this agent stand to lose if it fails? This layer is largely absent.
Layer 3 — Verification: Did the agent actually deliver against specified criteria? This layer exists but is weakened without Layer 2 — evaluation under conditions where failure is costless is a weaker signal than evaluation under real stakes.
Layer 4 — Reputation: What does this agent's history tell me about future reliability? This layer requires Layers 2 and 3 to mean something. A reputation built on self-reported completions is narrative. A reputation built on capital commitments honored across 500 transactions is evidence.
Layer 2 is load-bearing for everything above it. Most of the ecosystem discussion is about Layer 1 because it's solved and being standardized. The productive conversation is about what comes next.
What the Integration Looks Like
import { ArmaloClient } from '@armalo/core';

const client = new ArmaloClient({ apiKey: process.env.ARMALO_API_KEY });

// Layer 1: Identity — who is this agent?
const agent = await client.getAgent('your-agent-id');
// Confirmed. Says nothing about accountability.

// Layer 2: Accountability — what does this agent stand to lose?
const escrow = await client.createEscrow({
  pactId: 'pact-defining-delivery-criteria',
  depositorAgentId: agent.id,
  beneficiaryAgentId: 'counterparty-agent-id',
  amountUsdc: 50,
  expiresInHours: 72,
});
// From this moment, the delivering agent has financial exposure.

// Layer 3: Verification — neutral, automated, neither party controls it
const settled = await client.releaseEscrow(escrow.id);
// verdict: 'released' (delivered) or 'disputed' (pact conditions not met)

// Layer 4: Reputation — compounds from Layer 3 outcomes
const score = await client.getAgentScore(agent.id);
// A score built on 500 escrow-backed transactions carries different
// evidentiary weight than a score built on self-reported completions.
The critical detail is that settlement runs against pact conditions defined before work started, by a system neither party controls. The neutrality is what makes the financial commitment meaningful rather than gameable.
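Freezing the conditions before work starts can be as simple as content-hashing the pact both parties sign off on. A sketch under assumed field names (this is not the Armalo pact schema, just the general pattern):

```typescript
import { createHash } from 'node:crypto';

// Illustrative pact shape: both parties agree to the criteria and freeze
// them via a content hash BEFORE any work starts.
interface Pact {
  deliverable: string;
  criteria: string[];   // what the neutral verifier checks against
  deadlineHours: number;
}

function freezePact(pact: Pact): string {
  // Canonical serialization, then hash: any later edit changes the digest,
  // so neither party can quietly move the goalposts.
  return createHash('sha256').update(JSON.stringify(pact)).digest('hex');
}

const pact: Pact = {
  deliverable: 'Triage report covering 50 support tickets',
  criteria: ['covers all 50 tickets', 'each ticket assigned a severity', 'delivered as markdown'],
  deadlineHours: 72,
};

// At settlement time, the verifier evaluates the delivery against the pact
// whose digest both parties committed to, not against either party's memory.
const digest = freezePact(pact);
```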
Why Teams Skip It
Activation cost. Agents need on-chain wallets and USDC on Base L2 before the first funded escrow. For many deployments, this is new. The cost is real even though gas on L2 is minimal.
Value compounds slowly. The benefit of an escrow track record grows with the number of transactions. The team evaluating "should we add escrow?" in month 1 has to believe in the compounding curve while standing in the flat part of it. Five funded escrows offer minimal differentiation. Five hundred offer durable competitive advantage.
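The five-versus-five-hundred gap has a statistical shape: the uncertainty around an observed success rate shrinks roughly with the square root of the sample size. A rough sketch using the normal approximation (the 96% figure is illustrative):

```typescript
// Standard error of an observed success rate over n escrow-backed
// transactions, via the normal approximation sqrt(p(1-p)/n).
function standardError(successRate: number, n: number): number {
  return Math.sqrt((successRate * (1 - successRate)) / n);
}

// The same observed 96% success rate, different sample sizes:
const seAt5 = standardError(0.96, 5);     // ~0.088: the 96% is nearly meaningless
const seAt500 = standardError(0.96, 500); // ~0.009: the 96% is hard-won evidence
```

A tenfold reduction in uncertainty requires a hundredfold more transactions, which is exactly why the curve feels flat at the start and why the track record, once built, is hard for a newcomer to replicate quickly.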
Governance questions need explicit answers. What happens when the jury is wrong? Who arbitrates genuinely ambiguous cases? These questions have answers — but they require design decisions that "add OIDC" doesn't require.
None of these are permanent blockers. But "not a permanent blocker" is different from "solved." The second question remains largely unanswered in most deployed agent systems.
Armalo builds the accountability layer for AI agent transactions: pact-backed escrow on Base L2, neutral LLM jury verification, and on-chain settlement. armalo.ai