Memory Attestations: How AI Agents Build Cryptographically Verifiable Track Records
Agent reputation should be portable and verifiable — not locked in one platform's database. Memory attestations provide the cryptographic architecture for cross-platform trust.
The trust problem in the AI agent economy has a structural version that doesn't get enough attention: platform lock-in of reputation. An agent that has built a solid track record — 18 months of consistent performance, hundreds of successful transactions, verified evaluation results — has that record stored in whatever platform it has been operating on. If it wants to move to a new platform, or engage with a counterparty on a different system, it starts from zero.
This is not just inconvenient. It's architecturally equivalent to requiring people to rebuild their credit history every time they change banks. It creates strong lock-in for incumbent platforms, makes competition harder, and penalizes agents for the operational decision to diversify their deployments.
The human internet solved an analogous problem for identity credentials with Verifiable Credentials (VCs): a standard for issuing, presenting, and verifying credentials (university degrees, professional certifications, identity documents) in a way that's cryptographically verifiable without requiring the verifier to query the issuer's database in real time. The agent economy needs the same architecture for behavioral track records.
Memory attestations are that architecture.
TL;DR
- Platform lock-in of agent reputation is a structural problem: Current systems make behavioral track records non-portable — agents restart reputation from zero on every new platform.
- Memory attestations are cryptographically signed behavioral records: They're tamper-evident, attributable, portable, and verifiable by third parties without trusting the original issuer.
- The DID + attestation combination breaks platform lock-in: Agents carry their verified behavioral history as a portable credential, not as a platform-specific database record.
- Selective disclosure enables privacy-preserving verification: Agents can prove specific claims about their track record without revealing their full behavioral history.
- Revocation without erasure preserves integrity: An agent can supersede a specific attestation (marking it outdated) without destroying the historical record.
Why Database Records Aren't Enough
The naive approach to agent track records is a database record: store evaluations, transactions, and behavioral events in a table, provide an API for querying it, and let counterparties query the database when they need to verify an agent's history.
This works in a single-platform ecosystem where the database operator is implicitly trusted. It fails in an open ecosystem for four reasons.
Mutability: Database records can be modified or deleted. An operator who wants to hide a poor performance period can simply delete or modify the relevant records. There's no cryptographic proof that the records haven't been tampered with.
Centralized trust: A counterparty querying the database is trusting the database operator to provide accurate records. If the database operator has an incentive to inflate the agent's reputation (as platform operators who charge based on agent usage might), the trust assumption is broken.
Platform dependency: The track record is accessible only through the platform's API. If the platform goes down, changes its API, raises its prices, or becomes adversarial, access to the track record is disrupted.
No privacy controls: The platform has full visibility into all behavioral data. There's no mechanism for selective disclosure — sharing specific claims about a track record without revealing the underlying data.
Memory attestations solve all four problems.
The Architecture
A memory attestation is a signed data structure with the following components:
Subject: The agent DID that the attestation describes.
Issuer: The entity that created and signed the attestation. For self-attestations (the agent attesting to its own performance), this is the agent's DID. For third-party attestations (a counterparty attesting to transaction completion, or Armalo attesting to an evaluation result), this is the attesting party's DID.
Claim: The specific behavioral claim being attested. Examples: "Evaluation run on [date] produced accuracy score of 94.2% on [test set]," "Transaction [ID] was completed successfully with counterparty [DID] for amount [X] USDC," "Pact conditions were met on evaluation run [ID]."
Evidence link: A link to the underlying evidence (evaluation logs, transaction records, LLM session data) that supports the claim. The evidence can be stored off-chain (in a content-addressed store) with the attestation containing a cryptographic hash that verifies the evidence hasn't been modified.
Signature: A cryptographic signature from the issuer's private key, applied to the full attestation content. If any field in the attestation is modified after signing, the signature becomes invalid — a counterparty can verify the signature to confirm the attestation hasn't been tampered with.
Revocation reference: A reference to a revocation registry where the issuer can mark an attestation as superseded without deleting it. This enables attestations to be updated (a more recent evaluation supersedes an older one) without destroying the historical record.
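The structure above can be sketched in code. This is a minimal illustration, not Armalo's actual format: the field names are assumptions, and HMAC-SHA256 stands in for the Ed25519 signing a real DID system would use — the point is that the signature covers every field, so any modification invalidates it.

```python
import hashlib
import hmac
import json

def sign_attestation(attestation: dict, private_key: bytes) -> dict:
    """Canonicalize the attestation (sorted keys, compact JSON) and
    attach a signature over the full content. HMAC-SHA256 is a stand-in
    for the asymmetric signature a real DID system would use."""
    payload = json.dumps(attestation, sort_keys=True, separators=(",", ":"))
    signature = hmac.new(private_key, payload.encode(), hashlib.sha256).hexdigest()
    return {**attestation, "signature": signature}

def verify_attestation(signed: dict, key: bytes) -> bool:
    """Re-derive the signature over every field except the signature
    itself; any tampering changes the payload and fails the check."""
    claimed = signed["signature"]
    body = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

# Hypothetical attestation covering the five components described above.
attestation = {
    "subject": "did:example:agent-123",               # agent the claim is about
    "issuer": "did:example:armalo",                   # signing party
    "claim": {"type": "evaluation", "accuracy": 0.942},
    "evidence": {"uri": "ipfs://...", "sha256": "ab12..."},  # hash-linked evidence
    "revocation": {"registry": "did:example:armalo", "index": 17},
}
key = b"issuer-private-key"                           # placeholder key material
signed = sign_attestation(attestation, key)

assert verify_attestation(signed, key)
# Changing any field after signing breaks verification:
tampered = {**signed, "claim": {"type": "evaluation", "accuracy": 0.99}}
assert not verify_attestation(tampered, key)
```

Note that verification needs only the attestation and the issuer's key material — no API call to the issuer, which is what makes the record portable.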
Portability: Breaking Platform Lock-In
The key property of cryptographically signed attestations is that verification doesn't require trusting the issuer — it just requires being able to verify the signature against the issuer's public key, which is anchored in the issuer's DID document.
This is the same architecture as HTTPS certificates: your browser doesn't query the certificate authority in real time when you visit a site — it verifies the site's certificate against the CA's root certificate, which is pre-installed in the browser. For agent attestations: a counterparty doesn't need to query Armalo's database — it verifies the signature against Armalo's public key, which is in Armalo's DID document on-chain.
The practical implication: an agent can carry its behavioral attestations to any counterparty on any platform, and that counterparty can verify the attestations' authenticity without querying Armalo at all. The attestations are portable because they're self-contained verifiable claims, not database records that require an API call to resolve.
This breaks the lock-in mechanism directly. An agent moving from one platform to another carries its attestations with it. A new counterparty on a new platform can verify the agent's history independently. The agent starts at its earned trust level, not at zero.
Selective Disclosure: Privacy-Preserving Verification
Not all counterparties need to see all of an agent's behavioral history. A new customer engagement might only need verification that the agent has successfully completed similar tasks before — not the full history of every evaluation and transaction.
Selective disclosure is the mechanism for presenting specific claims from a larger set of attestations without revealing the full set. The most common implementation uses zero-knowledge proofs or hash-based credential schemes that allow an agent to prove: "I have at least 50 successful transaction attestations from counterparties with reputation scores above 800" without revealing which specific transactions, with which specific counterparties, for what amounts.
Armalo's selective disclosure model supports several common verification patterns:
Minimum performance threshold: "This agent has evaluation attestations showing accuracy > 90% in the last 90 days" — verifiable without revealing the specific evaluation scores.
Transaction volume proof: "This agent has completed more than 100 transactions, all with verified completion attestations" — verifiable without revealing transaction amounts or counterparty identities.
Incident-free history: "This agent has no unresolved dispute attestations in the last 12 months" — verifiable without revealing the full dispute history.
Scope coverage: "This agent has attestations from Armalo confirming pact coverage for [specific task type]" — verifiable without revealing the full pact content.
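The hash-based end of this spectrum can be sketched with a simple commitment scheme (the zero-knowledge variants are more involved). This is an illustration under assumed interfaces, not Armalo's protocol: the agent commits to the digest of its full attestation set, then discloses only selected items; the verifier confirms membership and can check set size (for volume claims) without ever seeing the undisclosed contents.

```python
import hashlib
import json

def digest(obj) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def commit(attestations):
    """Commitment = hash over the sorted digests of every attestation.
    Publishing it binds the agent to the set without revealing contents."""
    digests = sorted(digest(a) for a in attestations)
    return digest(digests), digests

def verify_disclosure(commitment, digests, revealed):
    """Check that each revealed attestation is a member of the committed
    set. Undisclosed attestations stay hidden behind their digests."""
    if digest(sorted(digests)) != commitment:
        return False
    return all(digest(a) in digests for a in revealed)

# Hypothetical history of 120 transaction attestations.
history = [{"type": "transaction", "id": i, "amount": 10 * i} for i in range(120)]
commitment, digests = commit(history)

# Agent discloses 3 transactions; verifier confirms membership...
revealed = history[:3]
assert verify_disclosure(commitment, digests, revealed)
# ...and can check a volume claim ("more than 100 transactions")
# from the digest list alone, without seeing amounts or counterparties.
assert len(digests) > 100
```

Production systems would use zero-knowledge proofs or a salted-disclosure scheme (so identical claims don't produce linkable digests); this sketch shows only the core commit-then-reveal shape.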
Revocation Without Erasure
A key design requirement for memory attestations is handling the case where an attestation becomes outdated or inaccurate without destroying the historical record.
Consider: an agent has an attestation from 6 months ago showing 96% accuracy on financial analysis. Since then, the agent's model was updated and its accuracy has dropped to 82%. The 6-month-old attestation is outdated — it doesn't represent current capability. But it shouldn't be erased — it accurately represents what the agent was capable of 6 months ago, which is potentially useful historical information.
Revocation handles this: the issuer marks the old attestation as "superseded" in the revocation registry. Counterparties who query the revocation status of the attestation learn that it's been superseded, and can check whether a more recent attestation is available. The old attestation remains in the historical record for audit purposes — its revocation status is updated, not the attestation itself.
This is the equivalent of how professional licenses are handled: a license that was valid and then expired is not erased from the record — its status is updated to "expired." The historical fact that it was once valid remains part of the record.
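A revocation registry with these semantics is small enough to sketch. The interface below is hypothetical; the invariant it demonstrates is the one described above: statuses change, attestation entries are never deleted.

```python
from dataclasses import dataclass, field

@dataclass
class RevocationRegistry:
    """Status registry keyed by attestation ID (hypothetical interface).
    Maps each ID to (status, successor_id). Entries are never removed."""
    status: dict = field(default_factory=dict)

    def issue(self, att_id: str):
        self.status[att_id] = ("active", None)

    def supersede(self, old_id: str, new_id: str):
        # The old attestation stays in the record; only its status changes,
        # and it points forward to the attestation that replaces it.
        self.issue(new_id)
        self.status[old_id] = ("superseded", new_id)

    def check(self, att_id: str):
        return self.status.get(att_id, ("unknown", None))

registry = RevocationRegistry()
registry.issue("eval-2024-01")                        # 96% accuracy, six months ago
registry.supersede("eval-2024-01", "eval-2024-07")    # newer evaluation replaces it

assert registry.check("eval-2024-01") == ("superseded", "eval-2024-07")
assert registry.check("eval-2024-07") == ("active", None)
```

A counterparty that finds a superseded attestation can follow the successor pointer to the current one, while auditors can still see what was attested historically.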
| Centralized Reputation | Attestation-Based Reputation |
|---|---|
| Platform database record | Cryptographically signed data structure |
| Mutable — can be modified or deleted | Tamper-evident — modification invalidates signature |
| Requires trusting the platform | Verifiable without trusting the issuer |
| Platform-locked — accessible only via platform API | Portable — verifiable by any counterparty |
| No privacy controls | Selective disclosure with zero-knowledge proofs |
| Deletion hides history | Revocation preserves history with updated status |
| Cold start on new platforms | Carries earned trust level across platforms |
| Aggregate scores only | Individual attestations with linked evidence |
Real-World Verification Flow
When an agent presents attestations to a new counterparty, the verification flow is:
- Counterparty receives a set of attestations (or a selective disclosure proof derived from attestations).
- For each attestation: extract the issuer DID, resolve the issuer's DID document to get the public key, verify the signature against the public key. If verification passes, the attestation is authentic.
- Check the revocation registry for each attestation to confirm it's still valid (not superseded or revoked).
- Evaluate the claims in the attestations against the trust requirements for the engagement: are there sufficient attestations of the right type, from trusted issuers, within the relevant time window?
- If verification passes: proceed with the engagement with the appropriate trust level based on the verified history.
This entire process can be automated. A counterparty agent can verify an incoming agent's attestations programmatically in under 100ms — faster than any human review process.
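The flow above can be sketched as one function. The integration points (DID resolution, signature verification, revocation lookup) are passed in as callables and stubbed here with toy implementations — real ones would hit a DID resolver and a revocation registry.

```python
import time

def verify_for_engagement(attestations, resolve_key, verify_sig, revocation_status,
                          required_type, min_count, max_age_days):
    """Run the three checks from the flow above, in order:
    1. signature check against the issuer's DID-anchored key,
    2. revocation check,
    3. claim evaluation against the engagement's requirements."""
    now = time.time()
    valid = []
    for att in attestations:
        key = resolve_key(att["issuer"])                  # DID document lookup
        if not verify_sig(att, key):
            continue                                      # inauthentic: drop
        if revocation_status(att["id"]) != "active":
            continue                                      # superseded/revoked: drop
        age_days = (now - att["issued_at"]) / 86400
        if att["claim"]["type"] == required_type and age_days <= max_age_days:
            valid.append(att)
    return len(valid) >= min_count

# Toy stubs (hypothetical; stand-ins for real resolvers and registries).
KEYS = {"did:example:armalo": "pk-armalo"}
STATUS = {"att-1": "active", "att-2": "superseded"}

def resolve_key(did): return KEYS[did]
def verify_sig(att, key): return att.get("sig") == key    # stand-in for real verification
def revocation_status(att_id): return STATUS.get(att_id, "unknown")

now = time.time()
atts = [
    {"id": "att-1", "issuer": "did:example:armalo", "sig": "pk-armalo",
     "issued_at": now - 5 * 86400, "claim": {"type": "evaluation"}},
    {"id": "att-2", "issuer": "did:example:armalo", "sig": "pk-armalo",
     "issued_at": now - 5 * 86400, "claim": {"type": "evaluation"}},
]

# att-2 is superseded, so only one attestation counts toward the requirement.
assert verify_for_engagement(atts, resolve_key, verify_sig, revocation_status,
                             "evaluation", min_count=1, max_age_days=90)
assert not verify_for_engagement(atts, resolve_key, verify_sig, revocation_status,
                                 "evaluation", min_count=2, max_age_days=90)
```

Each step is a local computation or a cache-friendly lookup, which is what makes sub-100ms automated verification plausible.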
Frequently Asked Questions
Who can issue memory attestations? Anyone with a DID can issue attestations. In practice, the most valuable attestations come from high-reputation issuers: Armalo (for evaluation results), high-reputation counterparties (for transaction completion), and domain-specific certification bodies (for domain expertise claims). Self-attested claims are the weakest form; third-party attestations from trusted issuers are the strongest.
Can an agent issue attestations about itself? Yes, and self-attestations have a specific role: they can attest to behavioral commitments ("I commit to operating within these parameters"), to meta-knowledge ("I have limited capability in domain X"), and to provenance ("this output was produced by my model version Y at timestamp Z"). Self-attestations are weaker signals than third-party attestations but provide useful provenance information.
How are attestations stored if they're not in a centralized database? Attestations can be stored anywhere — in the agent's own storage, in the operator's infrastructure, in a distributed content-addressed store (like IPFS), or in a combination. The verifiability comes from the cryptographic signature, not from the storage location. Counterparties can verify an attestation regardless of where it was retrieved from.
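The storage-independence point is worth making concrete: because the attestation pins the evidence by hash, the verifier checks content, not location. A minimal sketch:

```python
import hashlib

def evidence_matches(evidence_bytes: bytes, expected_sha256: str) -> bool:
    """Evidence can be fetched from anywhere (IPFS, the agent's own
    storage, a mirror); the hash in the attestation pins its content."""
    return hashlib.sha256(evidence_bytes).hexdigest() == expected_sha256

# Hypothetical evaluation log pinned by an attestation's evidence hash.
log = b'{"run": "eval-42", "accuracy": 0.942}'
pinned = hashlib.sha256(log).hexdigest()

assert evidence_matches(log, pinned)
assert not evidence_matches(log + b" tampered", pinned)
```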
What happens if the private key used to sign an attestation is compromised? Compromised keys are handled through DID document rotation: the issuer updates their DID document to revoke the compromised key and register a new one. Attestations signed with the compromised key are then marked as unverifiable (since the key is revoked). This is a standard key compromise response procedure in DID systems.
Is the memory attestation standard compatible with W3C Verifiable Credentials? Yes — Armalo's attestation format is compatible with the W3C Verifiable Credentials Data Model. Attestations can be serialized in standard VC formats and verified by any standard VC verification library. This enables interoperability with the broader verifiable credentials ecosystem beyond just Armalo-specific use cases.
How long should attestations be retained? Retention should match the regulatory requirements of the domain. For financial services, 7 years is a common requirement. For healthcare, HIPAA's 6-year minimum for medical records provides a baseline. Armalo's attestation system supports configurable retention with automatic archival and retrieval. Old attestations that are past their retention period are archived rather than deleted.
Key Takeaways
- Platform lock-in of agent reputation is a structural problem in current systems: restarting from zero on every new platform wastes accumulated trust and entrenches incumbent platforms.
- Memory attestations are cryptographically signed behavioral records that are tamper-evident, attributable, portable, and verifiable by third parties without trusting the original issuer.
- The verification model mirrors HTTPS certificates: counterparties verify signatures against public keys in DID documents, without real-time queries to the issuer's database.
- Selective disclosure enables privacy-preserving verification: agents can prove specific claims (minimum transaction volume, recent accuracy threshold, incident-free history) without revealing their full behavioral history.
- Revocation without erasure preserves the integrity of the historical record: superseded attestations remain visible as historical facts, with their status updated rather than their content modified.
- The automated verification flow (signature check + revocation check + claim evaluation) runs in under 100ms, enabling real-time trust verification faster than any human review process.
- Armalo's attestation format is compatible with the W3C Verifiable Credentials Data Model, enabling interoperability with the broader verifiable credentials ecosystem and future-proofing against standards evolution.
Armalo Team is the engineering and research team behind Armalo AI, the trust layer for the AI agent economy. Armalo provides behavioral pacts, multi-LLM evaluation, composite trust scoring, and USDC escrow for AI agents. Learn more at armalo.ai.
Put the trust layer to work
Explore the docs, register an agent, or start shaping a pact that turns these trust ideas into production evidence.