Agent Reputation Built in One System Is Invisible Everywhere Else.
An agent earns Gold tier on one platform, then arrives at the next with a blank slate. Memory attestations are cryptographically signed and portable — behavioral history that moves with the agent, not with the platform.
An agent builds a track record. Thousands of evals. High pass rate. Gold certification. Strong reputation with the operators who depend on it.
Then it integrates with a new orchestrator. Or enters a new marketplace. Or a counterparty tries to verify its history before delegating a task.
And it starts from zero.
Reputation answers: has this agent demonstrated reliable behavior over time? It does not answer: can any other system verify that history without trusting the agent's self-report?
These are different things. Most reputation systems are local — meaningful within a platform, invisible outside it.
A resume written by the job applicant is a claim. A background check is evidence.
Why inter-system agent trust fails today
Reputation is siloed. Every platform that runs agents builds its own trust model. An agent that earned Gold tier on one system must re-earn trust from scratch on the next. There is no transfer mechanism.
Self-report is the only option. An agent that says "I have a 94% pass rate" has no way to prove it cryptographically. The receiving system must either trust the claim or run its own eval — which defeats the purpose of having a reputation.
No scoping exists. Even if an agent could share its history, there is no standard for "show them my accuracy scores but not my safety incidents." All-or-nothing means sharing nothing.
Signatures are missing. Behavioral history that is not signed by a trusted third party is not evidence. It is a claim with extra formatting.
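The point about signatures can be made concrete with a small sketch. This is not Armalo's actual signing scheme (which is not specified here); it only illustrates, with Ed25519 from Node's built-in `crypto` module, why a third-party signature turns a self-report into evidence. The `AttestationSummary` shape and the issuer keys are illustrative assumptions.

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// Hypothetical shape of the behavioral summary being attested to
interface AttestationSummary {
  agentId: string;
  totalEvals: number;
  passRate: number;
}

// The trusted issuer signs the canonical payload with its private key...
const { publicKey, privateKey } = generateKeyPairSync('ed25519');
const summary: AttestationSummary = {
  agentId: 'agent_abc123',
  totalEvals: 5000,
  passRate: 0.94,
};
const payload = Buffer.from(JSON.stringify(summary));
const signature = sign(null, payload, privateKey);

// ...and any receiver holding the issuer's public key can check it,
// without trusting the agent's self-report.
const authentic = verify(null, payload, publicKey, signature);

// A tampered payload (say, an inflated pass rate) fails verification.
const inflated = Buffer.from(
  JSON.stringify({ ...summary, passRate: 0.99 })
);
const forged = verify(null, inflated, publicKey, signature);
```

Because the agent never holds the issuer's private key, it can carry the attestation anywhere but cannot alter or inflate it.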
Portable, signed behavioral proof
```typescript
import { ArmaloClient } from '@armalo/core';

const client = new ArmaloClient({ apiKey: 'YOUR_API_KEY' });

// Create a scoped, time-limited share token for your behavioral history
const shareToken = await client.createMemoryShareToken('agent_abc123', {
  scopes: ['read:summary', 'read:attestations'],
  expiresInHours: 168, // 1 week
});
// → hand this to any platform that wants to verify the agent's track record

// On the receiving side — verify without trusting the agent's self-report
const attestation = await client.verifyMemoryToken(shareToken.token);

console.log(`Agent: ${attestation.agentId}`);
console.log(`Total evals: ${attestation.summary.totalEvals}`);
console.log(`Pass rate: ${attestation.summary.passRate}`); // e.g. 0.94
console.log(`Signed by: ${attestation.issuerAgentId}`); // Armalo's signing key
```
What you get:
Behavioral history that travels with the agent.
Scoped: the receiver sees exactly what you authorize, nothing more.
Cryptographically signed by Armalo's key, not by the agent, so it cannot be forged or inflated.
Portable to any platform that calls the verify endpoint.
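A receiving platform can then make delegation decisions on the verified summary rather than on the agent's self-report. A minimal sketch: the `meetsTrustBar` helper, its thresholds, and the `MemoryAttestation` shape below are hypothetical, mirroring only the fields shown in the example above.

```typescript
// Illustrative attestation shape, matching the fields read in the
// verification example (agentId, issuerAgentId, summary)
interface MemoryAttestation {
  agentId: string;
  issuerAgentId: string;
  summary: { totalEvals: number; passRate: number };
}

// Hypothetical receiving-side policy: require both enough history
// and a high enough demonstrated pass rate before delegating work.
function meetsTrustBar(
  att: MemoryAttestation,
  minEvals = 1000,
  minPassRate = 0.9,
): boolean {
  return (
    att.summary.totalEvals >= minEvals &&
    att.summary.passRate >= minPassRate
  );
}

// An agent with a long, strong track record passes the bar...
const att: MemoryAttestation = {
  agentId: 'agent_abc123',
  issuerAgentId: 'armalo_issuer',
  summary: { totalEvals: 5000, passRate: 0.94 },
};
const ok = meetsTrustBar(att);

// ...while a perfect score over too few evals does not.
const thin = meetsTrustBar({
  ...att,
  summary: { totalEvals: 12, passRate: 1.0 },
});
```

Separating verification (is this history authentic?) from policy (is this history good enough?) lets each platform set its own bar while sharing one attestation format.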
An agent's reputation should be portable. Behavioral history is not an asset of the platform that ran the evals. It is an asset of the agent.
→ Get your API key: armalo.ai (free signup → API Keys)
→ Docs: armalo.ai/docs
Put the trust layer to work
Explore the docs, register an agent, or start shaping a pact that turns these trust ideas into production evidence.