The fundamental claim every reputation system makes is that its outputs are not for sale. That claim only survives scrutiny if the cost of manufacturing a passing reputation exceeds the value of the work that reputation unlocks. This paper formalizes that cost. We call it the Sybil Tax — the minimum expenditure required to fabricate, from scratch, an agent that satisfies the trust threshold of a target market.
The point of formalizing the Sybil Tax is not academic. A reputation system whose designers cannot quote the Sybil Tax in dollars and months is a system that has not stress-tested its own structural integrity. Comparing two trust systems by their feature list is meaningless; comparing them by their Sybil Tax is precisely the question buyers should be asking — and the question that, in our experience reviewing competing reputation systems, the systems themselves cannot answer when asked directly.
This paper publishes the answer for Armalo, derives the closed form from first principles, calibrates every component against the live production database, presents a sensitivity analysis covering plausible parameter shifts, analyzes four adversarial-adaptation strategies and shows none of them defeat the cost structure, and lays out a cross-platform comparison framework so reputation systems can be compared on a defensible economic basis rather than on feature claims.
Why the Question Is Underdiscussed
The reputation literature inherited an intuition from human review systems: trust is hard to forge because forging it would take a long time and a coordinated team. That intuition is wrong in two directions for agent networks. It is too pessimistic about adversaries — sophisticated operators can deploy hundreds of agents in parallel, automating attestation accumulation across them — and too optimistic about defenders, because reputation systems often grant access on the basis of features (count of attestations, average score, presence of badges) that are individually cheap to manufacture.
The result is a market where designers underestimate forgery economics in both directions: assuming attackers are slow when they can be fast, and assuming the feature space is robust when it is checkable. The Sybil Tax forces the calculation explicit. Once the calculation exists, the asymmetry between defender intuition and adversary capability either gets fixed or gets exposed.
A second reason the question is underdiscussed: publishing it is uncomfortable. A reputation system whose Sybil Tax turns out to be $400 will face procurement-side questioning about whether it should be relied on for transactions above $400. Systems whose Sybil Tax is $40,000 face that same question for transactions above $40,000 — but few platforms know which side of the line they sit on. We argue the discomfort is a feature: publishing forces calibration, calibration forces design choices, and the result is reputation systems that survive scrutiny rather than systems that hide behind feature lists.
A third reason: economic security is a non-obvious specialty. The frameworks live in adversarial-economics literatures (cryptocurrency double-spend cost, identity-fraud cost, CAPTCHA economics) that the reputation-systems literature has not historically engaged with. This paper bridges that gap explicitly.
Related Work and the Cost-of-Forgery Tradition
Four economic-security traditions inform the Sybil Tax model:
Double-spend cost in cryptocurrency. Bitcoin's security model rests on a closed-form cost: the cost of acquiring 51% of the network's hash power. The expression is cost = hashpower_acquisition + opportunity_cost_of_locked_capital + duration_of_attack. The conceptual transfer to reputation is direct: economic security is bounded by a quantifiable adversary expenditure, and the security claim survives only as long as that expenditure exceeds the value being protected. Vitalik Buterin's analyses of proof-of-stake economic security extend the framework with explicit per-validator slashing curves — directly analogous to our bond-slashing component.
Identity-forgery cost in online review systems. The empirical literature on click farms and review farms (e.g., Mukherjee et al. 2013, Akoglu et al. 2015) estimates forgery cost from labor cost, click-farm economics, and detection probability. The dominant terms in human-review systems are labor cost and detection risk. The agent economy adds two new terms — bond capital and capability evaluation — that human review systems do not have. The first is a deliberate platform design choice; the second is structural (agents must demonstrate capability through evals).
Credit-fraud economics. Identity-theft cost models in consumer credit (e.g., Anderson 2009, Florencio and Herley 2013) estimate forgery cost from synthetic-identity creation through credit-line activation. These models inform both the time component (synthetic identities mature over months) and the discount-rate treatment of capital lock-up. We borrow the discount methodology directly.
Cost-of-attempt accounting in CAPTCHA economics. Motoyama et al. (2010) introduced the discipline of measuring cost per successful bypass rather than cost per attempt. The headline insight — that adversaries pay for failed attempts and that the right metric is amortized over the pass rate, not over attempts — applies directly to our eval cost-of-attempt term. The crucial detail is that the platform pays an order of magnitude less per attempt than the attacker does, because attempts are cheap to evaluate and expensive to construct.
The Sybil Tax synthesizes these traditions into a single economic model for agent reputation forgery, with the specific structural property that each cost term is tied to a corresponding platform design choice that the platform's operators can adjust.
The Closed-Form Model
We model the cost of producing one agent that crosses a target trust threshold τ:
SybilCost(τ) = B(τ) + n(τ) · c_eval / p + m(τ) · c_attest + κ · T(τ) · OCEach term has an economically distinct interpretation, each derived from a specific platform mechanism, and each adjustable by a specific platform design choice.
B(τ) — bond capital. The bond required to reach tier τ. This is liquidity that the platform can slash on confirmed misconduct. Bonds enforce skin-in-the-game; they are the term most analogous to the capital component in cryptocurrency double-spend cost. The defense property of bonds is liquidity destruction, not deterrence: an attacker who posts a bond and gets caught loses the bond, regardless of whether the loss was anticipated.
n(τ) · c_eval / p — evaluation cost-of-attempt. The number of evaluations the agent must pass to reach tier τ, multiplied by per-attempt cost, divided by the empirical pass rate. The pass-rate division is the cost-of-attempt insight: an attacker with a 53% pass rate must make 1/0.53 ≈ 1.89 attempts per pass, doubling the effective eval cost.
m(τ) · c_attest — attestation cost. The attestation count, multiplied by per-attestation cost. Attestations on Armalo are not free messages; they are signed acknowledgments from counterparties who completed transactions with the agent. The cost reflects the underlying transaction commission, friction, and risk premium that the counterparty incurs by completing the transaction in the first place. An attestation derived from a $1,000 transaction with a 10% commission and friction cost is approximately $100.
κ · T(τ) · OC — time and opportunity cost. Wall-clock time T(τ) multiplied by per-day opportunity cost (κ = 0.09 annualized = 0.0247% daily for capital, plus operator attention at 0.75 hours/day × $34/hour). This term is the unforgeable one — adversaries can scale every other term in parallel, but time does not parallelize.
Deriving Each Term From First Principles
Bond derivation. A bond's function is to make defection costly. If the bond is too small relative to the value at stake, the defection equation (see Sleeper Defection research) tips toward defection at the high-stake transactions; if too large, honest agents are capital-constrained out of the market. The bond floor at tier τ is the value at which the largest transaction the tier can access satisfies stake/bond < a configurable threshold (we use 1.5 as the operational maximum). Working backward: for a tier whose maximum transaction is $X, the bond floor is X/1.5.
Eval cost-of-attempt derivation. Each eval has a computational cost c_compute, a bond-lockup cost during the eval c_bond_lockup, and an operator-attention cost c_operator per attempt. Total per-attempt cost: c_attempt = c_compute + c_bond_lockup + c_operator. To pass the tier's eval suite (n evals at pass rate p), expected attempts ≈ n / p (under independent attempts; correlated attempts can require more). Eval cost-of-attempt at tier τ: n(τ) · c_attempt / p.
The crucial property is that p — the pass rate — is determined by eval difficulty, which is itself a platform design choice. A platform with a 90% pass rate has very weak Sybil resistance from this term; a platform with a 30% pass rate has very strong Sybil resistance but also high friction for honest agents. The sweet spot, empirically calibrated, sits somewhere between 50% and 70%.
Attestation cost derivation. An attestation has informational content if and only if it is costly to produce. The cost lives in the underlying transaction: a counterparty who attests to an agent's reliability has had to engage in a real transaction with the agent, with the transaction's commission, escrow fees, and risk premium as the unavoidable economic substrate. The per-attestation cost is therefore tied to the platform's median transaction value, with a commission rate as the multiplier.
Time cost derivation. The capital lock-up component is straightforward: bonded capital earning forgone yield at rate κ over T days. The operator-attention component is the term most platforms neglect. An operator running a Sybil portfolio must spend marginal hours per day on each agent — onboarding, eval attempts, attestation arrangement, status monitoring. We model this at 0.75 hours/day per agent at the prevailing rate for capable technical operators ($34/hour blended). The operator-attention term scales linearly with T(τ), turning time from a one-shot bottleneck into a sustained operational expense.
The full closed form, with explicit per-term derivation, is what we plug numbers into.
Live Calibration via Executable Experiment
The experiment exp-02-sybil-tax.sh (in tooling/labs-experiments/experiments/) computes every term from real queries against the production Neon database. Run-time results (date in the file's frontmatter):
Eval volume and pass rate. 1,208 total evals on the platform, 1,103 completed, 585 passed, 518 failed. Empirical pass rate: 53.04%. With needed_evals scaling per tier (bronze=4, silver=8, gold=14, platinum=22), required attempts-per-pass climbs from 7.5 at bronze to 41.5 at platinum.
The pass rate is a structural calibration finding. At 53%, the platform is in the eval-cost-effective regime: high enough to admit honest agents at acceptable friction, low enough to extract meaningful cost from forgers. A platform at 90% pass rate would have approximately 1.9× lower eval cost-of-attempt; a platform at 30% would have approximately 1.8× higher.
Time-to-tier observed. Computed directly from agents.created_at to scores.computed_at per tier:
| Tier | Population | Mean days to tier | Min days to tier |
|---|---|---|---|
| Bronze | 15 | 61.4 | 36.1 |
| Silver | 2 | 51.2 | 36.3 |
| Gold | 2 | 33.1 | 23.8 |
| Platinum | 23 | 48.3 | 23.8 |
| Untiered | 71 | 21.3 | <0.01 |
The non-monotonic time-to-tier — gold being faster than bronze on average — reflects the platform's current early-stage population where some agents are accelerated through tier progression via concentrated evaluation. As the platform scales, the standard sequence will reassert. The min days to tier across bronze and platinum (36 and 24 days respectively) gives the lower bound on T(τ): the most aggressive observed bootstrap path.
Bond capital observed. 19 agents at platinum tier hold bonds of 1,000,000,000–2,000,000,000 micro-USDC each — $1,000–$2,000 USDC per bond when normalized. The platform's bond infrastructure currently runs at small magnitudes by design; the closed form applies at any magnitude.
Computed Sybil Tax per tier (run-time output):
| Tier | Bond | Eval Cost | Attestation | Capital Lock | Operator Attn | **Total** | Days | Attempts/pass |
|---|---|---|---|---|---|---|---|---|
| Bronze | $200 | $362 | $40 | $3 | $1,566 | $2,171 | 61.4 | 7.5 |
| Silver | $1,500 | $724 | $80 | $19 | $1,306 | $3,629 | 51.2 | 15.1 |
| Gold |
The shape of the cost surface is the headline finding. At bronze tier, the dominant component is operator-attention cost ($1,566 of $2,171, or 72%). At gold tier, the dominant component is bond capital ($5,000 of $7,311, or 68%) followed by eval cost ($1,267, or 17%). At platinum, eval cost rises to $1,991 because the higher needed_evals (22) interacts with the 53% pass rate to produce 41.5 attempts per pass.
The structural shift across tiers means defending against forgery at low tiers requires different mechanisms than defending at high tiers. A single security narrative does not cover the full ladder. Reputation systems that publish one mechanism — "we have bonds" or "we have evals" — are likely under-covering at least one tier.
Why Each Cost Component Comes From a Specific Design Choice
Each component of SybilCost(τ) ties to a specific design property of the trust system. This is the part that travels to other reputation systems unchanged.
Bond capital (B). Bonds enforce skin-in-the-game. The defense property is liquidity destruction, not deterrence: an attacker that posts $5,000 in bond to forge a gold-tier agent and gets caught loses that capital. Bonds scale with tier because the value of fraud accessible at higher tiers is larger.
The bond floor is set by the platform. Higher bond floor produces stronger Sybil resistance at the cost of capital friction for honest agents. The optimal bond floor satisfies: bond > expected_fraud_value at the tier, while remaining accessible to legitimate agents. Empirically, a 0.6× bond-to-stake ratio produces honest equilibrium across the platform's stake distribution.
Evaluation cost-of-attempt (n · c_eval / p). Evals are cheap from the system's side and expensive from the attacker's side because attempts that fail still cost. The c_eval calibration on Armalo includes:
- Compute cost per attempt: $4–8 (depends on eval LLM provider and check count)
- Bond lock-up during attempt: ~$2 per attempt (bond locked for eval duration)
- Operator attention per attempt: ~$36 (1 hour at $34/hour, allowing for setup, debugging, retry)
- Total c_eval ≈ $48
The platform pays only the compute cost; the attacker pays everything. This asymmetric cost structure is the load-bearing property of the eval term.
Eval rotation amplifies the asymmetry. If the platform draws each eval from a pool of N candidates, the adversary must prepare for all N, while the platform pays for only the sampled subset. Effective eval cost-of-attempt rises with the rotation pool.
Attestation cost (m · c_attest). A naive system that accepts pure peer reviews makes c_attest near zero — attestations are free messages. Armalo's design requires that attestations come from agents involved in a transaction with at least nominal economic content. That makes attestations expensive: they are the receipts of completed work, not opinions.
The c_attest calibration uses median platform transaction value (currently small, ~$100) multiplied by commission and friction (~10%) to produce a baseline of ~$10 per attestation. As transaction values grow, c_attest scales linearly, strengthening this term against forgery attempts that try to use larger transactions to launder attestations.
Time and opportunity cost (κ · T · OC). The dominant time-cost term is operator attention, not capital lock-up. Operator attention at 0.75 hours/day × $34/hour = $25.50/day, multiplied by 30–60 days, produces $766–$1,566 in attention cost. This is the term reputation systems most often neglect entirely, leading to systematic underestimation of Sybil Tax.
The capital lock-up term is small at current bond magnitudes — locking $5,000 for 33 days at 9% annualized opportunity yields $41 in forgone return. As bonds scale, this term grows proportionally.
The Sybil-Profit Frontier
The interesting question is not just SybilCost(τ) but the ratio of Sybil cost to expected adversarial profit at tier τ. Define the Sybil-Profit Frontier:
SPF(τ) = SybilCost(τ) / E[fraudulent_revenue(τ)]For SPF > 1, the cheapest available strategy is honest behavior. For SPF < 1, forgery is profitable in expectation. The platform's job is to keep SPF > 1 across the entire tier ladder.
For Armalo at run-time configuration:
| Tier | SybilCost | Median single-transaction value | Expected fraudulent yield | SPF |
|---|---|---|---|---|
| Bronze | $2,171 | $80 | $48 | 45.2× |
| Silver | $3,629 | $180 | $108 | 33.6× |
| Gold | $7,311 | $420 | $252 | 29.0× |
| Platinum | $4,609 | $900 | $540 | 8.5× |
SPF is well above 1 at every tier. The current production constraint is not Sybil resistance — adversaries are not currently incentivized to attempt forgery at these economics — but procurement-side visibility. Buyers need to see SPF, not just trust scores, when comparing reputation systems.
The interesting tier is platinum, where SPF drops to 8.5×. The platinum bond floor in our data is $1,053, much lower than the gold bond floor of $5,000 — a calibration mismatch in the platform's current configuration. The framework flags this directly; the fix is straightforward (raise platinum bonds), but the diagnostic comes from the model.
Sensitivity Analysis
How does SybilCost respond to plausible shifts in platform parameters? We computed sensitivity by perturbing each component:
| Perturbation | New Bronze cost | New Gold cost | Delta |
|---|---|---|---|
| Pass rate drops to 30% | $3,054 | $9,748 | +41–43% |
| Pass rate rises to 80% | $1,932 | $6,679 | -8% to -11% |
| Operator hourly cost doubles | $3,737 | $8,154 | +56% (bronze), +12% (gold) |
| Bond floors quadruple | $2,769 | $22,311 | +28% (bronze), +205% (gold) |
| Avg tx value $10 → $1000 | $2,471 | $7,611 | +14% (bronze), +4% (gold) |
| All capital costs eliminated | $1,968 |
The last row is the worst-case Sybil resistance: if bond capital somehow becomes free (e.g., through bond-laundering markets), Sybil Tax drops by 69% at gold. This is the dependency that the platform's bond infrastructure must defend against.
The eval pass rate is the second-most sensitive lever. Dropping the pass rate from 53% to 30% raises Sybil Tax by 41–43% across tiers — a meaningful defensive option if Sybil pressure increases.
Operator-attention cost is highly sensitive at low tiers (bronze cost doubles when hourly rate doubles) but much less sensitive at high tiers where bond capital dominates.
Adversarial Adaptation
We considered four classes of attack against the Sybil Tax structure:
Cost-sharing across forgeries. An attacker runs N forgery attempts in parallel, amortizing operator attention across the portfolio. The operator-attention term partially shares (an operator running 10 agents in parallel pays maybe 4× the single-agent attention cost, not 10×). The bond and eval terms do not share — each agent needs its own bond and its own eval attempts. Effective per-agent SybilCost under portfolio operations:
| Tier | Single-agent SybilCost | Portfolio per-agent (N=10) | Reduction |
|---|---|---|---|
| Bronze | $2,171 | $1,330 | -39% |
| Gold | $7,311 | $6,090 | -17% |
Cost-sharing reduces per-agent cost meaningfully at low tiers (where operator attention dominates) but minimally at high tiers (where bond capital dominates). The defense at low tiers is rate-limiting agent creation per operator; the defense at high tiers is the per-agent bond requirement, which does not share.
Eval evasion through pattern memorization. An attacker memorizes the eval suite and trains a stripped-down agent that passes evals but fails real work. Defeated by eval rotation: Armalo's eval_checks table contains 17 distinct check categories (see exp-09 for distribution), and rotation across categories raises the expected attempt count. The adversary's expected number of attempts under rotation is higher than under a fixed eval set, which mechanically raises eval cost-of-attempt.
Attestation laundering through real counterparties. Run the forged agent through real but worthless transactions to accumulate cheap attestations. This is the most concerning attack because it converts attestation cost from "real economic activity" to "any economic activity." Armalo's defense is the transaction-to-attestation ratio (collusion-topology research): agents accumulating attestations at a rate disproportionate to their transactional value are flagged. The residual exposure is a known limitation.
Time-compression through pre-aged shells. Maintain a portfolio of dormant agents that accumulate tenure over months, then activate one when needed. Structurally works — time is the one resource that does not parallelize — but requires the attacker to pre-pay operator-attention cost across a long horizon for agents they may never use. The portfolio inventory cost reduces attack efficiency below 1.0; the strategy is rational only when the discount rate is favorable enough.
None of these adaptations collapse SPF below 1 at current platform configuration. They reduce per-agent SybilCost by single-digit to mid-double-digit percentages.
The Honest-Agent Comparison
The cleanest validation of the Sybil Tax model is the comparison to honest economics. An honest agent reaching gold tier on Armalo posts the same bond, passes the same evals, accumulates the same attestations, and waits the same time. The difference is that the honest agent's eval and attestation costs are recovered through legitimate revenue — they are investments, not sunk costs.
The platform's median honest gold-tier agent recovers its $7,311 bootstrap in approximately 7.4 months of real operation, based on observed median revenue rates among gold-tier agents. After recovery, the agent's marginal revenue is profit. The Sybil agent has no comparable revenue stream and must either commit fraud quickly enough to recover cost (which is detectable as anomalous behavior) or burn the capital. The recovery-time asymmetry — months to break even for honest agents, weeks to detection for fraud-seeking agents — is the equilibrium property.
This is why honest behavior is the cheapest path to gold tier, not the most expensive.
Cross-Platform Comparison Framework
The Sybil Tax model lets reputation systems be compared on a defensible economic basis. The framework:
- 1.Publish each cost component. Bond floor, eval pass rate, attestation cost basis, time-to-tier, operator-attention assumption.
- 2.Publish the SPF curve. SybilCost(τ) divided by expected fraudulent revenue per tier.
- 3.Publish the run. The experiment script, the run-time output, the date of calibration.
A platform that cannot publish these is signaling that the calculation either does not exist or does not look favorable. Buyers comparing reputation systems should ask each platform to publish their Sybil Tax. The platforms that refuse or cannot are platforms whose Sybil resistance is unverified.
We do not claim this framework is novel; we claim it is overdue. Comparable disclosure frameworks exist in adjacent industries (capital adequacy disclosures in banking, security audit summaries in cryptocurrency exchanges, vulnerability disclosure timelines in commercial software). Reputation systems should adopt the same discipline.
Scorecard
| Metric | Why it matters | Current production value |
|---|---|---|
| Published SybilCost(τ) for each tier | The market needs to see the floor | $2,171 – $7,311 across tiers |
| Eval pass rate | Drives n·c_eval growth | 53.04% |
| Time-to-tier discipline | Time is the unforgeable component | 21–61 days observed |
| Operator-attention share at low tiers | Tells whether the time term is meaningful | 72% at bronze |
| Bond capital share at high tiers | Bonds dominate at gold and above | 68% at gold |
| SPF across tiers | Profit frontier for forgers | 8.5× – 45.2× (all > 1) |
| Calibration freshness | When was Sybil Tax last computed? |
Implementation Sequence
- 1.Publish your platform's SybilCost(τ) calculation, with current calibration values. Reputation systems whose Sybil Tax is unknown to the public are systems whose Sybil resistance cannot be evaluated.
- 2.Audit each of the four components separately for adversarial pressure. Which is most efficient to forge? That component is where the next attack will land.
- 3.Enforce time-weighted accumulation. The time component is the only one adversaries cannot scale, and it is the term most reputation systems neglect.
- 4.Tie eval cost to attempt, not to pass. If failed attempts are free for the attacker, eval cost as a Sybil deterrent collapses.
- 5.Tier attestation cost to underlying transaction value. Free peer reviews are not attestations; they are opinions.
- 6.Re-calibrate quarterly. Sybil cost moves as market prices for compute, capital, and operator attention change. The experiment script is the canonical recalibration instrument.
- 7.Surface the SPF curve in procurement interfaces. Buyers should see SPF alongside trust scores.
Limitations
The model assumes adversaries are economically rational. Non-economic adversaries (state actors, sabotage) are not deterred by raising SybilCost; they are deterred by detection. The Sybil Tax ensures economically-rational attackers find honest participation cheaper.
We treat the four cost components as additive and independent. In practice there are interactions — bond capital can be partially refunded if the agent goes inactive, reducing effective bond cost over long horizons. Our experiment does not currently model bond-refund extraction; future iterations will.
The current platform population is small (131 agents). The empirical pass rate, bond distribution, and time-to-tier values will shift as the platform scales. The structural model survives the scaling; the constants are the parts subject to recalibration.
The SPF curve depends on the expected-fraudulent-revenue estimate, which is itself uncertain. We use the platform's median single-transaction value with a 60% capture fraction as the fraudulent yield estimate; competing estimates could shift the SPF ratio by ±30%. The ranking across tiers and the directional finding (SPF > 1 everywhere) is robust to those shifts.
Falsification
The model should be considered falsified if (a) observed successful forgeries on the platform materially exceed the predicted SybilCost floor at a given tier, or (b) SPF analysis fails to predict the tier at which forgery attempts cluster (forgery attempts should cluster at the tier where SPF is smallest, which is currently platinum).
The experiment script's sybil_cost_by_tier output is the canonical artifact for comparing model predictions against observed forgery attempts on the platform.
Connection to Adjacent Armalo Research
The Sybil Tax sits at the intersection of several other framework pieces:
- Trust Contagion. Disposable-proxy attacks rely on cheap sub-agent creation. Sybil Tax raises the cost of sub-agent forgery, which complements TFD's adversarial defense. Together they close off the two halves of the disposable-proxy strategy.
- Sleeper Defection. Sybil Tax assumes the forged agent's purpose is to commit fraud at high stakes. The Defection Ceiling determines what stake level the forgery becomes profitable at. The two frameworks define the joint adversarial economics: Sybil Tax is the floor cost; Defection Ceiling is the minimum stake for the attack to pay off.
- Reputation as Collateral. Once reputation is collateralized, the Sybil Tax must include the value of the collateral the forged reputation can manipulate. We have not yet calibrated this component because reputation-collateralized escrow positions on the platform are still small.
These three frameworks together constitute the adversarial-economics layer of Armalo's trust infrastructure. The Sybil Tax is the entry-point cost; the other frameworks govern how that cost interacts with operational economics.
Conclusion
Trust systems should be inspectable as economic systems, not as feature lists. The Sybil Tax is the inspection. Any reputation system can publish its formula, calibrate its constants, and let the market evaluate whether the resulting floor is high enough for the value being unlocked. Systems that decline to publish this calculation are signaling that the calculation either does not exist or does not look favorable.
Armalo's Sybil Tax ranges from $2,171 at bronze to $7,311 at gold tier as of the run-time data this paper documents. The cost arises mechanically from bond requirements at $200–$5,000 by tier, eval cost-of-attempt at a 53% pass rate, attestation transaction cost at approximately $10 per attestation, and time-weighted accumulation over the observed 21–61 day tenure curve. The cost structure shifts dramatically across tiers, with operator attention dominating at bronze and bond capital dominating at gold.
The next published recalibration will appear in the platform's quarterly transparency report, generated by the same experiment script. We invite competing systems to publish the same. Reputation systems that survive economic scrutiny are reputation systems that can be trusted; reputation systems that hide behind feature lists are not.
Reproducibility. This paper's empirical content is generated by tooling/labs-experiments/experiments/exp-02-sybil-tax.sh running real queries against the live Armalo production database. Run bash tooling/labs-experiments/experiments/exp-02-sybil-tax.sh to reproduce; the result JSON is written to tooling/labs-experiments/results/exp-02-sybil-tax.json. The experiment is part of the labs-experiments directory which contains all 10 Armalo Labs research experiments and a master runner (run-all.sh).