Leaderboards Tell You Which Agent Wins Benchmarks. They Don't Tell You Which One to Trust in Production.
Benchmark scores and production performance diverge for a reason that's rarely stated explicitly: benchmarks measure what agents can do with the inputs benchmark designers chose. Production measures what agents do with the inputs your users actually provide.
The selection bias in benchmark construction is enormous. Benchmark designers pick representative, well-formed, solvable problems. They pick problems that have ground-truth answers their evaluation methodology can assess. They exclude ambiguous cases, malformed requests, and tasks at the boundary of the agent's specification — not because those cases aren't important, but because they're hard to include in a standardized benchmark.
Production includes exactly those cases. Users submit malformed requests. They ask questions at the edge of the agent's declared scope. They provide adversarial inputs, whether intentionally or just accidentally weird. They query things the agent was never designed for and expect something useful back. The distribution of inputs that users generate is not the distribution that benchmark designers curate. The gap between them is where production reliability diverges from benchmark performance.
What Benchmarks Are Actually Measuring
MMLU measures knowledge across 57 academic domains using multiple-choice questions. It's an excellent test of knowledge breadth. If your agent needs to answer questions about enterprise software configuration, MMLU tells you the model has general knowledge. It doesn't tell you about reliability on your specific enterprise software questions, in your format, from your user base.
HumanEval measures code generation on short, hand-written programming problems: clean, well-specified, with deterministic test cases. If your agent generates production Python for a financial data pipeline — messy requirements, partial specifications, complex domain logic, edge cases in the input data — HumanEval tells you the model can write code. It doesn't tell you whether the code handles your edge cases.
GAIA measures multi-step task completion on curated web tasks. The tasks are realistic but controlled. If your agent handles customer support escalations with live tool access, unpredictable customer language, and domain-specific knowledge requirements, GAIA tells you the model can reason through multi-step tasks. It doesn't tell you how it handles the specific escalation patterns your customers generate.
The mismatch isn't a failure of benchmark design. These benchmarks are carefully designed for their stated purposes. The mismatch is between the purpose of a benchmark (evaluate general capability) and the purpose of production deployment evaluation (predict specific reliability on specific tasks).
Three Ways Benchmarks Mislead Beyond Distribution Mismatch
Benchmark contamination. Training corpora may include benchmark test sets, in which case strong benchmark performance reflects memorization rather than generalization. This is hard to detect from the outside, and model providers generally cannot confirm whether specific benchmark questions appear in training data. The score reflects something; it's just unclear how much of it is "this model has these capabilities" versus "this model has seen these specific questions."
Aggregate scores masking dimension-specific weaknesses. High composite benchmark scores average across many dimensions. An agent scoring 94 overall but 61 on the safety-relevant dimensions passes the aggregate filter while carrying concentrated risk in exactly the dimensions that matter most for certain deployments. The average hides the distribution. If the 61 is the dimension your use case depends on, the 94 headline number actively misleads you.
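To make the masking effect concrete, here is a minimal sketch in Python. The agents, dimension names, scores, and safety floor are all hypothetical numbers invented for illustration, not real benchmark data.

```python
# Hypothetical per-dimension scores for two agents (0-100).
# Every name and number here is invented for illustration.
agents = {
    "agent_a": {"reasoning": 96, "coding": 95, "retrieval": 97,
                "tool_use": 94, "safety": 61},
    "agent_b": {"reasoning": 88, "coding": 87, "retrieval": 89,
                "tool_use": 86, "safety": 92},
}

SAFETY_FLOOR = 85  # the per-dimension threshold this deployment actually needs

for name, dims in agents.items():
    composite = sum(dims.values()) / len(dims)
    print(f"{name}: composite={composite:.1f}, safety={dims['safety']}, "
          f"meets safety floor: {dims['safety'] >= SAFETY_FLOOR}")

# agent_a: composite=88.6, safety=61, meets safety floor: False
# agent_b: composite=88.4, safety=92, meets safety floor: True
# The composite ranks agent_a first; the dimension this deployment depends on does not.
```

Filtering on the dimension floor rather than the composite reverses the ranking.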
Distribution shift over time. Even a valid benchmark score at evaluation time describes the model's performance on a fixed benchmark. The world changes. New information, new use cases, new user patterns. The benchmark was static. Production is dynamic. A high score from a year ago describes capabilities on a year-old task distribution.
The Benchmark That Actually Predicts Your Outcome
A behavioral pact for a production deployment specifies the exact commitments relevant to your use case: the task types that matter, the accuracy threshold appropriate to your stakes, the failure modes that would be disqualifying, the measurement window that reflects your operational cadence.
Evaluating an agent against that pact — on your task distribution, continuously, with independent evaluation — produces a score that is directly predictive of production behavior. Not because the test set is curated for general coverage, but because it's curated for your deployment.
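As a rough illustration of what such a pact could look like in code — this is a hypothetical sketch, not Armalo's actual schema, and every field name, task type, and threshold below is invented — consider:

```python
from dataclasses import dataclass

@dataclass
class BehavioralPact:
    """Hypothetical sketch of a deployment-specific behavioral pact."""
    task_types: list[str]              # the task distribution that matters for this deployment
    accuracy_threshold: float          # minimum acceptable reliability, set by your stakes
    disqualifying_failures: list[str]  # failure modes that fail the pact regardless of accuracy
    measurement_window_days: int       # how often the pact is re-evaluated

support_escalation_pact = BehavioralPact(
    task_types=["billing_dispute", "refund_request", "account_lockout"],
    accuracy_threshold=0.93,
    disqualifying_failures=["fabricated_policy", "unauthorized_account_change"],
    measurement_window_days=30,
)

def meets_pact(results: list[dict], pact: BehavioralPact) -> bool:
    """Check a batch of evaluation results against the pact.

    Each result is assumed to look like {"correct": bool, "failure_mode": str or None}.
    """
    if any(r["failure_mode"] in pact.disqualifying_failures for r in results):
        return False  # one disqualifying failure overrides any aggregate score
    accuracy = sum(r["correct"] for r in results) / len(results)
    return accuracy >= pact.accuracy_threshold
```

The point is not the schema. The point is that the threshold, the disqualifying failure modes, and the re-evaluation cadence are chosen by the deployment, not by a benchmark designer.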
The concrete tradeoff: an agent with a 78 on GAIA and a 94 on your behavioral pact is a better choice for your deployment than an agent with a 94 on GAIA and a 71 on your pact. The benchmark predicts benchmark performance. The pact predicts production performance on your tasks.
This tradeoff isn't obvious because pact-based evaluation requires upfront investment in defining the pact and building the evaluation task set. The leaderboard number is free and immediately available. Deployment-specific evaluation requires work. But the cost of the work is small relative to the cost of deploying an agent that fails in production in ways the leaderboard didn't predict.
What to Actually Look at Before Deploying
When evaluating an agent for a production use case, the questions that actually predict your outcome:
On my specific task distribution, what is this agent's empirical reliability? Answering this requires evaluating the agent on tasks that match production inputs, not standardized benchmark inputs. It's more work than looking up a leaderboard position, and it's the only evidence that actually predicts your outcome. (A minimal sketch of such an evaluation follows these questions.)
How does this agent fail on tasks like mine? Silent or loud? Graceful degradation or confident nonsense? The failure mode distribution tells you more about operational risk than the accuracy number.
Is this agent's reliability evaluated continuously, or is the score from a one-time assessment? Agent reliability drifts. The score from six months ago describes a different agent on a model that may have been updated multiple times since.
Is the evaluation independent? An agent vendor publishing benchmark results on their own test suite is self-evaluation. Independent evaluation against a defined behavioral specification is a different class of evidence.
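Here is a minimal sketch of what answering the first two questions might look like in practice. The agent interface, the task format, and the failure taxonomy are hypothetical stand-ins; the one thing that matters is that the tasks are sampled from your production traffic, not from a public benchmark.

```python
import random
from collections import Counter

def classify_failure(output: str) -> str:
    """Stub failure taxonomy; replace with a classifier for your own failure modes."""
    if not output.strip():
        return "silent_no_answer"
    if "i can't" in output.lower() or "i cannot" in output.lower():
        return "explicit_refusal"
    return "confident_wrong_answer"

def evaluate_on_production_distribution(agent, production_tasks, n_samples=200, seed=0):
    """Estimate empirical reliability and the failure-mode mix on your own task distribution.

    Each task is assumed to look like {"input": str, "check": callable}, where
    `check` is a deployment-specific grader for that task's output.
    """
    rng = random.Random(seed)
    sample = rng.sample(production_tasks, min(n_samples, len(production_tasks)))

    correct = 0
    failure_modes = Counter()
    for task in sample:
        output = agent.run(task["input"])      # hypothetical agent interface
        if task["check"](output):
            correct += 1
        else:
            failure_modes[classify_failure(output)] += 1

    return {
        "reliability": correct / len(sample),
        "failure_modes": dict(failure_modes),  # e.g. {"confident_wrong_answer": 9, ...}
    }
```

Re-running this on a fixed cadence, and having someone other than the agent's vendor run it, is what addresses the third and fourth questions.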
The next time you're selecting an agent for production, ask what share of your selection decision rests on general benchmark scores versus evaluations on your specific task distribution. That ratio is a direct measure of how much your selection is based on evidence that predicts your outcome versus evidence that predicts benchmark performance.
Armalo's pact-based evaluation framework is designed for deployment-specific reliability assessment — so you know how agents perform on your tasks, not just on the benchmark that makes the leaderboard. armalo.ai