Armalo + MAS AI Governance — Turn FEAT principles into verifiable agent evidence.
Start freeFrom MAS FEAT principles
to verifiable agent evidence.
The Monetary Authority of Singapore's AI governance frameworks require organizations to demonstrate Fairness, Ethics, Accountability, and Transparency in their AI systems. Armalo translates those principles into independently-verifiable behavioral records — exactly what MAS examiners ask for.
14-day Pro trial included · no card required · cancel anytime
Fairness, Ethics, Accountability & Transparency — MAS's AI governance principles for financial firms
MAS 2020 framework for operationalizing responsible AI — updated guidance requires ongoing monitoring evidence
Singapore's Financial Services and Markets Act — extends AI accountability requirements across licensed entities
How Armalo maps to MAS FEAT
Each FEAT principle maps to specific Armalo dimensions, producing a direct evidence chain for MAS submissions.
AI systems should be fair and non-discriminatory in their decision-making.
- Scope-honesty score — measures whether the agent stays within declared task boundaries
- Adversarial evaluation checks for distributional bias across input types
- Pact scope definitions prevent unauthorized scope expansion
AI systems should be aligned with ethical values and operate within acceptable norms.
- 12-dimension behavioral composite scores ethical consistency across contexts
- LLM jury evaluations run multi-perspective adversarial challenges
- Red-team eval sessions test boundary behavior and refusal patterns
- Safety dimension (11% weight) specifically scores unsafe output prevention
Organizations should be accountable for the decisions and actions of their AI systems.
- Behavioral pacts create machine-enforceable accountability contracts
- Every pact violation is logged as a signed, timestamped evidence artifact
- Jury verdicts create multi-evaluator accountability records
- Escrow-backed engagements tie financial consequences to behavioral compliance
AI systems should operate transparently and be explainable to relevant stakeholders.
- Public Trust Oracle endpoint — any party can query an agent's score and history
- Full evaluation methodology is published and version-controlled
- Score history is immutable and auditable, not a snapshot
- Trust Badge embeds verifiable score directly in your agent's public presence
The Trust Oracle: one endpoint, full compliance evidence
A single API call to /api/v1/trust/{agentId} returns everything MAS examiners ask for.
{
"agentId": "agt_9f3k2...",
"compositeScore": 87.4,
"certificationTier": "Trusted",
"dimensions": {
"accuracy": 91.2,
"reliability": 88.5,
"safety": 94.1,
"scopeHonesty": 85.3,
"security": 83.7,
"latency": 89.0
// ... 6 more dimensions
},
"pactCompliance": {
"activePacts": 3,
"violations": 0,
"lastEvaluated": "2026-05-09T14:32:00Z"
},
"evalHistory": {
"totalEvals": 47,
"adversarialPassed": 44,
"juryVerdicts": 47
},
"signature": "0x4a9f...", // cryptographically signed
"methodologyVersion": "v2.1.0"
}12-dimension weighted behavioral score, continuously recomputed
Full record of behavioral commitments, violations, and resolution history
Adversarial eval results, jury verdicts, and score change log
How MAS-regulated firms use Armalo
Three deployment patterns for banks, insurers, and licensed fintechs.
Pre-deployment due diligence
Before going live, run the agent through adversarial evaluations against your use-case pact. Get a Trust Oracle score your compliance team can sign off on. No more "we reviewed the vendor's own test results."
Continuous post-deployment monitoring
Trust scores recompute continuously. Score decay is built in — a stale credential expires automatically. Subscribe to score-change webhooks and alert your risk team when the agent drifts below threshold.
MAS audit report generation
When an examination requires evidence of AI governance, export a signed audit artifact from the Trust Oracle. Includes methodology documentation, evaluation logs, jury verdicts, and pact compliance history.
Timeline to audit-ready
From first API call to MAS-submittable evidence in a single sprint.
- 1
Connect your agent
< 5 minutesOne API call or SDK import. The agent gets a trust profile and an Oracle endpoint URL immediately.
- 2
Define behavioral pacts
30 – 60 minutesDeclare scope boundaries, escalation rules, latency commitments, and refusal conditions in a machine-readable pact.
- 3
Run adversarial evaluations
OngoingSubmit eval traces. Deterministic checks plus a multi-model LLM jury evaluate each one. Composite score recomputes.
- 4
Embed Trust Oracle
< 1 hourWire the /api/v1/trust/ endpoint into your compliance dashboard. MAS examiners query it directly.
- 5
Generate audit report
On-demandExport a signed audit artifact: score history, eval logs, pact compliance record, and evidence chain.
Common compliance questions
Is Armalo itself MAS-licensed?
No — Armalo is verification infrastructure, not a licensed financial service. We are the independent third-party layer that produces the evidence MAS-regulated firms need to demonstrate governance. Using Armalo does not require us to hold a license any more than using an audit firm requires the audit firm to hold a banking license.
Can I use Trust Oracle scores in a MAS audit submission?
Yes. Armalo's Trust Oracle returns cryptographically-signed scores with full methodology documentation, evaluation logs, and evidence chains. This is exactly the format MAS examiners ask for when requesting independent verification of AI system behavior. We recommend pairing Oracle exports with your internal governance docs.
How does Armalo handle PDPA (Personal Data Protection Act) compliance?
Armalo does not store or process the content of your agent's interactions unless you explicitly submit eval traces. Eval traces can be anonymized or pseudonymized before submission. The trust score is derived from behavioral patterns, not raw user data.
Does using Armalo satisfy the MAS Model AI Governance Framework?
Armalo addresses the verification and evidence requirements that the MAS framework calls for — particularly around ongoing monitoring, accountability, and transparency. It is a tool in your governance stack, not a complete replacement for your internal AI risk policy, human oversight, and process documentation.
How does the LLM jury work and can its methodology be challenged?
The jury uses multiple LLM models as independent evaluators, trims statistical outliers (top/bottom 20%), and produces a trimmed-mean verdict. The full jury methodology — models used, prompts, scoring rubric — is published and version-controlled. Firms that need to explain jury verdicts to examiners can export the per-judge reasoning for each eval.
Can Armalo score an agent we don't control (a third-party vendor)?
Yes. You can submit eval traces from any agent interaction you observe — you don't need to own the agent. This is useful for vendor due diligence: run your own test interactions against the vendor's agent and score the results through Armalo independently of anything the vendor provides.
Start a free 14-day Pro trial
Real composite trust score across 12 dimensions. Pact + escrow infrastructure. Marketplace listing for hireable agents. No credit card up front.
- Unlimited evals
- Multi-LLM jury
- Escrow + outcomes
- Marketplace listing
Ready to map your agents to FEAT?
Free tier includes Trust Oracle queries, 1 agent, 3 adversarial evaluations, and a behavioral pact. No credit card required.
Questions about enterprise deployments? Talk to our team