Runtime Dependency Verification for AI Agents: Cryptographic Provenance at Scale
How to verify at runtime that AI agent dependencies haven't been tampered with. Covers Merkle tree integrity verification for model weights, signed plugin manifests with attestation, SLSA applied to AI agent components, Sigstore integration, and continuous integrity monitoring.
When containerized services start up on Google's infrastructure, they verify that every binary they execute was built from source code that passed code review, was compiled by the official build system, and has not been modified since compilation. This property — called binary authorization — is enforced at runtime by Google's Binary Authorization service, which gates container execution on cryptographic verification of the container image's provenance chain.
This kind of runtime integrity verification is table stakes security for traditional software at organizations with mature security programs. For AI agent systems, it barely exists. The AI agent that your organization deployed last week almost certainly:
- Does not verify the cryptographic integrity of its model weights before loading them
- Does not verify that its plugins were built from the source code you reviewed
- Does not verify that its runtime dependencies match the versions in your lock file
- Does not have a mechanism for detecting if any of these components were modified after deployment
The absence of these controls is not primarily a tooling gap — the cryptographic primitives exist. It is primarily a knowledge gap: the security community has not yet translated the hard-won lessons of software supply chain security into operational guidance for AI-specific components.
This document provides that translation. It covers the technical mechanisms for runtime integrity verification of every major component category in an AI agent deployment — model weights, plugins, container images, and runtime dependencies — and provides implementation guidance using existing open-source tooling.
TL;DR
- Runtime verification is not a build-time concern — it must be enforced at startup and continuously during operation, because supply chain attacks can occur at any point between build and execution.
- Model weights present unique challenges for cryptographic integrity verification: neural network training is nondeterministic, so behavioral hash verification is often more practical than byte-level comparison against a clean rebuild.
- Merkle tree structures provide efficient incremental integrity verification for large model weight files and behavioral audit logs, enabling verification without re-reading entire files.
- Sigstore's Rekor transparency log enables keyless, publicly auditable signing of AI artifacts that does not require managing long-lived signing keys.
- SLSA Level 3 is the practical target for AI agent deployment pipelines in 2026; Level 4 (hermetic builds) requires additional investment but should be targeted for regulated-industry deployments.
- Plugin manifests must be signed by the plugin publisher and verified by the agent runtime before plugin loading — a control that most agent frameworks do not implement but that can be added as middleware.
- Armalo's supply chain integrity dimension provides continuous monitoring of runtime component integrity, with behavioral attestations that complement cryptographic provenance verification.
The Runtime Verification Problem: Why Build-Time Security Is Not Sufficient
A common misconception in supply chain security is that verification is a build-time concern: scan dependencies when you install them, verify container images when you build them, and you're done. This misconception leads to significant blind spots.
The Time Window Between Build and Runtime
In modern cloud deployments, there is a substantial time window between when an artifact is built and when it is executed:
- Container images may be built once and deployed multiple times across weeks or months
- Model weights may be downloaded once and cached in a cloud storage bucket, then served to multiple inference instances over the model's entire production lifetime
- Plugin packages may be installed at container build time and then executed repeatedly across thousands of agent invocations without re-verification
Each of these time windows is an opportunity for supply chain attacks. An attacker who cannot compromise the build pipeline may be able to compromise the distribution channel or storage tier after the build but before runtime execution. An attacker with access to the cloud storage bucket holding model weights can modify them without touching the build pipeline at all.
Runtime verification closes this gap by repeating integrity checks at the point of execution — verifying not just "this artifact was clean when it was built" but "this artifact is clean right now, as it is about to be executed."
The Specificity of AI Agent Components
AI agent components have specific characteristics that make runtime verification both more challenging and more important than for traditional software:
Model weights are large: Frontier model weights range from gigabytes (7B quantized) to hundreds of gigabytes (70B+ full precision). Loading and hashing these files takes significant time. Runtime verification must be efficient enough not to impact startup time or inference latency unacceptably.
Model weights are nondeterministic: The same model cannot be rebuilt from source and compared byte-for-byte, because neural network training produces different weights on each run. This means traditional hash-based verification against a "clean rebuild" is not applicable.
Plugins execute in the agent's context: Unlike traditional shared libraries (which have well-defined loading and execution semantics), AI agent plugins may modify the agent's system prompt, inject context into the conversation, or alter the agent's tool-calling behavior. A compromised plugin may not produce immediately observable behavioral artifacts — its influence may be subtle and spread across many agent interactions.
Agent behavior emerges from component interactions: The behavioral integrity of an AI agent is not just the sum of its components' integrity. The interaction between model weights, system prompt, plugins, and input data produces emergent behavior that cannot be verified by checking each component in isolation. Cryptographic provenance verification must be supplemented with behavioral integrity monitoring.
Foundation: Merkle Trees for AI Component Integrity
Merkle trees provide the cryptographic foundation for efficient integrity verification of large structured data — precisely the use case presented by AI model weights, large training datasets, and behavioral audit logs.
Merkle Tree Structure Review
A Merkle tree is a hash tree where:
- Leaf nodes contain hashes of data blocks
- Non-leaf nodes contain hashes of their children's hashes
- The root hash (Merkle root) represents the integrity of the entire data structure
Key properties for AI use cases:
- Efficient verification: To verify that a specific data block has not been modified, you need only the Merkle proof path (O(log n) hashes, not the entire tree)
- Efficient updates: When a single data block changes, only O(log n) hashes need to be recomputed to update the root
- Incremental verification: Partially available data can be verified against the root as blocks arrive — important for large model files that are streamed
Merkle Tree Application to Model Weights
Large language model weight files (typically in safetensors, GGUF, or ONNX format) can be organized as Merkle trees with tensor-level granularity:
Merkle Root (32 bytes)
├── Left Subtree (embedding layer tensors)
│   ├── token_embedding_weight hash
│   ├── position_embedding_weight hash
│   └── ...
├── Middle Subtree (transformer layer tensors)
│   ├── Layer 0 subtree
│   │   ├── attention_k_weight hash
│   │   ├── attention_q_weight hash
│   │   └── ...
│   └── ...
└── Right Subtree (output layer tensors)
    ├── lm_head_weight hash
    └── ...
Benefits of tensor-level Merkle trees:
- Selective verification: verify only specific tensors rather than the entire model (useful for partial weight loading in inference optimization)
- Incremental integrity monitoring: re-verify specific tensors after inference without re-reading the entire weight file
- Efficient breach detection: if weights are modified, the Merkle proof identifies which specific tensors were changed
An implementation sketch using safetensors, building a Merkle tree over per-tensor hashes:
import hashlib
import json
from datetime import datetime, timezone
from typing import Dict, List, Optional

from safetensors import safe_open
class MerkleVerifier:
"""
Merkle tree verifier for safetensors model weight files.
Builds a Merkle tree over tensor hashes and provides inclusion proofs.
"""
def __init__(self, model_path: str):
self.model_path = model_path
self.tensor_hashes: Dict[str, str] = {}
self.merkle_root: Optional[str] = None
def build_tree(self) -> str:
"""Build Merkle tree over all tensors in the model file."""
with safe_open(self.model_path, framework="pt") as f:
tensor_names = sorted(f.keys()) # Sorted for determinism
for name in tensor_names:
tensor = f.get_tensor(name)
tensor_hash = hashlib.sha256(tensor.numpy().tobytes()).hexdigest()
self.tensor_hashes[name] = tensor_hash
# Build Merkle tree from leaf hashes
self.merkle_root = self._compute_merkle_root(
[self.tensor_hashes[name] for name in sorted(self.tensor_hashes.keys())]
)
return self.merkle_root
def _compute_merkle_root(self, hashes: List[str]) -> str:
"""Compute Merkle root from list of leaf hashes."""
if len(hashes) == 1:
return hashes[0]
if len(hashes) % 2 == 1:
hashes.append(hashes[-1]) # Duplicate last hash for odd-length lists
parent_hashes = []
for i in range(0, len(hashes), 2):
combined = (hashes[i] + hashes[i+1]).encode()
parent_hashes.append(hashlib.sha256(combined).hexdigest())
return self._compute_merkle_root(parent_hashes)
    def verify_tensor(self, tensor_name: str, expected_root: str) -> bool:
        """
        Verify model integrity against the expected Merkle root.
        Note: this sketch rebuilds the full tree and compares roots; a production
        implementation would verify an O(log n) inclusion proof for tensor_name
        instead of rehashing every tensor.
        Returns True if verification passes, False if any tensor has been modified.
        """
        if self.merkle_root is None:
            self.build_tree()
        return self.merkle_root == expected_root
    def generate_attestation(self) -> dict:
        """
        Generate an attestation of the model's Merkle root.
        The attestation can be used for provenance verification by consumers
        once it has been signed.
        """
        if self.merkle_root is None:
            self.build_tree()
        attestation = {
            "model_path": self.model_path,
            "merkle_root": self.merkle_root,
            "tensor_count": len(self.tensor_hashes),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "algorithm": "sha256"
        }
        # In production: sign with Ed25519 or use Sigstore keyless signing
        return attestation
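At startup, the verifier can gate model loading. A minimal sketch, where EXPECTED_MERKLE_ROOT is a placeholder for a root obtained from a signed, out-of-band attestation:

# EXPECTED_MERKLE_ROOT is assumed to come from a signed attestation distributed out of band
verifier = MerkleVerifier("model-weights.safetensors")
if not verifier.verify_tensor("lm_head_weight", expected_root=EXPECTED_MERKLE_ROOT):
    raise RuntimeError("Model weight integrity check failed; refusing to load model")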
Merkle Trees for Behavioral Audit Logs
AI agent behavioral audit logs — records of every tool call, every reasoning step, every output — can be organized as append-only Merkle trees, enabling:
- Tamper-evident audit logs (any modification to historical records changes the root)
- Efficient proof of inclusion (prove that a specific event occurred without exposing the full log)
- Incremental appending (adding new events updates only O(log n) nodes)
import hashlib
import json
from typing import Dict, List

class AppendOnlyMerkleLog:
    """
    Append-only Merkle tree for AI agent behavioral audit logs.
    Each leaf represents a single agent action/event.
    The root hash at any point represents the complete history up to that point.
    """
    def __init__(self):
        self.leaves: List[str] = []  # SHA-256 hashes of log entries
        self.checkpoints: Dict[int, str] = {}  # index → root hash at that index
def append(self, log_entry: dict) -> str:
"""
Append a new log entry and return the new Merkle root.
The log entry is serialized to canonical JSON before hashing.
"""
canonical_json = json.dumps(log_entry, sort_keys=True, ensure_ascii=True)
entry_hash = hashlib.sha256(canonical_json.encode()).hexdigest()
self.leaves.append(entry_hash)
# Store checkpoint at every 1000 entries
if len(self.leaves) % 1000 == 0:
root = self._compute_merkle_root(self.leaves[:])
self.checkpoints[len(self.leaves)] = root
return self._compute_merkle_root(self.leaves[:])
def prove_inclusion(self, entry_index: int) -> List[str]:
"""
Generate a Merkle inclusion proof for the entry at entry_index.
The proof is a list of sibling hashes along the path from the leaf to the root.
"""
# Implementation of standard Merkle inclusion proof generation
proof_path = []
idx = entry_index
current_hashes = self.leaves[:]
while len(current_hashes) > 1:
if idx % 2 == 0:
sibling_idx = idx + 1 if idx + 1 < len(current_hashes) else idx
else:
sibling_idx = idx - 1
proof_path.append(current_hashes[sibling_idx])
            # Move up to parent level
            current_hashes = self._compute_parent_level(current_hashes)
            idx //= 2
        return proof_path

    def _compute_parent_level(self, hashes: List[str]) -> List[str]:
        """Hash adjacent pairs to produce the next level up, duplicating the last hash if odd."""
        if len(hashes) % 2 == 1:
            hashes = hashes + [hashes[-1]]
        return [
            hashlib.sha256((hashes[i] + hashes[i + 1]).encode()).hexdigest()
            for i in range(0, len(hashes), 2)
        ]

    def _compute_merkle_root(self, hashes: List[str]) -> str:
        """Reduce a list of leaf hashes level by level until a single root remains."""
        while len(hashes) > 1:
            hashes = self._compute_parent_level(hashes)
        return hashes[0]
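The consumer side of an inclusion proof is not shown above. A minimal sketch, matching the pairing order used by prove_inclusion (even indices hash on the left), recomputes the root from a leaf hash and its sibling path:

def verify_inclusion(leaf_hash: str, entry_index: int,
                     proof_path: List[str], expected_root: str) -> bool:
    """Recompute the Merkle root from a leaf and its sibling path; compare to the expected root."""
    current, idx = leaf_hash, entry_index
    for sibling in proof_path:
        # Even indices are left children, so their sibling hashes on the right
        combined = current + sibling if idx % 2 == 0 else sibling + current
        current = hashlib.sha256(combined.encode()).hexdigest()
        idx //= 2
    return current == expected_root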
Signed Plugin Manifests with Attestation
Agent plugins require a verification mechanism that goes beyond simple hash verification — the plugin is not just a static artifact but a component that interacts dynamically with the agent's reasoning system. A signed plugin manifest provides the verification layer.
Plugin Manifest Structure
A plugin manifest is a signed document that describes:
- The plugin's identity (name, version, publisher)
- The plugin's behavioral commitments (what it does, what it accesses)
- Cryptographic hashes of all plugin components (code, schemas, configuration)
- Supply chain provenance (build pipeline attestation, SBOM reference)
- Validity period and revocation mechanism
{
"pluginManifest": {
"version": "1.0",
"plugin": {
"name": "enterprise-document-search",
"version": "2.1.3",
"publisher": "VendorCorp",
"publisherKeyId": "key:vendorcorp-plugin-signing-2026"
},
"components": {
"toolDefinition": {
"file": "tool-definition.json",
"hash": "sha256:abc123..."
},
"executionEndpoint": {
"url": "https://api.vendorcorp.com/plugins/doc-search/v2",
"expectedTlsCertFingerprint": "sha256:def456..."
},
"pythonPackage": {
"name": "vendorcorp-doc-search",
"version": "2.1.3",
"packageHash": "sha256:ghi789..."
}
},
"behavioralCommitments": {
"dataAccessScope": ["reads documents from approved source list", "does not retain query history"],
"networkScope": ["only contacts api.vendorcorp.com:443"],
"credentialHandling": "credentials held in memory only, not logged or transmitted",
"outputSanitization": "external content is escaped before inclusion in agent context"
},
"provenance": {
"buildPipelineAttestationUri": "https://rekor.sigstore.dev/api/v1/log/entries?hash=sha256:jkl012...",
"sbomUri": "https://vendorcorp.com/plugins/doc-search/v2.1.3/sbom.json",
"evaluationReportUri": "https://armalo.ai/api/v1/trust/component?id=plugin:vendorcorp/doc-search@2.1.3"
},
"validity": {
"notBefore": "2026-04-01T00:00:00Z",
"notAfter": "2027-04-01T00:00:00Z",
"revocationUrl": "https://vendorcorp.com/plugins/revocation/doc-search"
},
"signature": {
"algorithm": "Ed25519",
"signature": "base64encodedEd25519signature...",
"certificateChain": "base64encodedcertchain..."
}
}
}
Plugin Manifest Verification in Agent Runtime
The agent runtime should verify plugin manifests before loading any plugin:
import base64
import hashlib
import json
import time
from datetime import datetime

import requests
from cryptography.hazmat.primitives.asymmetric import ed25519
class PluginManifestVerifier:
"""
Verifies plugin manifests before allowing plugin loading.
Enforces behavioral commitment documentation and cryptographic integrity.
"""
def __init__(self, trusted_publisher_keys: dict):
"""
trusted_publisher_keys: dict mapping publisher name → Ed25519 public key bytes
"""
self.trusted_keys = trusted_publisher_keys
def verify_manifest(self, manifest: dict, plugin_package_path: str) -> tuple[bool, str]:
"""
Verify a plugin manifest.
Returns: (is_valid, reason)
"""
# 1. Check validity period
now = time.time()
not_before = self._parse_iso_timestamp(manifest["validity"]["notBefore"])
not_after = self._parse_iso_timestamp(manifest["validity"]["notAfter"])
if now < not_before:
return False, "Plugin manifest not yet valid"
if now > not_after:
return False, "Plugin manifest has expired"
# 2. Verify signature
publisher = manifest["plugin"]["publisher"]
if publisher not in self.trusted_keys:
return False, f"Unknown plugin publisher: {publisher}"
public_key = ed25519.Ed25519PublicKey.from_public_bytes(
self.trusted_keys[publisher]
)
# Signature is over the manifest with signature field removed
        manifest_for_signing = {k: v for k, v in manifest.items() if k != "signature"}
        manifest_bytes = json.dumps(manifest_for_signing, sort_keys=True).encode()
        try:
            sig_bytes = base64.b64decode(manifest["signature"]["signature"])
            public_key.verify(sig_bytes, manifest_bytes)
        except Exception as e:
            return False, f"Signature verification failed: {e}"
# 3. Verify package hash
with open(plugin_package_path, "rb") as f:
actual_hash = hashlib.sha256(f.read()).hexdigest()
expected_hash = manifest["components"]["pythonPackage"]["packageHash"]
expected_hash_value = expected_hash.replace("sha256:", "")
        if actual_hash != expected_hash_value:
            return False, f"Package hash mismatch: expected {expected_hash_value}, got {actual_hash}"
# 4. Check revocation
revocation_url = manifest["validity"]["revocationUrl"]
try:
resp = requests.get(revocation_url, timeout=5)
if resp.status_code == 200:
revocation_data = resp.json()
plugin_version = manifest["plugin"]["version"]
if plugin_version in revocation_data.get("revokedVersions", []):
return False, f"Plugin version {plugin_version} has been revoked"
except Exception:
pass # Fail open for revocation check (consider fail-closed for high-security environments)
        # 5. Verify TLS certificate fingerprint for execution endpoint
        # (Implementation: connect and compare actual cert fingerprint against manifest)
        return True, "Manifest verified"

    @staticmethod
    def _parse_iso_timestamp(ts: str) -> float:
        """Parse an ISO 8601 timestamp (e.g., 2026-04-01T00:00:00Z) to a Unix epoch float."""
        return datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp()
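Wiring this into the agent runtime can be a simple gate in the plugin loader. A sketch, where load_trusted_keys is a hypothetical helper that reads pinned publisher keys from secure configuration:

# load_trusted_keys() is hypothetical: it should read pinned publisher keys from secure config
verifier = PluginManifestVerifier(trusted_publisher_keys=load_trusted_keys())
is_valid, reason = verifier.verify_manifest(manifest, "plugins/vendorcorp-doc-search-2.1.3.whl")
if not is_valid:
    raise RuntimeError(f"Refusing to load plugin: {reason}")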
Sigstore Integration: Keyless Signing for AI Artifacts
Managing long-lived signing keys is a significant operational burden. Key management errors — lost keys, compromised keys, stale key rotation — are a common failure mode in signing infrastructure. Sigstore provides a keyless signing approach that eliminates long-lived keys by leveraging short-lived OIDC-based credentials.
How Sigstore Works
Sigstore consists of three components:
- Fulcio: A certificate authority that issues short-lived (10-minute TTL) X.509 certificates based on OIDC identity tokens. When a developer or CI system signs an artifact with Sigstore, Fulcio issues a certificate binding the signing key to the signer's OIDC identity.
- Rekor: An immutable, append-only transparency log that records all signing events. Every Sigstore signature produces a Rekor entry that is publicly verifiable.
- cosign: The client tool for signing and verifying artifacts using Sigstore.
For AI artifacts, this means:
- Model weights can be signed by the CI/CD system that trained or packaged them, using the CI system's OIDC identity (e.g., GitHub Actions identity)
- The signature is recorded in Rekor with a timestamp that cannot be retroactively falsified
- Anyone can verify the signature without access to any long-lived private key — the ephemeral signing key is discarded after signing, and the certificate expires at the end of its 10-minute TTL
- The public Rekor log provides an audit trail of all signing events that is independent of the artifact publisher
Signing Model Artifacts with Sigstore
# Sign a model weight file with Sigstore (GitHub Actions environment)
cosign sign-blob \
--output-signature model-weights.safetensors.sig \
--output-certificate model-weights.safetensors.crt \
--rekor-url https://rekor.sigstore.dev \
model-weights.safetensors
# The signature and certificate are stored alongside the model weights
# The Rekor entry is publicly verifiable at https://rekor.sigstore.dev
# Verify model weights before loading (consumer side)
cosign verify-blob \
--signature model-weights.safetensors.sig \
--certificate model-weights.safetensors.crt \
--certificate-identity-regexp "https://github.com/company/ai-models/.*" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
--rekor-url https://rekor.sigstore.dev \
model-weights.safetensors
# Output: Verified OK (or error with reason)
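To make this check a startup invariant rather than a manual step, an agent runtime can shell out to cosign before loading weights. A minimal sketch that fails closed (identity values mirror the example above and should be adjusted for your pipeline):

import subprocess

def verify_model_before_load(model_path: str) -> None:
    """Run cosign verify-blob on model weights at startup; raise (fail closed) on any failure."""
    result = subprocess.run(
        [
            "cosign", "verify-blob",
            "--signature", f"{model_path}.sig",
            "--certificate", f"{model_path}.crt",
            "--certificate-identity-regexp", "https://github.com/company/ai-models/.*",
            "--certificate-oidc-issuer", "https://token.actions.githubusercontent.com",
            "--rekor-url", "https://rekor.sigstore.dev",
            model_path,
        ],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        raise RuntimeError(f"Model signature verification failed: {result.stderr.strip()}")

verify_model_before_load("model-weights.safetensors")  # gate before the weights are deserialized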
Container Image Signing for Agent Deployments
Container images containing AI agents should be signed with Sigstore using cosign, and image signature verification should be enforced in the deployment pipeline:
# Sign container image during CI build
cosign sign \
--rekor-url https://rekor.sigstore.dev \
registry.company.com/ai-agent:v2.4.1@sha256:containerdigest123...
# Verify signature before deployment
cosign verify \
--certificate-identity-regexp "https://github.com/company/ai-agent/.*" \
--certificate-oidc-issuer https://token.actions.githubusercontent.com \
registry.company.com/ai-agent:v2.4.1@sha256:containerdigest123...
For Kubernetes deployments, the Sigstore Policy Controller (or OPA Gatekeeper with cosign-based policies) can enforce signature verification before pod admission:
# Sigstore Policy Controller ClusterImagePolicy
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
name: ai-agent-image-policy
spec:
images:
- glob: "registry.company.com/ai-agent/**"
authorities:
- keyless:
url: https://fulcio.sigstore.dev
identities:
- issuer: https://token.actions.githubusercontent.com
subjectRegExp: https://github.com/company/ai-agent/.*
ctlog:
url: https://rekor.sigstore.dev
trustRootRef: default
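Once this policy is applied, the admission webhook rejects any pod referencing an image under the matched glob unless the image carries a Rekor-logged keyless signature that chains to the specified GitHub Actions identity.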
SLSA for AI Agent Deployment Pipelines
Supply-chain Levels for Software Artifacts (SLSA) provides a progressive security maturity framework. The mapping below uses the original four-level SLSA model (v0.1); the current SLSA v1.0 Build track tops out at Level 3, but Level 4's two-party-review and hermetic-build requirements remain a useful high-assurance target. The following mapping applies these levels to AI agent deployment pipelines.
SLSA Level 1: Documentation (Achievable Immediately)
At SLSA Level 1, the build and training processes are documented, and provenance is generated (but not verified).
For AI agents, Level 1 requires:
- Written documentation of the model training procedure
- Written documentation of the agent deployment pipeline
- Basic provenance metadata generated (build time, pipeline identifier, model version)
- Model card documenting model characteristics and evaluation results
Implementation: At minimum, ensure that every deployment artifact includes metadata about what it contains and how it was produced. This can be as simple as a JSON file:
{
"provenance": {
"buildTime": "2026-05-10T00:00:00Z",
"pipeline": "github-actions:company/ai-agent/.github/workflows/deploy.yml@refs/heads/main",
"modelVersion": "gpt-4o-mini@2024-07-18",
"agentVersion": "2.4.1",
"builderIdentity": "github-actions"
}
}
SLSA Level 2: Signed Provenance (Recommended Baseline)
At SLSA Level 2, the build service generates signed provenance. Consumers can verify that artifacts were produced by the claimed pipeline.
For AI agents, Level 2 adds:
- All deployment artifacts are signed by the build service using Sigstore
- Provenance includes: model weights source and hash, plugin versions and hashes, runtime dependency hashes
- Consumers verify signatures before deployment
Implementation with GitHub Actions and SLSA GitHub Generator:
# .github/workflows/deploy.yml
name: Build and Deploy AI Agent
on:
push:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
permissions:
id-token: write # Required for SLSA provenance
contents: read
steps:
- uses: actions/checkout@v4
- name: Download and verify model weights
        run: |
          # Download model weights plus their detached signature and certificate
          wget -q https://weights.company.com/model-v2.4.1.safetensors
          wget -q https://weights.company.com/model-v2.4.1.safetensors.sig
          wget -q https://weights.company.com/model-v2.4.1.safetensors.crt
          # Verify signature before using
          cosign verify-blob \
            --signature model-v2.4.1.safetensors.sig \
            --certificate model-v2.4.1.safetensors.crt \
            --certificate-identity-regexp ".*@weights.company.com" \
            --certificate-oidc-issuer https://accounts.google.com \
            model-v2.4.1.safetensors
      - name: Build container image
        run: docker build -t registry.company.com/ai-agent:${{ github.sha }} .
      - name: Push and sign container image
        id: push
        run: |
          docker push registry.company.com/ai-agent:${{ github.sha }}
          cosign sign --rekor-url https://rekor.sigstore.dev \
            registry.company.com/ai-agent:${{ github.sha }}
          DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' registry.company.com/ai-agent:${{ github.sha }} | cut -d@ -f2)
          echo "digest=${DIGEST}" >> "$GITHUB_OUTPUT"
    outputs:
      digest: ${{ steps.push.outputs.digest }}
  # The SLSA container generator is a reusable workflow: it must be invoked at the
  # job level (not as a step) and takes the image digest, not the git SHA.
  provenance:
    needs: build
    permissions:
      actions: read
      id-token: write
      packages: write
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.0.0
    with:
      image: registry.company.com/ai-agent
      digest: ${{ needs.build.outputs.digest }}
SLSA Level 3: Hardened Build (Target for Production)
At SLSA Level 3, the build environment is isolated from the development network, changes require code review, and build artifacts are fully traceable.
For AI agents, Level 3 adds:
- Training and deployment pipelines run in isolated environments with no access to development networks
- All changes to training code and agent configuration require two-party code review
- Model fine-tuning runs in a hermetic environment where inputs are fully determined by the training configuration
- SBOM generated and signed for every deployment artifact
Key challenge: Model fine-tuning at Level 3 pushes toward hermetic, reproducible execution — the fine-tuning job's inputs are fully declared, and the same inputs should yield equivalent outputs. This is difficult because neural network training involves GPU-level nondeterminism. In practice, Level 3 for model training means: identical code + identical data + identical hyperparameters produces outputs that pass behavioral hash verification (not byte-for-byte identical weights, but behaviorally equivalent models).
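What behavioral hash verification can look like in practice — a minimal sketch, assuming a deterministic generate_fn (greedy decoding, temperature 0) and a fixed, version-controlled evaluation prompt set:

import hashlib
import json
from typing import Callable, List

def behavioral_hash(generate_fn: Callable[[str], str], eval_prompts: List[str]) -> str:
    """
    Hash a model's outputs on a fixed evaluation set so that behaviorally
    equivalent checkpoints produce the same digest, even when their weights
    differ byte-for-byte due to training nondeterminism.
    """
    outputs = [generate_fn(prompt) for prompt in eval_prompts]  # deterministic decoding assumed
    canonical = json.dumps(outputs, sort_keys=True, ensure_ascii=True)
    return hashlib.sha256(canonical.encode()).hexdigest()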
SLSA Level 4: Two-Party Review + Hermetic Builds (Target for High-Assurance)
At SLSA Level 4, every build input is locked and fully auditable, and two independent parties must review all changes.
For AI agents, Level 4 adds:
- All model training inputs (data, code, hyperparameters) are content-addressed and version-controlled
- Two authorized parties must review and approve training configuration changes
- Build environment is fully hermetic (no network access during training)
- Formal reproducibility verification: running the same training configuration twice produces behaviorally equivalent models within documented statistical bounds
Level 4 is the appropriate target for AI agents deployed in regulated industries (healthcare, finance, critical infrastructure) and for agents with high-privilege access to sensitive systems.
Continuous Integrity Monitoring
Runtime verification is not a one-time check — it must be continuous, because supply chain compromises can occur at any time after initial deployment.
Integrity Check Schedule
Implement a schedule of ongoing integrity checks; a sketch of one periodic check (the OSV.dev CVE lookup) follows these lists:
At startup (every container/process start):
- Verify container image signature
- Verify model weight signatures
- Verify plugin manifest signatures
- Compare runtime dependency versions against lock file
Periodic (every N hours):
- Re-verify model weight Merkle root against stored expected value
- Check for new CVEs in runtime dependencies (via OSV.dev API)
- Check plugin revocation endpoints
- Verify behavioral hash against baseline (run fixed evaluation set)
On event triggers:
- Re-verify all signatures when a new model version is deployed
- Re-verify plugin manifests when plugins are updated
- Re-verify behavioral hash after any system configuration change
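As one concrete example of the periodic checks above, the CVE lookup can be implemented against the public OSV.dev query API — a minimal sketch, assuming pinned PyPI dependencies parsed from the lock file:

import requests

def check_dependency_cves(pinned_packages: dict) -> list:
    """Query the OSV.dev API for known vulnerabilities in pinned runtime dependencies."""
    findings = []
    for name, version in pinned_packages.items():
        resp = requests.post(
            "https://api.osv.dev/v1/query",
            json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
            timeout=10,
        )
        resp.raise_for_status()
        vulns = resp.json().get("vulns", [])
        if vulns:
            findings.append({
                "package": name,
                "version": version,
                "vuln_ids": [v["id"] for v in vulns],
            })
    return findings

# Example: findings = check_dependency_cves({"requests": "2.31.0", "safetensors": "0.4.2"})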
Integrity Monitoring Dashboard
Security teams need visibility into the integrity status of all deployed agents. A minimal integrity monitoring dashboard should surface:
- Signature verification status: For each deployed agent, when were signatures last verified? Were they valid?
- Behavioral hash drift: Has the behavioral hash changed since last verified? By how much?
- Dependency vulnerability status: Are any runtime dependencies affected by known CVEs?
- Plugin revocation status: Have any deployed plugins been revoked?
- SLSA level compliance: What SLSA level does each deployed agent achieve?
Integration with SIEM
Integrity monitoring events should flow into the organization's SIEM for correlation with other security telemetry:
{
"event": "agent_integrity_check",
"timestamp": "2026-05-10T12:00:00Z",
"agent_id": "enterprise-assistant-v2.4.1",
"check_type": "model_weight_verification",
"result": "PASS",
"details": {
"expected_merkle_root": "a1b2c3...",
"actual_merkle_root": "a1b2c3...",
"verification_method": "Ed25519",
"signing_identity": "https://github.com/company/ai-models/.github/workflows/publish.yml@refs/heads/main"
}
}
A FAILED integrity check event should trigger a P0 security alert with automated response: quarantine the affected agent instance, alert the security team, and initiate the supply chain incident response playbook.
How Armalo Supports Runtime Dependency Verification
Armalo's trust oracle and supply chain integrity scoring integrate directly with runtime dependency verification infrastructure.
Trust Oracle Verification Calls
Organizations can integrate Armalo's trust oracle into their runtime verification checks:
import os
from datetime import datetime, timezone, timedelta

import requests

ARMALO_API_KEY = os.environ["ARMALO_API_KEY"]  # provisioned via a secret store in practice
def verify_agent_trust_before_deployment(agent_id: str, required_slsa_level: int = 2) -> bool:
"""
Query Armalo trust oracle to verify agent meets supply chain integrity requirements.
Returns True if agent passes all required checks.
"""
    response = requests.get(
        "https://armalo.ai/api/v1/trust/",
        params={
            "agent_id": agent_id,
            "check_supply_chain": True,
            "min_slsa_level": required_slsa_level
        },
        headers={"X-Pact-Key": ARMALO_API_KEY}
    )
    if response.status_code != 200:
        return False  # Fail closed: if we can't verify, don't deploy
trust_data = response.json()
# Check minimum trust thresholds
if trust_data.get("supply_chain_integrity_score", 0) < 0.7:
return False
if trust_data.get("slsa_level", 0) < required_slsa_level:
return False
    # Check that behavioral attestation is current (within 30 days)
    last_eval = trust_data.get("last_evaluation_timestamp")
    if not last_eval or is_older_than_days(last_eval, 30):
        return False
    return True

def is_older_than_days(iso_timestamp: str, days: int) -> bool:
    """Return True if the ISO 8601 timestamp is more than `days` days in the past."""
    ts = datetime.fromisoformat(iso_timestamp.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - ts > timedelta(days=days)
Behavioral Attestations as Runtime Integrity Signals
Armalo's signed behavioral attestations can be consumed as runtime integrity signals. An attestation that was valid at the time of deployment provides a baseline against which behavioral drift can be measured:
- At deployment time: fetch and store the current behavioral attestation from Armalo
- During operation: periodically re-verify behavioral hash against the attestation baseline
- On drift detection: query Armalo trust oracle for updated attestation; if the behavioral change is unexplained by a registered model update, escalate as a potential supply chain compromise
Conclusion: Runtime Verification as a Security Invariant
The goal of runtime dependency verification is to make supply chain integrity a continuously maintained invariant, not a point-in-time property. An AI agent deployment that passes integrity checks at deployment time but is not re-verified during its operational lifetime provides security guarantees that degrade continuously as the threat landscape evolves.
The technical components are available: Merkle trees for efficient integrity verification of large model files, Sigstore for keyless signing of AI artifacts, SLSA for progressive maturity in deployment pipeline security, and behavioral hash verification for model-level integrity monitoring. What remains is integration — connecting these components into a coherent runtime verification system that runs continuously against deployed agents.
The organizations that implement runtime dependency verification for their AI agent deployments will have a fundamentally different security posture than those that do not. When a supply chain compromise occurs — and history suggests it will — they will detect it quickly, contain the blast radius effectively, and recover with confidence. Those without these controls will be left wondering, long after the fact, whether their agents were ever clean.
Runtime verification is not an advanced security practice reserved for the largest organizations. It is a baseline hygiene requirement for any AI agent deployment that handles sensitive data or takes consequential actions. The tooling is available. The imperative is clear.