The trust layer between your AI and your business

Scale AI safely — with confidence, control, and proof.

DriftGard checks AI responses against your policies, scans for secrets and prompt injection, detects drift over time, and gives compliance teams tamper-evident audit evidence.

Policy Checks · DLP Scanner · Drift Monitoring · Tamper-Evident Audit · Human Review · AU Privacy Act (APP 1.7) · EU AI Act / ISO 42001 / NIST

AI adoption stalls when teams can't trust it

The biggest blocker to wider AI adoption isn't capability — it's confidence. Managers worry about wrong answers, compliance teams worry about policy breaches, and operators worry about the hidden effort of checking what the AI says.

What slows adoption down

  • AI gives answers that sound good but are wrong or incomplete
  • Teams can't see whether policy or brand rules were followed
  • Small model or prompt changes create behaviour drift over time
  • Manual review eats into the savings AI was supposed to deliver
  • No way to prove to an auditor that records haven't been changed

DriftGard closes the trust gap

Not just monitoring — operational control over AI behaviour. Not just safety scores — audit-ready evidence tied to versioned rules. Not just one-time testing — ongoing oversight that catches drift. And not just logging — cryptographic proof that your compliance records are intact.

How DriftGard works

Simple enough for decision makers. Powerful enough for production AI.

1. Set your rules

Turn policies, internal controls, or regulatory obligations into versioned guardrails. Upload a policy document and DriftGard generates a draft control pack.

2. Evaluate AI behaviour

Test manual prompts, review historical logs, or monitor live responses against those guardrails — with DLP scanning for secrets and prompt injection.
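
For engineers, a sketch of what a real-time check over the REST API might look like. The endpoint path, payload fields, and key format below are illustrative assumptions, not DriftGard's documented API:

    # Illustrative sketch only: URL, fields, and key format are assumptions.
    import requests

    API_KEY = "dg_live_xxx"  # hypothetical key format

    resp = requests.post(
        "https://api.driftgard.example/v1/evaluations",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "control_pack": "support-policy@3",  # assumed versioned-pack naming
            "prompt": "Can I get a refund after 60 days?",
            "response": "Sure, we always refund, no questions asked.",
        },
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())  # expected: violations found, severity, allow/block decision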

3. Spot drift early

See when answer quality, policy alignment, or risk levels shift after a model or workflow change — with statistical anomaly detection.

4. Alert the right people

Notify teams via Slack, email, or webhooks when risk spikes, rules are broken, or behaviour changes unexpectedly.

5. Focus humans where needed

Route uncertain or high-risk cases for human review. Track reviewer decisions and measure machine-human agreement.

6. Prove it with evidence

Generate compliance reports mapped to EU AI Act, ISO 42001, NIST AI RMF, or Australian Privacy Act. Every record is cryptographically hashed for tamper-evident verification.

What the platform does

From policy definition to live monitoring, audit evidence, and tamper-evident integrity — practical tools for every stage of AI rollout.

Control Packs

Versioned policy rulesets with severity levels, blocking thresholds, PII rules, DLP config, and evaluation modes. Generate drafts from plain-text policy documents.

DLP Scanner

Scans prompts and responses for 10+ PII types, 25+ secret patterns (AWS keys, Stripe, JWTs, DB connection strings), and 15+ adversarial patterns (jailbreak, prompt injection, encoding attacks).
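
To make the approach concrete, here is a minimal Python sketch of regex-based scanning. These few patterns are simplified illustrations, not DriftGard's production detector set:

    import re

    # Representative detectors only; the production pattern set is far larger.
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "stripe_secret_key": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
        "jwt": re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),
    }

    def scan(text: str) -> list[tuple[str, str]]:
        """Return (pattern_name, matched_text) pairs found in a prompt or response."""
        return [(name, m.group(0))
                for name, rx in PATTERNS.items()
                for m in rx.finditer(text)]

    hits = scan("Contact ops@example.com, key AKIAABCDEFGHIJKLMNOP")
    print(hits)  # [('email', 'ops@example.com'), ('aws_access_key', 'AKIAABCDEFGHIJKLMNOP')]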

Drift Monitoring

Compares baseline and current windows across violation rates, severity changes, and blocked-response rates. Statistical anomaly detection (z-score) catches subtle drift.
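
The z-score method is standard statistics. A minimal sketch using per-window violation rates (a generic illustration, not DriftGard's internal code):

    import statistics

    def drift_z(baseline_rates: list[float], current_rate: float) -> float:
        """Z-score of the current window's violation rate against the baseline."""
        mean = statistics.mean(baseline_rates)
        stdev = statistics.stdev(baseline_rates)
        return (current_rate - mean) / stdev if stdev else 0.0

    # Daily violation rates before a model change, then the latest window:
    baseline = [0.021, 0.019, 0.024, 0.020, 0.022, 0.018, 0.023]
    z = drift_z(baseline, 0.041)
    if abs(z) > 3:  # a common anomaly threshold; tune per workload
        print(f"drift alert: z = {z:.1f}")  # drift alert: z = 9.3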

Batch & Backtests

Upload CSV logs for bulk evaluation. Re-run historical data against new control pack versions to measure policy impact before activating changes.
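
A toy version of a bulk evaluation over a CSV log, assuming prompt/response columns and a single illustrative rule (the real evaluation runs full control packs):

    import csv
    import re

    # One illustrative rule: flag responses that leak an email address.
    EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

    def violation_rate(rows: list[dict], rule: re.Pattern) -> float:
        """Share of logged responses flagged by the rule."""
        flagged = sum(1 for row in rows if rule.search(row["response"]))
        return flagged / len(rows) if rows else 0.0

    with open("historical_logs.csv", newline="") as f:  # assumed columns: prompt, response
        rows = list(csv.DictReader(f))

    print(f"violation rate under candidate pack: {violation_rate(rows, EMAIL):.1%}")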

Synthetic Runs

Auto-generate test prompts from your control pack rules, send them to your AI endpoint, and evaluate the responses. Schedule recurring runs for continuous red-teaming.

Human Review (HITL)

Route high-risk or low-confidence evaluations to a human review queue. Track reviewer decisions, measure agreement rates, and build benchmark datasets.
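
At its simplest, agreement is the fraction of reviewed cases where the human upheld the machine decision. A generic sketch (DriftGard may compute additional statistics):

    def agreement_rate(machine: list[str], human: list[str]) -> float:
        """Fraction of reviewed cases where the reviewer upheld the machine decision."""
        return sum(m == h for m, h in zip(machine, human)) / len(machine)

    machine = ["block", "allow", "block", "allow", "block"]
    human   = ["block", "allow", "allow", "allow", "block"]
    print(f"machine-human agreement: {agreement_rate(machine, human):.0%}")  # 80%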

Compliance Reports

Generate PDF evidence reports mapped to EU AI Act, ISO 42001, NIST AI RMF, and Australian Privacy Act (APP 1.7). Schedule auto-generation on regulatory deadlines.

APP 1.7 Statement

Generate the automated decision-making transparency statement required under Australian Privacy Act APP 1.7 — directly from your active control pack configuration.

Audit Integrity

Every evaluation decision is SHA-256 hashed at write time. Hourly Merkle roots provide a second layer. On-demand verification proves records haven't been tampered with.
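
The scheme in miniature: hash each record at write time, then fold a window's hashes into a single Merkle root. The JSON canonicalisation shown is an assumption for illustration; the SHA-256 leaves plus Merkle root structure is as described above:

    import hashlib
    import json

    def record_hash(record: dict) -> str:
        """SHA-256 over a canonical JSON serialisation of one evaluation decision."""
        canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()

    def merkle_root(leaf_hashes: list[str]) -> str:
        """Pairwise-hash leaves upward until a single root remains."""
        level = [bytes.fromhex(h) for h in leaf_hashes]
        while len(level) > 1:
            if len(level) % 2:  # duplicate the last node on odd-sized levels
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    leaves = [record_hash({"id": i, "decision": "allow"}) for i in range(4)]
    print(merkle_root(leaves))  # editing any record afterwards changes the root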

What managers and stakeholders see

Where AI is too risky to expand

Which topics and response types generate the most violations.

Whether quality is improving

Violation rates, risk scores, and blocked-response trends over time.

If a change made things worse

Before/after comparison with drift metrics and anomaly detection.

Security and governance by design

Enterprise buyers review security early. DriftGard is designed around tenant isolation, data protection, and provable integrity.

DLP Scanner

  • 10+ PII types: email, phone, TFN, SSN, credit card, Medicare, passport
  • 25+ secret patterns: AWS, Stripe, GitHub, OpenAI, JWTs, DB strings
  • 15+ adversarial patterns: jailbreak, prompt injection, base64, homoglyphs
  • Separate prompt/response scanning with independent blocking

Tamper-Evident Integrity

  • SHA-256 per-record hashing on all decisions
  • Hourly Merkle roots for evaluation records
  • On-demand verification with date range selection
  • Covers evaluations, control packs, and compliance reports

Access & Isolation

  • Multi-tenant with org → project binding
  • Role-based access (admin / viewer)
  • Configurable retention and data minimisation
  • Australian-hosted options available

Use cases across industries and teams

Same platform. Different guardrails depending on your context.

Customer Support AI

Scale AI without losing trust in customer-facing answers. Monitor quality, catch failures, reduce manual QA.

Financial Services AI

Monitor advice boundaries, disclosure issues, and compliance-sensitive patterns. Audit-ready evidence for regulators.

Gambling / Wagering AI

Track inducements, harm signals, and escalation gaps. Responsible gambling guardrails with behaviour-change monitoring.

Internal Copilots

Give legal, HR, and operations teams confidence in internal AI tools with policy checks and drift visibility.

Compliance & Risk Teams

Evidence for procurement, governance reviews, and regulatory submissions. Not just dashboards — defensible proof.

Product & Engineering

Ship changes faster with backtests, synthetic runs, A/B experiments, and CI/CD benchmark checks.

Works with your AI stack

DriftGard fits into existing workflows without a platform rebuild.

Input sources

  • Node.js SDK (@driftgard/node)
  • Python SDK (driftgard)
  • REST API with API key auth
  • CSV batch uploads
  • Synthetic target APIs

Notifications

  • Slack alerts (Block Kit)
  • Email notifications
  • Webhooks (JSON + HMAC signed; see the verification sketch below)
  • Per-evaluation real-time webhooks
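
Verifying a signed webhook on the receiving side looks like the sketch below. The header name and the choice of SHA-256 as the HMAC digest are assumptions, not documented specifics:

    import hashlib
    import hmac

    def verify_webhook(secret: bytes, raw_body: bytes, signature_hex: str) -> bool:
        """Recompute HMAC-SHA256 over the raw body; compare in constant time."""
        expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature_hex)

    # In your handler (the header name below is a placeholder, not documented):
    # ok = verify_webhook(WEBHOOK_SECRET, request.body,
    #                     request.headers["X-DriftGard-Signature"])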

Output formats

  • EU AI Act / ISO 42001 / NIST AI RMF / AU Privacy Act PDFs
  • APP 1.7 transparency statement
  • CSV and JSON exports
  • CI/CD pass/fail endpoint
  • CLI for pipeline integration

Pricing

Start with a pilot audit. Scale into ongoing monitoring as your AI rollout grows.

Pilot Audit (fixed scope)
AUD $2,500
Prove fit quickly using existing logs.
  • Batch evaluation on supplied logs
  • Control Pack setup
  • Evidence export pack
  • Guided review
Run a pilot audit

Compliance (most popular)
From AUD $7,500/mo
Ongoing monitoring and governance for production AI.
  • Drift monitoring + alerts
  • DLP scanner (PII, secrets, adversarial)
  • Audit integrity verification
  • Backtests and reporting
  • AU Privacy Act (APP 1.7) reports
  • Audit logs and exports
Request a demo

Enterprise (custom)
Contact us
Advanced workflows, enterprise deployment, dedicated support.
  • Everything in Compliance
  • HITL workflows + benchmark suite
  • Synthetic runs + scheduling
  • A/B experiments across models
  • Tamper-evident Merkle root verification
  • APP 1.7 statement generator
  • Node.js + Python SDKs + CLI
  • CI/CD integration
  • Optional SSO / SAML
  • Dedicated support
Talk to sales

FAQ

Is DriftGard a runtime blocking layer?

It can be. DriftGard supports real-time evaluation via SDKs as well as post-response workflows, including batch audits, synthetic testing, and drift monitoring. Teams can start post-response and add real-time checks as needed.

What is a Control Pack?

A versioned ruleset defining acceptable AI behaviour — policies, severities, thresholds, DLP config, enforcement logic, and retention settings.

What does the DLP scanner detect?

10+ PII types (email, phone, TFN, SSN, credit cards), 25+ secret patterns (AWS keys, Stripe, GitHub tokens, JWTs, DB connection strings), and 15+ adversarial patterns (jailbreak, prompt injection, base64 encoding, homoglyph substitution).

How does audit integrity work?

Every evaluation decision is SHA-256 hashed at write time. Hourly Merkle roots cover evaluation records. On-demand verification recomputes hashes and compares against stored values — a mismatch means the record was modified after creation.

What compliance frameworks do you support?

EU AI Act (Articles 9–15), ISO 42001 (Clauses 6.1–10.1), NIST AI RMF (Govern, Map, Measure, Manage), and Australian Privacy Act (APP 1, 1.7, 3, 6, 11, 12). Reports are generated as PDFs with signed download links.

Can we test historical logs?

Yes. Batch Jobs evaluate historical AI responses at scale. Backtests re-run them against updated control pack versions to measure policy impact.

Do you support human review?

Yes. HITL workflows route high-risk cases into review queues with reason codes, overrides, notes, and audit history. Reviewer decisions feed into benchmark datasets.

How quickly can we start?

A pilot audit takes days, not months. Upload existing logs, get a clear picture of current AI risk, and establish a baseline before committing to ongoing monitoring.

Scale AI across your business — with confidence, control, and proof

Trust your AI — with proof.

Get started

Start with a demo or a pilot audit on existing logs.

Request demo / pilot audit

What you'll see

  • How guardrails are defined from policy documents
  • DLP scanning for secrets and prompt injection
  • Drift detection and alerting
  • Tamper-evident audit integrity verification
  • Compliance report generation (EU AI Act, AU Privacy Act)

Common starting point

Upload historical logs, run a pilot audit, establish a baseline — before enabling ongoing monitoring.