Vern Labs is a security lab building runtime protection, agent authorization, and adversarial testing infrastructure for AI systems deployed in production. Used by teams shipping AI in defense, finance, and healthcare.
Each product deploys independently. Together they form a unified control plane for every AI system in your organization.
Intertrace operates as a provider-agnostic gateway between your application and any LLM, embedding model, or tool server. Every prompt, retrieval, output, and tool call is inspected against policy at runtime.
// Drop-in proxy — every call is inspected at runtime.
import { Intertrace } from "@vern/intertrace";

const vern = new Intertrace({
  endpoint: process.env.VERN_ENDPOINT,
  policy: "prod.strict",
  on: {
    block: (evt) => audit.push(evt),
  },
});

// Your existing call stays the same.
const res = await vern.openai.chat.completions.create({
  model: "gpt-5-reasoning",
  messages,
  tools,
});

// Blocked calls surface as structured signals.
if (res.vern.action === "block") {
  log.warn({ reason: res.vern.rule });
}
Ghostline issues scoped capability tokens for every tool, resource, and external call an agent can make. High-impact actions are gated behind human approval with full audit trail.
Blackbox runs continuous adversarial evaluations against copilots, agents, and AI applications — producing severity-ranked findings with reproducible transcripts and an exportable coverage report.
CATEGORY               PASS  FAIL  COVERAGE
────────────────────────────────────────────
injection · direct       18     2  ████████████░░░░  91%
injection · indirect     11     3  █████████░░░░░░░  76%
jailbreak · persona      14     1  ██████████████░░  93%
pii · exfil              09     0  ████████████████ 100%
tool · misuse            07     5  ███████░░░░░░░░░  58%
privilege · abuse        12     2  ██████████████░░  86%
data · leak              15     0  ████████████████ 100%
────────────────────────────────────────────
TOTAL                    86    13  OVERALL COVERAGE 87%
Vern Labs sits between your application and the models, agents, and tools it depends on. Every surface is observable, scopable, and testable.
┌─────────────────────────────────────────────────────────────┐
│ APPLICATION LAYER │
│ copilots · internal agents · workflows · tools │
└──────────────────────────┬──────────────────────────────────┘
│ requests / streams
▼
──────────────────────────────────────────────────────────────────────────
│ VERN LABS CONTROL PLANE │
│ │
│ [01] INTERTRACE ─ inline inspection ─ policy engine │
│ prompts · outputs · tools │
│ │
│ [02] GHOSTLINE ─ capability tokens ─ approval gate │
│ scope per agent / per tool │
│ │
│ [03] BLACKBOX ─ adversarial runs ─ coverage rpt. │
│ pre-launch · continuous │
│ │
│ ────────────────────── AUDIT LEDGER ────────────────────── │
│ append-only · signed · SIEM export │
──────────────────────────────────────────────────────────────────────────
│
▼
┌─────────────────────────────────────────────────────────────┐
│ MODELS · AGENTS · TOOLS │
│ OpenAI · Anthropic · open weights · MCP │
└─────────────────────────────────────────────────────────────┘
Based on internal evaluation across feature surface, deployment flexibility, and end-to-end coverage. Category labels generalize over specific vendors in each segment.
                             VERN     PROMPT-FW  REDTEAM-SVC  IN-HOUSE
───────────────────────────────────────────────────────────────────────────────
runtime inspection           ●●●●●    ●●●○○      ●○○○○        ●●○○○
agent authorization          ●●●●●    ●○○○○      ○○○○○        ●○○○○
adversarial testing          ●●●●●    ○○○○○      ●●●●○        ●●○○○
unified control plane        ●●●●●    ●●○○○      ●○○○○        ○○○○○
self-host · air-gap          ●●●●●    ●●○○○      ●○○○○        ●●●●●
audit + SIEM export          ●●●●●    ●●○○○      ●●○○○        ●●○○○
open primitives · research   ●●●●●    ●○○○○      ●●○○○        ○○○○○
───────────────────────────────────────────────────────────────────────────────
time to first signal         < 1 day  1–2 wks    2–4 wks      3–6 mo
Vern Labs publishes research on how AI systems fail in the wild — and open-sources the primitives that help teams defend against those failures.
For teams evaluating a single product on a bounded workload.
For teams running AI in production with real users and real risk.
For regulated industries, defense, and air-gapped environments.
Vern Labs was founded by operators with backgrounds in federal cybersecurity, enterprise cloud security, and applied AI research.
Cybersecurity at NASA. TS/SCI cleared. Previously at Raytheon; a U.S. Army veteran. Serves on the Y Combinator board. A decade securing systems where the cost of a breach is measured in lives, not dashboards.
Security engineering at Microsoft, Wiz, and Google. Has built cloud security platforms that protect tens of thousands of enterprise environments. Came to Vern Labs to build the security layer the next decade of software will actually run on.
Talk to Vern Labs about securing your AI systems before they become your next attack surface.
Intertrace, Ghostline, and Blackbox are independent products with a shared control plane. Deploy one. Deploy all three. They are designed to work together but do not require each other.
Intertrace is a provider-agnostic gateway that sits between your application and any LLM, embedding model, retrieval layer, or tool server. Every request flows through a policy engine that inspects prompts, outputs, tool calls, and retrieved context against your rule set at runtime.
It is deployed as a single stateless container. Policy changes propagate in under two seconds. Streaming responses are fully supported with inline policy checks that run in parallel with the provider call — adding roughly 18ms at the median.
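The streaming behavior can be modeled in a few lines. This is an illustrative sketch, not the Intertrace SDK: the `guardedStream` helper and the `Verdict` shape are assumptions, and chunks are checked sequentially here for clarity, whereas the product's claim is that checks overlap the provider call.

```typescript
// Illustrative model (not the Intertrace SDK): wrap a token stream so each
// chunk is screened by a policy check before it reaches the caller.
type Verdict = { action: "allow" | "block"; rule?: string };

async function* guardedStream(
  chunks: AsyncIterable<string>,
  check: (chunk: string) => Promise<Verdict>,
): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    const verdict = await check(chunk);
    if (verdict.action === "block") {
      // Surface the block as a structured signal and stop streaming.
      throw new Error(`blocked by rule: ${verdict.rule}`);
    }
    yield chunk;
  }
}
```

A production gateway would pipeline the check for chunk n with delivery of chunk n−1 to hide the added latency; the sequential version above keeps the control flow obvious.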
Ghostline issues scoped capability tokens for every tool, resource, and external call an agent can make. High-impact actions route through approval queues where a human or policy evaluates each request before execution.
Tokens use the Biscuit format with custom claim extensions. Revocation is real-time and cascading — pulling a token invalidates every derived scope in flight across every running agent.
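Cascading revocation can be sketched with a simple derivation chain. This is an illustrative model, not Ghostline's API: the `CapToken` shape and the `derive` / `revoke` / `isValid` helpers are assumptions, and Biscuit's real attenuation and revocation mechanics are cryptographic rather than in-memory.

```typescript
// Illustrative model (not Ghostline's API): capability tokens that can only
// narrow scope on derivation, with revocation cascading down the chain.
interface CapToken {
  id: string;
  scopes: Set<string>;
  parent?: CapToken;
}

const revoked = new Set<string>();

function derive(parent: CapToken, id: string, scopes: string[]): CapToken {
  // A derived token may only carry scopes its parent already holds.
  const narrowed = scopes.filter((s) => parent.scopes.has(s));
  return { id, scopes: new Set(narrowed), parent };
}

function revoke(token: CapToken): void {
  revoked.add(token.id);
}

function isValid(token: CapToken): boolean {
  // A token is invalid if it, or any ancestor, has been revoked.
  for (let t: CapToken | undefined = token; t; t = t.parent) {
    if (revoked.has(t.id)) return false;
  }
  return true;
}
```

The two properties this models are the ones the text claims: scopes can only attenuate as tokens are handed down to sub-agents, and pulling one token invalidates everything derived from it.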
Blackbox runs continuous adversarial evaluations against copilots, agents, and AI applications. Every run produces severity-ranked findings, reproducible transcripts, and an exportable coverage report ready for audit and compliance review.
The suite includes OWASP LLM Top 10 plus Vern Labs' proprietary attack set, updated weekly by the research team. Every vector is versioned and deterministic so you can compare results run-to-run.
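Because every vector is versioned and deterministic, two runs can be diffed vector-by-vector. A minimal sketch, assuming a run is just a map from versioned vector ID to outcome; the `diffRuns` helper is hypothetical, not the Blackbox API.

```typescript
// Illustrative sketch (not the Blackbox API): diff two deterministic runs
// keyed by versioned vector ID, e.g. "injection/direct@3".
type Outcome = "pass" | "fail";
type Run = Map<string, Outcome>;

function diffRuns(baseline: Run, current: Run) {
  const regressions: string[] = []; // passed before, fails now
  const fixes: string[] = [];       // failed before, passes now
  for (const [vector, outcome] of current) {
    const before = baseline.get(vector);
    if (before === "pass" && outcome === "fail") regressions.push(vector);
    if (before === "fail" && outcome === "pass") fixes.push(vector);
  }
  return { regressions, fixes };
}
```

Determinism is what makes this diff meaningful: a vector that flips from pass to fail reflects a change in the system under test, not noise in the attack.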
Vern Labs runs as a single stateless container that you deploy inside your VPC, on-premises, or as a fully air-gapped instance. Everything about the architecture is designed around three constraints: low latency, no data retention, and no surprise dependencies.
Fastest path to production. Vern Labs operates the infrastructure; your data stays in our US / EU regions.
Vern operates the control plane; your LLM traffic and audit data never leave your network.
Full Vern stack in your AWS, GCP, or Azure VPC. Complete data control. Most common for finance and healthcare.
Complete deployment inside classified or disconnected environments. No outbound dependencies.
Every LLM call routed through Intertrace follows the same six-stage pipeline. Stages run in parallel where safe, and the entire hot path is under 20ms at p50 for text payloads under 8KB.
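The six stages themselves aren't enumerated here, but the parallel-where-safe behavior can be modeled generically. A purely hypothetical sketch: the stage contents and structure are assumptions, not the actual Intertrace pipeline.

```typescript
// Hypothetical model of a staged hot path: stages with no ordering
// dependency run concurrently, and the call proceeds only if all allow it.
type Stage = (payload: string) => Promise<boolean>; // true = allow

async function runPipeline(payload: string, stages: Stage[]): Promise<boolean> {
  // Independent checks are launched together; total latency is the
  // slowest stage rather than the sum of all stages.
  const verdicts = await Promise.all(stages.map((s) => s(payload)));
  return verdicts.every(Boolean);
}
```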
Vern Labs is built by people who've held TS/SCI clearances and shipped enterprise security at scale. Our architecture is designed around the assumption that you should never have to trust us with anything we don't strictly need.
Vern Labs doesn't ship lock-in. Intertrace works with any LLM provider via the OpenAI-compatible interface, Anthropic's Messages API, or as a transparent HTTP proxy. Ghostline integrates with major agent frameworks or through a low-level policy API.
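One concrete consequence of an OpenAI-compatible surface is that repointing an existing client is mostly a base-URL swap. A hedged sketch of the request rewrite: the gateway host and the `routeViaGateway` helper are hypothetical, while the path and headers follow the standard OpenAI wire format.

```typescript
// Hypothetical helper: build an OpenAI-style request that routes through a
// gateway by swapping only the base URL. Host value is illustrative.
function routeViaGateway(
  gatewayBase: string,
  path: string,
  apiKey: string,
): { url: string; headers: Record<string, string> } {
  return {
    url: `${gatewayBase.replace(/\/+$/, "")}/${path.replace(/^\/+/, "")}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
  };
}
```

The same idea applies at the SDK level: official OpenAI clients accept a base-URL override, so application code keeps calling the same methods while traffic flows through the proxy.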
Vern Labs publishes research on how AI systems fail in the wild and open-sources primitives that help teams defend against those failures. Everything we learn from our own deployments and red team engagements becomes a public artifact — papers, notes, benchmark sets, and reference implementations.
We propose a behavioral risk classifier that scores agent actions in real time based on capability context, target resource sensitivity, and historical deviation. Evaluated across 12 production agent deployments with a 41% reduction in escalations required.
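The scoring idea can be illustrated with a toy linear model. This is a hypothetical sketch of the classifier's shape, not the paper's actual model: the fields, weights, and threshold are invented for illustration.

```typescript
// Hypothetical illustration only: score an action from the three signals the
// abstract names, then gate human escalation on a threshold.
interface Action {
  capabilityBreadth: number; // 0..1: how broad the granted capability is
  targetSensitivity: number; // 0..1: sensitivity of the target resource
  deviation: number;         // 0..1: distance from historical behavior
}

function riskScore(
  a: Action,
  w = { cap: 0.3, sens: 0.4, dev: 0.3 }, // invented weights
): number {
  return w.cap * a.capabilityBreadth + w.sens * a.targetSensitivity + w.dev * a.deviation;
}

function needsEscalation(a: Action, threshold = 0.65): boolean {
  return riskScore(a) >= threshold;
}
```

The claimed 41% reduction in required escalations would come from routine low-score actions clearing the threshold check automatically instead of queuing for a human.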
Reframing prompt injection through the lens of software supply chain attacks. Proposes provenance tracking for every context token that reaches a production model.
We open-source the primitives that help the whole ecosystem get better at securing AI. MIT licensed, production-ready, welcoming contribution.
A living benchmark of 140+ attack vectors across injection, jailbreak, privilege abuse, and exfiltration. Curated weekly by the Vern research team.
CLI tool for inspecting LLM traffic on your dev machine. Pipe through <command> and see every request and response with policy markup.
Go library for issuing, validating, and cascading Biscuit-format capability tokens for autonomous agent authorization.
Reference Rego policies for common LLM security rules. Drop into OPA or Intertrace. Covers injection detection, PII redaction, and tool abuse heuristics.
Papers, notes, and open-source releases — no other mail. You can unsubscribe any time.
Vern Labs is priced to match the way teams actually adopt AI security — starting with a single workload and expanding as the risk surface grows. Pilots are free. Production is usage-based. Enterprise is negotiated.
For teams evaluating a single product on a bounded workload.
Usage-based pricing. Volume discounts kick in at scale.
Regulated, defense, and air-gapped environments. Negotiated terms.
                                 PILOT          PRODUCTION          ENTERPRISE
───────────────────────────────────────────────────────────────────────
request volume                   100k/mo        unlimited           unlimited
products included                one            all three           all three
deployment — cloud               ●              ●                   ●
deployment — self-host (VPC)     ○              ●                   ●
deployment — air-gap             ○              ○                   ●
policy rules                     standard       standard + custom   custom
adversarial test suite           OWASP + Vern   OWASP + Vern        custom + classified
support channel                  email          slack 4h SLA        24/7 IR
SOC 2 / MSA documentation        ○              ●                   ●
FedRAMP · CMMC alignment         ○              ○                   ●
dedicated solutions architect    ○              ○                   ●
single-tenant deployment         ○              ○                   ●
───────────────────────────────────────────────────────────────────────
contract length                  30 days        12 mo · mo          custom
Vern Labs is a cybersecurity research and product company based in the United States. We were founded in 2024 by operators with backgrounds in federal cybersecurity, enterprise cloud security, and applied AI research. The mission is simple: build the security infrastructure that the next decade of software will actually depend on.
Traditional security tools were designed for software that does what you tell it to. Modern AI systems reason, retrieve, call tools, and act autonomously — and the attack surface that creates is new, wide, and actively being exploited. We think the next decade of software will run on this substrate. Someone needs to build the security layer for it.
That's the work.
Cybersecurity at NASA, where he held TS/SCI clearance and worked on defensive systems for flight-critical and classified workloads. Previously at Raytheon; a U.S. Army veteran. Serves on the Y Combinator board.
Sam started Vern Labs because he spent a decade watching defense and enterprise teams treat AI like just another API — when the actual threat model is closer to adding a new autonomous agent to an organization.
Security engineering at Microsoft, Wiz, and Google. Has shipped cloud security platforms that protect tens of thousands of enterprise environments and internal production systems at hyperscale.
Joined Vern Labs to secure the substrate the next decade of software will actually run on. Leads the research team and owns the architecture of the Vern control plane.
Small team, high stakes, serious work. We hire exclusively for calibration, taste, and raw technical ability. We pay top-of-market. We ship in writing.
A real person on our team reviews every inbound. Most messages get a response within four business hours. For urgent security matters, use the hotline below.
Short notes are fine — a sentence on what you're building and where you're stuck is enough to route you to the right person. We'll come back with a 20-minute slot and a tailored reading list before the call.
Use the right channel for your question and you'll get a faster, better answer.
Pricing, procurement, pilots, reseller questions, volume deals.
sales@vernlabs.com
Architecture deep-dives, deployment planning, integration questions.
engineering@vernlabs.com
Paper collaborations, benchmark contributions, academic partnerships.
research@vernlabs.com
If you've identified a vulnerability in a Vern Labs product, deployment, or research artifact — we want to hear from you first, and we'll work with you to coordinate disclosure.
We operate a formal responsible disclosure program and publicly acknowledge researchers with permission. Critical reports get a response within 24 hours, any day of the week.