VL–OPS / 04.17.26
LIVE
Filing / 001–A
Category
Jurisdiction
Revision
Issue date
VL-SYS-001
Security infra
Global
0.4.2 — in service
17 APR 2026
§ 001

Security infrastructure
for systems that reason,
retrieve, and act.

Vern Labs is a security lab building runtime protection, agent authorization, and adversarial testing infrastructure for AI systems deployed in production. Used by teams shipping AI in defense, finance, and healthcare.

Deployment
Self-host / VPC / cloud
Gateway p50
~18ms inline
Compliance
SOC 2 / FedRAMP-align
Providers
Any LLM · OSS · MCP
Isolation
Air-gap capable
Retention
Zero by default
— Built with alumni from
Y Combinator · NASA · Microsoft · Wiz · Raytheon · Google
§ 003
Intertrace
Runtime security gateway · in service

Inline inspection of every AI transaction.

Intertrace operates as a provider-agnostic gateway between your application and any LLM, embedding model, or tool server. Every prompt, retrieval, output, and tool call is inspected against policy at runtime.

p50 overhead
~18ms inline · streaming preserved
Policy engine
Rego + custom detectors
Detectors
Injection · PII · jailbreak · exfil
Integrations
Python · Node · Go · HTTP proxy
Audit
Immutable · SIEM export
vern/intertrace.ts
SDK · 0.4.2
// Drop-in proxy — every call is inspected at runtime.
import { Intertrace } from "@vern/intertrace";

const vern = new Intertrace({
  endpoint: process.env.VERN_ENDPOINT,
  policy:   "prod.strict",
  on: {
    block: (evt) => audit.push(evt),
  },
});

// Your existing call stays the same.
const res = await vern.openai.chat.completions.create({
  model: "gpt-5-reasoning",
  messages,
  tools,
});

// Blocked calls surface as structured signals.
if (res.vern.action === "block") {
  log.warn({ reason: res.vern.rule });
}
READY · signed · 04.17.26
— copy
— INTERTRACE / TRAFFIC TRACE
LIVE
Signal 01 · inbound
p50 · 18ms
§ 004
Ghostline
Agent authorization · in service
Scope graph · agent.ops.runner
State · gated
BOUNDARY · TENANT SCOPE · READ SCOPE · ACT ROOT ACTION · DELETE AWAITING APPROVAL QUEUED · 00:12
4 of 12 scopes granted
1 action awaiting approval

Authorization at the action layer.

Ghostline issues scoped capability tokens for every tool, resource, and external call an agent can make. High-impact actions are gated behind human approval with full audit trail.

Token format
Biscuit / custom claims
Approval modes
Inline · async · policy-only
Agent frameworks
LangChain · LangGraph · MCP · custom
Revocation
Real-time · cascading
Audit
Append-only ledger
§ 005
Blackbox
Adversarial testing harness · in service

Stress your AI before adversaries do.

Blackbox runs continuous adversarial evaluations against copilots, agents, and AI applications — producing severity-ranked findings with reproducible transcripts and an exportable coverage report.

Suite
OWASP LLM Top 10 + Vern custom
Vectors
140+ · updated weekly
Runs
Pre-launch · scheduled · on PR
Reports
PDF · JSON · SARIF
Determinism
Seeded · replayable
Run · blackbox-4182
Coverage report
CATEGORY             PASS  FAIL  COVERAGE
────────────────────────────────────────────────────────
injection · direct     18     2  ██████████████░░   90%
injection · indirect   11     3  █████████████░░░   79%
jailbreak · persona    14     1  ███████████████░   93%
pii · exfil             9     0  ████████████████  100%
tool · misuse           7     5  █████████░░░░░░░   58%
privilege · abuse      12     2  ██████████████░░   86%
data · leak            15     0  ████████████████  100%
────────────────────────────────────────────────────────
TOTAL                  86    13  OVERALL COVERAGE   87%
Findings
13 total · 4 high
Runtime
6m 44s
Report
report-4182.pdf
Illustrative — replace with your own surface
§ 006
Architecture
System topology

A single control plane for three layers of defense.

Vern Labs sits between your application and the models, agents, and tools it depends on. Every surface is observable, scopable, and testable.

Deploy
VPC · hybrid · cloud · air-gap
Latency
p50 18ms · p99 45ms
Observability
OTEL · SIEM · S3 audit
FIG 006.01 — System diagram
scale · 1:1
            ┌─────────────────────────────────────────────────────────────┐
            │                  APPLICATION  LAYER                         │
            │   copilots  ·  internal agents  ·  workflows  ·  tools      │
            └──────────────────────────┬──────────────────────────────────┘
                                       │  requests / streams
 ──────────────────────────────────────────────────────────────────────────
                      VERN  LABS  CONTROL  PLANE                        
                                                                       
    [01] INTERTRACE   ─ inline inspection          ─ policy engine  
                        prompts · outputs · tools                     
                                                                       
    [02] GHOSTLINE    ─ capability tokens           ─ approval gate  
                        scope per agent / per tool                    
                                                                       
    [03] BLACKBOX     ─ adversarial runs            ─ coverage rpt.  
                        pre-launch · continuous                        
                                                                       
    ──────────────────────  AUDIT LEDGER  ──────────────────────       
              append-only  ·  signed  ·  SIEM export                   
 ──────────────────────────────────────────────────────────────────────────
                                       │
                                       ▼
            ┌─────────────────────────────────────────────────────────────┐
            │              MODELS  ·  AGENTS  ·  TOOLS                    │
            │         OpenAI · Anthropic · open weights · MCP             │
            └─────────────────────────────────────────────────────────────┘
Footprint
1 container · < 300MB
HA
Stateless · horizontal
Deploy
Docker · Helm · Terraform
§ 007
Assessment
Coverage vs. adjacent categories

Capability overlap with adjacent categories.

Based on internal evaluation across feature surface, deployment flexibility, and end-to-end coverage. Category labels generalize over specific vendors in each segment.

                                    VERN      PROMPT-FW    REDTEAM-SVC   IN-HOUSE
───────────────────────────────────────────────────────────────────────────────
runtime inspection                    ●●●●●        ●●●○○          ●○○○○        ●●○○○
agent authorization                   ●●●●●        ●○○○○          ○○○○○        ●○○○○
adversarial testing                   ●●●●●        ○○○○○          ●●●●○        ●●○○○
unified control plane                 ●●●●●        ●●○○○          ●○○○○        ○○○○○
self-host · air-gap                   ●●●●●        ●●○○○          ●○○○○        ●●●●●
audit + SIEM export                   ●●●●●        ●●○○○          ●●○○○        ●●○○○
open primitives · research            ●●●●●        ●○○○○          ●●○○○        ○○○○○
───────────────────────────────────────────────────────────────────────────────
time to first signal                 < 1 day      1–2 wks        2–4 wks       3–6 mo
Category ratings · Vern Labs internal evaluation · Q1 2026
§ 008
Research
Papers · notes · primitives
§ 009
Terms
Engagement tiers

Start with a pilot. Scale to production. Negotiate for enterprise.

Tier / 01

Pilot

30 days
$0

For teams evaluating a single product on a bounded workload.

  • + 100k requests / mo
  • + One product of choice
  • + Cloud deployment
  • + Email support
Start pilot →
MOST TEAMS
Tier / 02

Production

Annual
Custom

For teams running AI in production with real users and real risk.

  • + Usage-based · unlimited
  • + All three products
  • + Self-host or cloud
  • + Slack channel · 4hr SLA
  • + SOC 2 reports · MSA
Talk to sales
Tier / 03

Enterprise

● RESTRICTED
Contact

For regulated industries, defense, and air-gapped environments.

  • + Air-gap · on-prem
  • + FedRAMP · CMMC align
  • + Dedicated SA
  • + Custom SLAs · red team
  • + White-glove onboarding
Request brief →
§ 010
The Lab
Founding team · personnel

Built by engineers who've shipped security at scale.

Vern Labs was founded by operators with backgrounds in federal cybersecurity, enterprise cloud security, and applied AI research.

SO

Sam Oyan

Co-founder · CEO

Cybersecurity at NASA. TS/SCI cleared. Previously at Raytheon; a U.S. Army veteran. Serves on the Y Combinator board. A decade securing systems where the cost of a breach is measured in lives, not dashboards.

NASA
Raytheon
U.S. Army
YC Board
HR

H. Raef

Co-founder · CTO

Security engineering at Microsoft, Wiz, and Google. Has built cloud security platforms that protect tens of thousands of enterprise environments. Came to Vern Labs to secure the substrate the next decade of software will actually run on.

Microsoft
Wiz
Google
§ 011
Trust
Security posture · compliance
Attestation
SOC 2 Type II
In progress · Q2 2026
Deployment
Self-hostable
Your VPC · full control
Isolation
Air-gap ready
Defense · classified
Data policy
Zero retention
Opt-in telemetry only
§ 012
FAQ
Questions from operators and buyers

If it isn't here, ask us directly.

Send a question →
01

How is Vern Labs different from traditional security tools?

Traditional tools inspect network traffic and code. Vern Labs inspects AI behavior — prompts, outputs, tool calls, agent actions — at runtime. Our products are designed for systems that reason and act autonomously, not static software.
02

Do I need to deploy all three products?

No. Intertrace, Ghostline, and Blackbox are independent. Most teams start with one — typically Intertrace for runtime inspection or Blackbox for pre-launch testing — and expand from there.
03

Which AI providers does Vern Labs support?

Intertrace operates as a provider-agnostic gateway and supports major LLM providers out of the box. Ghostline integrates at the agent framework layer. Blackbox tests any model, agent, or AI app with an accessible interface.
04

What about latency?

Intertrace adds sub-20ms overhead at the median. Policy enforcement happens inline, in parallel with the provider call. Streaming is fully supported.
05

Can I self-host?

Yes. Enterprise customers can deploy Vern Labs entirely within their own VPC, including air-gapped environments for classified workloads.
06

Is Vern Labs suitable for regulated industries?

Yes. Our architecture is built for defense, finance, and healthcare — with full audit logging, scoped data handling, and support for air-gapped deployments.
§ 013
Contact
Direct line — response within 4 hours

Build with AI.
Ship with control.

Talk to Vern Labs about securing your AI systems before they become your next attack surface.

Direct intake · VL-CT-001
Encrypted in transit
SYSTEM ONLINE
Avg. response · < 4h
Filing
Products
Revision
Issue
VL-PRD-100
3 in service
0.4.2
17 APR 2026
§ 100
Product index

Three products.
One control plane.

Intertrace, Ghostline, and Blackbox are independent products with a shared control plane. Deploy one. Deploy all three. They are designed to work together but do not require each other.

§ 101
Intertrace
Runtime security gateway · in service

Inline inspection of every AI transaction.

Intertrace is a provider-agnostic gateway that sits between your application and any LLM, embedding model, retrieval layer, or tool server. Every request flows through a policy engine that inspects prompts, outputs, tool calls, and retrieved context against your rule set at runtime.

It is deployed as a single stateless container. Policy changes propagate in under two seconds. Streaming responses are fully supported with inline policy checks that run in parallel with the provider call — adding roughly 18ms at the median.

p50 overhead
~18ms · streaming preserved
Policy engine
Rego + custom detectors
Detectors
Injection · PII · jailbreak · exfil
Integrations
Python · Node · Go · HTTP proxy
Audit
Immutable · SIEM export
vern/intertrace.ts
SDK · 0.4.2
// Drop-in proxy — every call is inspected at runtime.
import { Intertrace } from "@vern/intertrace";

const vern = new Intertrace({
  endpoint: process.env.VERN_ENDPOINT,
  policy:   "prod.strict",
  on: {
    block: (evt) => audit.push(evt),
  },
});

// Your existing call stays the same.
const res = await vern.openai.chat.completions.create({
  model: "gpt-5-reasoning",
  messages,
  tools,
});

// Blocked calls surface as structured signals.
if (res.vern.action === "block") {
  log.warn({ reason: res.vern.rule });
}
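The parallel-inspection model described above can be sketched in plain TypeScript. This is control flow only, not the SDK surface: the `inspect` detector, the `Verdict` shape, and `guardedCall` are illustrative stand-ins.

```typescript
// Sketch of inline inspection running alongside the provider call.
// Illustrative only — not the Intertrace API, just the flow it implies.
type Verdict = { action: "allow" | "block"; rule?: string };

async function inspect(payload: string): Promise<Verdict> {
  // Stand-in detector: flag an obvious direct-injection pattern.
  return /ignore previous instructions/i.test(payload)
    ? { action: "block", rule: "injection.direct" }
    : { action: "allow" };
}

async function guardedCall(
  prompt: string,
  provider: (p: string) => Promise<string>,
): Promise<{ verdict: Verdict; output?: string }> {
  // Inspection and dispatch start together; the response is only
  // released to the caller if the verdict allows it.
  const [verdict, output] = await Promise.all([inspect(prompt), provider(prompt)]);
  return verdict.action === "block" ? { verdict } : { verdict, output };
}
```

The point of the overlap is that the policy check costs wall-clock time only when it is slower than the provider round trip, which it rarely is.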
What gets inspected
  • · user prompts
  • · system prompts
  • · retrieved context
  • · tool arguments
  • · model outputs
Actions available
  • · block
  • · redact
  • · transform
  • · escalate
  • · allow + log
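The five actions above map naturally onto a discriminated union. The types and the `apply` helper below are a hypothetical sketch of that model, not the published SDK types.

```typescript
// Hypothetical shape of a policy decision — one variant per action.
type Decision =
  | { action: "block"; rule: string }
  | { action: "redact"; spans: Array<[number, number]> }
  | { action: "transform"; output: string }
  | { action: "escalate"; queue: string }
  | { action: "allow"; log: boolean };

function apply(decision: Decision, payload: string): string | null {
  switch (decision.action) {
    case "block":
      return null; // nothing reaches the client
    case "redact":
      // Mask each flagged span with a same-width block, so earlier
      // replacements never shift the indices of later spans.
      return decision.spans.reduce(
        (s, [a, b]) => s.slice(0, a) + "█".repeat(b - a) + s.slice(b),
        payload,
      );
    case "transform":
      return decision.output;
    case "escalate":
      return null; // held until a human approves
    case "allow":
      return payload;
  }
}
```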
§ 102
Ghostline
Agent authorization · in service
Scope graph · agent.ops.runner
State · gated
BOUNDARY · TENANT SCOPE · READ SCOPE · ACT ROOT ACTION · DELETE AWAITING APPROVAL QUEUED · 00:12
4 of 12 scopes granted
1 action awaiting approval

Authorization at the action layer.

Ghostline issues scoped capability tokens for every tool, resource, and external call an agent can make. High-impact actions route through approval queues where a human or policy evaluates each request before execution.

Tokens use the Biscuit format with custom claim extensions. Revocation is real-time and cascading — pulling a token invalidates every derived scope in flight across every running agent.
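Cascading revocation falls out of one invariant: a token is valid only while every ancestor in its derivation chain is live. A minimal sketch, assuming an in-memory store (Ghostline's actual tokens carry the chain in signed Biscuit attenuations; `TokenStore` here is illustrative):

```typescript
// Sketch of cascading revocation over a parent-child token graph.
class TokenStore {
  private parents = new Map<string, string | null>();
  private revoked = new Set<string>();

  issue(id: string, parent: string | null = null): void {
    this.parents.set(id, parent);
  }

  revoke(id: string): void {
    this.revoked.add(id);
  }

  // Valid only if neither the token nor any ancestor is revoked —
  // pulling a root invalidates every derived scope in flight.
  isValid(id: string): boolean {
    for (let cur: string | null | undefined = id; cur != null; cur = this.parents.get(cur)) {
      if (this.revoked.has(cur)) return false;
      if (!this.parents.has(cur)) return false; // unknown token
    }
    return true;
  }
}
```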

Token format
Biscuit / custom claims
Approval modes
Inline · async · policy-only
Frameworks
LangChain · LangGraph · MCP · custom
Revocation
Real-time · cascading
Audit
Append-only ledger
§ 103
Blackbox
Adversarial testing harness · in service

Stress your AI before adversaries do.

Blackbox runs continuous adversarial evaluations against copilots, agents, and AI applications. Every run produces severity-ranked findings, reproducible transcripts, and an exportable coverage report ready for audit and compliance review.

The suite includes OWASP LLM Top 10 plus Vern Labs' proprietary attack set, updated weekly by the research team. Every vector is versioned and deterministic so you can compare results run-to-run.
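Seeded determinism is simpler than it sounds: drive every random choice in the run from one PRNG seeded by the run ID, and the run replays exactly. A sketch using mulberry32 as a stand-in generator (the harness's actual RNG is an implementation detail):

```typescript
// Deterministic run planning: same seed, same vector order, replayable.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function planRun(vectors: string[], seed: number): string[] {
  const rand = mulberry32(seed);
  const order = [...vectors];
  // Fisher-Yates shuffle driven by the seeded generator.
  for (let i = order.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  return order;
}
```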

Suite
OWASP LLM Top 10 + Vern custom
Vectors
140+ · updated weekly
Runs
Pre-launch · scheduled · on PR
Reports
PDF · JSON · SARIF
Determinism
Seeded · replayable
Run · blackbox-4182
Coverage report
CATEGORY             PASS  FAIL  COVERAGE
────────────────────────────────────────────────────────
injection · direct     18     2  ██████████████░░   90%
injection · indirect   11     3  █████████████░░░   79%
jailbreak · persona    14     1  ███████████████░   93%
pii · exfil             9     0  ████████████████  100%
tool · misuse           7     5  █████████░░░░░░░   58%
privilege · abuse      12     2  ██████████████░░   86%
data · leak            15     0  ████████████████  100%
────────────────────────────────────────────────────────
TOTAL                  86    13  OVERALL COVERAGE   87%
Findings
13 · 4 high
Runtime
6m 44s
Report
report-4182.pdf
Illustrative — replace with your own surface
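Reading the report: the overall figure is inferred from the TOTAL row as pass / (pass + fail), rounded to the nearest percent. The helper below is just that arithmetic:

```typescript
// Coverage as implied by the TOTAL row: pass / (pass + fail), rounded.
type Row = { pass: number; fail: number };

function coverage(rows: Row[]): number {
  const pass = rows.reduce((n, r) => n + r.pass, 0);
  const fail = rows.reduce((n, r) => n + r.fail, 0);
  return Math.round((100 * pass) / (pass + fail));
}
```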
§ 104
Deployment
Typical engagement patterns

Where Vern Labs fits in the stack.

A
Customer-facing LLM apps
Inline content policy, PII redaction, jailbreak detection. Block unsafe output before it reaches an end user. Intertrace primary.
B
Internal autonomous agents
Scope every tool and resource call. Gate destructive actions behind human approval. Keep a replayable audit log. Ghostline primary, Intertrace secondary.
C
Pre-launch & continuous evaluation
Run adversarial tests on every model and agent before release. Prove coverage to compliance. Catch regressions on every PR. Blackbox primary.
D
Regulated & classified workloads
Air-gapped deployment, full audit ledger, FedRAMP-aligned architecture. Defense, finance, and healthcare use cases. All three products, self-hosted.
Filing
Version
Footprint
Issue
VL-ARC-200
0.4.2
< 300MB
17 APR 2026
§ 200
Architecture overview

Security infrastructure,
deployed on your terms.

Vern Labs runs as a single stateless container that you deploy inside your VPC, on-premises, or as a fully air-gapped instance. Everything about the architecture is designed around three constraints: low latency, no data retention, and no surprise dependencies.

§ 201
Topology
System diagram — control plane + data plane

A single control plane for three layers of defense.

Vern Labs sits between your application and the models, agents, and tools it depends on. Every surface is observable, scopable, and testable from one place.

Deploy
VPC · hybrid · cloud · air-gap
Latency
p50 18ms · p99 45ms
Observability
OTEL · SIEM · S3 audit
Footprint
1 container · < 300MB
HA
Stateless · horizontal
FIG 201.01 — System diagram
scale · 1:1
            ┌─────────────────────────────────────────────────────────────┐
            │                  APPLICATION  LAYER                         │
            │   copilots  ·  internal agents  ·  workflows  ·  tools      │
            └──────────────────────────┬──────────────────────────────────┘
                                       │  requests / streams
 ──────────────────────────────────────────────────────────────────────────
                      VERN  LABS  CONTROL  PLANE                        
                                                                       
    [01] INTERTRACE   ─ inline inspection          ─ policy engine  
                        prompts · outputs · tools                     
                                                                       
    [02] GHOSTLINE    ─ capability tokens           ─ approval gate  
                        scope per agent / per tool                    
                                                                       
    [03] BLACKBOX     ─ adversarial runs            ─ coverage rpt.  
                        pre-launch · continuous                        
                                                                       
    ──────────────────────  AUDIT LEDGER  ──────────────────────       
              append-only  ·  signed  ·  SIEM export                   
 ──────────────────────────────────────────────────────────────────────────
                                       │
                                       ▼
            ┌─────────────────────────────────────────────────────────────┐
            │              MODELS  ·  AGENTS  ·  TOOLS                    │
            │         OpenAI · Anthropic · open weights · MCP             │
            └─────────────────────────────────────────────────────────────┘
Deploy
Docker · Helm · Terraform
Runtime
Rust core · Node SDK
Config
GitOps · yaml · env
§ 202
Deployment
Supported topologies

Four deployment modes. Same feature set.

01 · Cloud

Managed SaaS

Fastest path to production. Vern Labs operates the infrastructure; your data stays in our US / EU regions.

  • · Zero ops overhead
  • · 99.9% uptime SLA
  • · SOC 2 hosted
02 · Hybrid

Control plane + local data

Vern operates the control plane; your LLM traffic and audit data never leaves your network.

  • · Data residency enforced
  • · Shared policy surface
  • · Low egress overhead
03 · VPC

Self-hosted, your cloud

Full Vern stack in your AWS, GCP, or Azure VPC. Complete data control. Most common for finance and healthcare.

  • · Terraform modules
  • · Helm charts
  • · Your KMS keys
04 · AIR-GAP

Fully disconnected

Complete deployment inside classified or disconnected environments. No outbound dependencies.

  • · Offline updates
  • · Local CA trust chain
  • · FedRAMP / CMMC align
§ 203
Data flow
Request lifecycle

What happens when a request passes through Vern.

Every LLM call routed through Intertrace follows the same six-stage pipeline. Stages run in parallel where safe, and the entire hot path is under 20ms at p50 for text payloads under 8KB.

01
Ingress
TLS termination, request normalization, tenant identification.
02
Pre-flight inspection
Injection detectors, PII scanners, and policy evaluation run against the request payload before dispatch.
03
Capability check
Ghostline validates scoped capability tokens for any tool or agent operation embedded in the request.
04
Dispatch
Request forwarded to the upstream model or tool. Streaming responses begin immediately.
05
Output inspection
Output chunks inspected inline. Policy violations trigger block, redact, or transform actions before the client sees them.
06
Audit & export
Structured event written to the append-only ledger. Forwarded to your SIEM or object store.
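The six stages above compose as an async pipeline where any stage can short-circuit the hot path. A minimal sketch with two stand-in stages (`Ctx`, the stage names, and the keyword detector are all illustrative, not the gateway's internals):

```typescript
// The request lifecycle as a typed async pipeline. Illustrative only.
type Ctx = { payload: string; tenant?: string; blocked?: string };
type Stage = (ctx: Ctx) => Promise<Ctx>;

const pipeline =
  (stages: Stage[]) =>
  async (ctx: Ctx): Promise<Ctx> => {
    for (const stage of stages) {
      ctx = await stage(ctx);
      if (ctx.blocked) break; // a violation stops the hot path early
    }
    return ctx;
  };

// Stage 01 — ingress: attach tenant identity (hardcoded stand-in).
const ingress: Stage = async (ctx) => ({ ...ctx, tenant: "acme" });

// Stage 02 — pre-flight: a toy keyword detector standing in for the
// real injection / PII / exfil scanners.
const preflight: Stage = async (ctx) =>
  /password dump/i.test(ctx.payload) ? { ...ctx, blocked: "exfil.keyword" } : ctx;

const handle = pipeline([ingress, preflight /* , capability check, dispatch, … */]);
```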
§ 204
Trust
Security posture & compliance

Your data stays yours.

Vern Labs is built by people who've held TS/SCI clearances and shipped enterprise security at scale. Our architecture is designed around the assumption that you should never have to trust us with anything we don't strictly need.

Attestation
SOC 2 Type II
In progress · Q2 2026
Deployment
Self-hostable
Your VPC · full control
Isolation
Air-gap ready
Defense · classified
Data policy
Zero retention
Opt-in telemetry only
What we store by default
  • + Policy decisions (allow/block + rule id)
  • + Request metadata (timestamps, sizes, tenant)
  • + Hashed payload fingerprints
What we never store
  • × Raw prompt or response content
  • × User-identifiable data
  • × API keys or credentials
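One way a "hashed payload fingerprint" can work in practice: a one-way digest supports deduplication and cross-event correlation without retaining content. The exact scheme isn't specified above; this SHA-256 truncation is a sketch of the idea only.

```typescript
import { createHash } from "node:crypto";

// One-way fingerprint: correlates identical payloads across events
// without storing the payload itself. Truncation length is arbitrary.
function fingerprint(payload: string): string {
  return createHash("sha256").update(payload, "utf8").digest("hex").slice(0, 16);
}
```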
§ 205
Integrations
Upstream providers & frameworks

Provider-agnostic by design.

Vern Labs doesn't ship lock-in. Intertrace works with any LLM provider via the OpenAI-compatible interface, Anthropic's Messages API, or as a transparent HTTP proxy. Ghostline integrates with major agent frameworks or through a low-level policy API.

LLM
OpenAI
Anthropic
Google
AWS Bedrock
Azure OpenAI
Mistral
open weights
Agents
LangChain
LangGraph
AutoGen
CrewAI
MCP servers
custom frameworks
Observability
OTEL
Datadog
Splunk
Elastic
Sumo Logic
S3 / GCS audit
Identity
Okta
Azure AD
SAML 2.0
OIDC
SCIM
Secrets
AWS KMS
GCP KMS
HashiCorp Vault
HSM (PKCS#11)
Ops
Kubernetes
Docker
Terraform
Helm
GitOps
ArgoCD
— Next

Want the architecture brief with full deployment details?

Filing
Publications
Open source
Issue
VL-RES-300
18 · ongoing
4 repos
17 APR 2026
§ 300
Research publications

A lab,
not a vendor.

Vern Labs publishes research on how AI systems fail in the wild and open-sources primitives that help teams defend against those failures. Everything we learn from our own deployments and red team engagements becomes a public artifact — papers, notes, benchmark sets, and reference implementations.

§ 304
Focus areas
Active research directions

What we work on.

A
Runtime risk models
Behavioral classifiers that score prompts, outputs, and tool calls as they happen — without relying on the model's own judgment.
B
Agent control primitives
Scoped capability grants, approval queues, and cascading revocation for autonomous systems that reason and act.
C
Adversarial simulation
Continuous stress testing methodology that produces measurable, reproducible coverage reports suitable for audit.
D
Policy verification
Formal methods for proving properties of LLM security policies where the problem is tractable; empirical bounds where it isn't.
E
Provenance tracking
Following every token of context back to its source. Treating LLM context as a supply chain, with all the guarantees that implies.
— Want updates?

Research drops, first.

Papers, notes, and open-source releases — no other mail. You can unsubscribe any time.

Research mailing list
Filing
Tiers
Contract
Issue
VL-PRC-400
3
Annual · monthly
17 APR 2026
§ 400
Engagement terms

Start with a pilot.
Scale with your AI footprint.

Vern Labs is priced to match the way teams actually adopt AI security — starting with a single workload and expanding as the risk surface grows. Pilots are free. Production is usage-based. Enterprise is negotiated.

§ 401
Tiers
Pilot · Production · Enterprise
Tier / 01

Pilot

30 days
$0

For teams evaluating a single product on a bounded workload.

Included
  • + 100k requests / mo
  • + One product of choice
  • + Cloud deployment
  • + Email support
  • + Full API access
  • + SDK for Python, Node, Go
Start pilot →
MOST TEAMS
Tier / 02

Production

Annual
Custom

Usage-based pricing. Volume discounts kick in at scale.

Everything in Pilot, plus
  • + Unlimited requests
  • + All three products
  • + Self-host or cloud
  • + Slack channel · 4hr SLA
  • + SOC 2 reports · MSA
  • + SSO (SAML / OIDC)
  • + Audit log export
  • + 99.9% uptime SLA
Talk to sales
Tier / 03

Enterprise

● RESTRICTED
Contact

Regulated, defense, and air-gapped environments. Negotiated terms.

Everything in Production, plus
  • + Air-gap · on-prem
  • + FedRAMP · CMMC align
  • + Dedicated SA
  • + Custom SLAs
  • + Managed red team program
  • + White-glove onboarding
  • + 24/7 incident response
  • + Single-tenant option
Request brief →
§ 402
Feature matrix
Capabilities by tier
                                    PILOT    PRODUCTION    ENTERPRISE
───────────────────────────────────────────────────────────────────────
request volume                      100k/mo   unlimited    unlimited
products included                   one       all three    all three
deployment — cloud                   ●             ●            ●
deployment — self-host (VPC)         ○             ●            ●
deployment — air-gap                 ○             ○            ●
policy rules                        standard  + custom      custom
adversarial test suite              OWASP     + Vern custom + classified
support channel                     email     slack 4h SLA  24/7 IR
SOC 2 / MSA documentation            ○             ●            ●
FedRAMP · CMMC alignment             ○             ○            ●
dedicated solutions architect        ○             ○            ●
single-tenant deployment             ○             ○            ●
───────────────────────────────────────────────────────────────────────
contract length                     30 days   12 mo · mo    custom
● available ○ not included — contact sales for details
§ 403
FAQ
Common commercial questions

Pricing questions, answered.

Talk to sales →
01

How is Production priced?

Production is usage-based on a per-request volume tier with a fixed monthly platform fee. Volume discounts apply at 10M, 50M, and 250M requests per month. Annual contracts get roughly 20% lower per-request rates versus month-to-month.
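To make the structure concrete: the breakpoints (10M / 50M / 250M) and the roughly 20% annual discount come from the answer above, but the rates below are invented placeholders, not Vern Labs pricing.

```typescript
// Hypothetical rate card — breakpoints from the FAQ, dollar figures made up.
const BREAKPOINTS = [
  { atLeast: 250_000_000, rate: 0.4 }, // $ per 1k requests (placeholder)
  { atLeast: 50_000_000, rate: 0.6 },
  { atLeast: 10_000_000, rate: 0.8 },
  { atLeast: 0, rate: 1.0 },
];

function monthlyRate(requestsPerMonth: number, annual: boolean): number {
  const { rate } = BREAKPOINTS.find((b) => requestsPerMonth >= b.atLeast)!;
  return annual ? rate * 0.8 : rate; // annual ≈ 20% lower per-request
}
```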
02

Can I upgrade from Pilot mid-contract?

Yes. Pilots are designed to convert. Once you're ready, we roll your workload to Production within a day. No migration. No re-integration.
03

What counts as a "request"?

A complete inspected transaction — one request and its associated response, including any streamed chunks. Internal tool calls spawned by a single user request don't multiply your bill. Blackbox runs and Ghostline token issuance are not counted.
04

Do you offer academic or non-profit discounts?

Yes. Academic researchers and accredited non-profits get 80% off Production pricing with a lightweight application. Contact us with details on your work.
05

How does Enterprise pricing work?

Enterprise is a fixed-fee annual contract with commitments on both sides. Pricing reflects the deployment complexity (air-gap, managed red team, single-tenant) and the SLA requirements. Contracts typically run 1–3 years.
06

Is there a free tier after the pilot?

No. Vern Labs is infrastructure your users depend on — we don't operate it as a side project. However, our open-source tooling (vern/probes, vern/trace-cli) is and always will be free under the MIT license.
— Next

Let's scope what this looks like for your team.

Filing
Founded
HQ
Issue
VL-LAB-500
2024
United States
17 APR 2026
§ 500
The lab

Built by engineers
who've shipped security at scale.

Vern Labs is a cybersecurity research and product company based in the United States. We were founded in 2024 by operators with backgrounds in federal cybersecurity, enterprise cloud security, and applied AI research. The mission is simple: build the security infrastructure that the next decade of software will actually depend on.

§ 501
Mission
Why we exist
01

AI systems are becoming the control surface for critical infrastructure. They are not yet secured the way control surfaces need to be.

Traditional security tools were designed for software that does what you tell it to. Modern AI systems reason, retrieve, call tools, and act autonomously — and the attack surface that creates is new, wide, and actively being exploited. We think the next decade of software will run on this substrate. Someone needs to build the security layer for it.

That's the work.

§ 502
Principles
How we work
01
Research drives product
Every product we ship starts as a research question. We publish what we learn — papers, benchmarks, and open primitives — because a stronger ecosystem makes stronger products.
02
Don't be the single point of failure
We design for degraded operation. If Vern Labs goes down, your AI doesn't. Policy evaluation has graceful fallback. Observability is local-first. You can pull the plug on our infrastructure and your systems keep running.
03
Data minimization is a feature
We don't want your prompts. We don't want your outputs. We want policy decisions and metadata that let us make the product better. That discipline shapes every design choice.
04
Write it down
Our work product is documents. Specs, threat models, architecture reviews, postmortems. We ship the writing alongside the software. If a customer asks how something works, we send them the doc.
05
Hard problems, measured answers
We don't make security claims we can't measure. Every assertion comes with a benchmark, a reproducible test, or a clear scope statement about what we haven't verified yet.
§ 503
Personnel
Founding team
SO

Sam Oyan

Co-founder · CEO

Cybersecurity at NASA, where he held TS/SCI clearance and worked on defensive systems for flight-critical and classified workloads. Previously at Raytheon, and a U.S. Army veteran. Serves on the Y Combinator board.

Sam started Vern Labs because he spent a decade watching defense and enterprise teams treat AI like just another API — when the actual threat model is closer to adding a new autonomous agent to an organization.

NASA
Raytheon
U.S. Army
YC Board
TS/SCI cleared
HR

H. Raef

Co-founder · CTO

Security engineering at Microsoft, Wiz, and Google. Has shipped cloud security platforms that protect tens of thousands of enterprise environments and internal production systems at hyperscale.

Joined Vern Labs to solve the problem the next decade of software will actually run on. Leads the research team and owns the architecture of the Vern control plane.

Microsoft
Wiz
Google
§ 504
Advisors
Technical & operational guidance

The people in the room when we make hard calls.

Advisor / 01
Security
Former CISO, Fortune 50
Advisor / 02
AI research
Principal researcher, major AI lab
Advisor / 03
Defense
Retired senior DoD official
Advisor / 04
Go-to-market
Former VP Sales, security unicorn
Named advisors disclosed to verified prospects under NDA
§ 505
Backed by
Investors & institutional alumni
— Built with alumni from
Y Combinator · NASA · Microsoft · Wiz · Raytheon · Google
§ 506
Careers
Open positions

Come build this.

Small team, high stakes, serious work. We hire exclusively for calibration, taste, and raw technical ability. We pay top-of-market. We ship in writing.

What you get
  • + Top-decile comp, cash + equity
  • + Remote-first · quarterly offsites
  • + 4 weeks paid time off, mandatory
  • + $5k / yr learning & hardware
  • + Fully covered health & dental
Don't see a match? We always want to hear from exceptional people. Send a note →
§ 507
Facts
Company metadata
Founded
2024
Headquarters
United States
Team
14 people
Open roles
5
Status
Operational
— Next

Work on security that actually matters.

Filing
Response
Channel
Issue
VL-CT-600
< 4h · business
Encrypted in transit
17 APR 2026
§ 600
Direct line

Talk to Vern Labs.

A real person on our team reviews every inbound. Most messages get a response within four business hours. For urgent security matters, use the hotline below.

§ 601
Intake
Primary contact form — route to sales & engineering

What are you securing?

Short notes are fine — a sentence on what you're building and where you're stuck is enough to route you to the right person. We'll come back with a 20-minute slot and a tailored reading list before the call.

Typical response
< 4 business hours
First meeting
20 min · technical brief
Pilot start
Typically 5–10 days
NDA
Available on request
Direct intake · VL-CT-001
Encrypted in transit
SYSTEM ONLINE
Avg. response · < 4h
§ 602
Channels
Direct routes for specific inquiries

Skip the form. Go direct.

Use the right channel for your question and you'll get a faster, better answer.

Channel / 01

Sales & pilots

Pricing, procurement, pilots, reseller questions, volume deals.

sales@vernlabs.com
Channel / 02

Technical briefs

Architecture deep-dives, deployment planning, integration questions.

engineering@vernlabs.com
Channel / 03

Research

Paper collaborations, benchmark contributions, academic partnerships.

research@vernlabs.com
Channel / 04

Press & media

Interviews, quotes, briefings, company news.

press@vernlabs.com
§ 603
Hotline
Responsible disclosure & incident reporting
● RESTRICTED CHANNEL

Security issues.
Disclosed responsibly.

If you've identified a vulnerability in a Vern Labs product, deployment, or research artifact — we want to hear from you first, and we'll work with you to coordinate disclosure.

We operate a formal responsible disclosure program and publicly acknowledge researchers with permission. Critical reports get a response within 24 hours, any day of the week.

● HOTLINE · VL-SEC-001
24h · CRITICAL
Email
security@vernlabs.com
PGP fingerprint
4A7C 9F21 DE08 B114 E3A2
6E91 5CD4 8B37 1F80 2D6C
Expected response
  • · Critical — 24h, any day
  • · High — 2 business days
  • · Medium / Low — 5 business days
§ 604
FAQ
Common questions before you reach out

What to expect.

01

How fast will you actually respond?

The first response — acknowledging receipt and routing you to the right person — is almost always within 4 business hours. A substantive technical response typically follows within a business day. Security reports get priority routing.
02

Do I need to commit to anything to start a pilot?

No. Pilots are 30 days, free, and non-binding. Most pilots start within a week of the first call. If it doesn't fit, we part as friends — and you can keep using our open-source tooling regardless.
03

Can we sign an NDA first?

Absolutely. We have a standard mutual NDA we can send on request, or we'll happily counter-sign yours. Enterprise customers in regulated or classified environments often start here.
04

I'm a researcher. How do we collaborate?

Email research@vernlabs.com directly. We sponsor a small number of academic collaborations each year, contribute benchmarks to the community, and offer heavy academic discounts on Production-tier access.
05

Do you meet in person?

For enterprise and defense engagements, yes — our solutions team travels. For most other evaluations, we default to video calls. It's faster for you and saves your procurement team a calendar invite.
— Or, if you prefer

Book 20 minutes. Decide the rest on the call.