A pre-launch CRO method

Pre-traffic CRO: test the page before you spend on traffic

Pre-traffic CRO is conversion optimization done before a landing page receives visitors. Instead of waiting weeks for an A/B test to reach statistical significance, pre-traffic CRO simulates how 50 different visitor archetypes process the page and identifies the specific failure modes each one would exit on. The output is a ranked list of fixes you can apply before you buy your first click.

At 1,000 monthly visitors and a 2% conversion rate, one A/B test takes roughly 28 months to reach significance. Pre-traffic CRO sidesteps the traffic problem entirely.
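The arithmetic behind that figure can be sketched with the standard two-proportion sample-size approximation. This is an illustrative calculation, not the article's own model; the detectable lift (25% relative), significance level (two-sided 0.05), and power (80%) are assumptions chosen to show where a number near 28 months comes from.

```python
from math import ceil

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.8416  # power = 0.80

def months_to_significance(baseline, relative_lift, monthly_visitors):
    """Normal-approximation sample size for a two-arm A/B test,
    then the months needed to collect it at the given traffic level."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n_per_arm = (Z_ALPHA + Z_BETA) ** 2 * variance / (p2 - p1) ** 2
    total_sample = 2 * ceil(n_per_arm)
    return total_sample / monthly_visitors

# 2% baseline, detecting a 25% relative lift, 1,000 visitors/month
print(round(months_to_significance(0.02, 0.25, 1_000), 1))  # → 27.6
```

Roughly 27.6 months under these assumptions, which is where a "28 months" figure lands; a smaller detectable lift pushes the duration out much further.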

Why pre-traffic CRO exists

Traditional CRO assumes you have meaningful traffic. The standard playbook is a heatmap, an A/B test, a survey, a session-recording review. Each of those is a measurement instrument. None of them work on a page that has not been launched yet, and most do not work well below 5,000 monthly visitors. Below that volume, A/B testing requires months per test to reach statistical significance, and most pages have more than one failure mode active at once.

Pre-traffic CRO is the diagnostic that fits the early stage of a page's life. It does not measure your conversion rate; it explains, archetype by archetype, which segments of your visitor mix are leaving and why. The fix list is generated before the first ad spend, before the first A/B test, and before the first organic visitor.

The two methods are complements: once a page has been pre-traffic-optimised and traffic flows in, A/B testing becomes the right next instrument. Pre-traffic CRO tells you what to test; A/B testing tells you which version wins.

How pre-traffic CRO works

A simulation engine generates 50 visitor archetypes, each carrying a calibrated cognitive profile: dual-process system 1 / system 2 weighting, regulatory focus (promotion vs prevention), advertising skepticism, persuasion knowledge, and a starting affect state. Each archetype processes the same page through its own lens and outputs a behavioral trace: where it engaged, where its attention faltered, and the specific reason it exited if it did.

Aggregate the 50 traces and the result is a Clarity Score (0-100), a per-archetype dropout map, and a ranked list of failure modes with concrete fixes. Each finding traces back to a specific page element and a specific reason: never "the copy is weak," but "the headline does not answer the buyer-intent query in the first 50 milliseconds."

The full methodology, including the dual-process model and the 200+ peer-reviewed papers the calibration draws from, is documented at /science.

When to reach for pre-traffic CRO

Here are four buyer profiles for which pre-traffic CRO produces meaningfully different output from a manual audit or an A/B test.

Pre-launch validation

You have built a new landing page but have not driven traffic yet. Pre-traffic CRO finds the failure modes before your first ad campaign, so your ad budget is not buying clicks to a broken page.

Agency client onboarding

An agency picking up a new account needs diagnosis in days, not weeks. Pre-traffic CRO produces the prioritized failure list before the kick-off call ends. No $2,000 manual audit cycle, no waiting on a heuristic review.

Low-traffic SaaS

Under 1,000 monthly visitors and a 2% conversion rate. A single A/B test takes 28 months to reach significance. Pre-traffic CRO sidesteps the traffic problem by simulating the visitor mix instead of measuring it.

Iterating faster than testing can keep up

Product teams shipping landing-page variants every two weeks cannot wait for statistical significance on each one. Pre-traffic CRO produces directional diagnosis on every revision so the testing budget goes to the bets that already cleared a clarity bar.

The failure modes pre-traffic CRO surfaces

The diagnostic distinguishes between fundamentally different reasons visitors leave. Each failure mode requires a different fix. Most landing pages have more than one active at once.

Headline failures

First-impression visual judgment forms in 50 milliseconds (Lindgaard et al., Behaviour & Information Technology, 2006). If the hierarchy puts the headline in competition with imagery or navigation, the judgment is over before reading starts.

Trust gaps

Most landing pages are clear but not believable. Visitors understand the product. They do not believe the proof applies to them. Generic testimonials are decoration; specific named outcomes are the work.

CTA commitment mismatch

A visitor from a cold Google search has been on the page for 30 seconds. Asking for a signup at that moment is asking for commitment before any value has been delivered.

Segment fragmentation

A landing page's conversion rate is an average across visitor types with different intent levels and decision stages. Optimising for one segment frequently degrades conversion for another. Diagnosing by segment before rewriting is the only way to know which fix moves the number.
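The averaging effect is easy to see with numbers. The traffic mix and rates below are hypothetical, chosen only to show how large opposing segment moves can net out to an almost-flat blended rate.

```python
def blended_rate(segments):
    """Conversion rate pooled across segments, each given as
    (visitor_count, segment_conversion_rate)."""
    visitors = sum(n for n, _ in segments)
    conversions = sum(n * rate for n, rate in segments)
    return conversions / visitors

# Hypothetical mix: 700 cold-search visitors at 1%, 300 high-intent at 6%
before = blended_rate([(700, 0.01), (300, 0.06)])
# A rewrite aimed at cold search lifts it to 2% but drops high-intent to 4%
after = blended_rate([(700, 0.02), (300, 0.04)])
print(f"{before:.1%} -> {after:.1%}")  # → 2.5% -> 2.6%
```

The headline number barely moves (+0.1 pp) while one segment doubled and another lost a third of its conversions, which is exactly why a per-segment diagnosis has to precede the rewrite.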

For the full nine-failure-mode taxonomy and how to diagnose which one is yours, see Why Is My Landing Page Not Converting?

Test a page before you spend on traffic

Free first scan, no login, results in under 15 minutes. The Clarity Score, the per-archetype dropout map, and the ranked failure list arrive together.