Methodology

How We Measure AI Memory

A semantic physics engine that reveals how AI models perceive and categorize brands.

Step 1

We Ask 11 AI Models

"What categories does this brand compete in?"

GPT-4o, Claude, Gemini, Perplexity, DeepSeek, and 6 more. Each has a different training corpus. Each sees the world differently. Each response is a data point.
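
In practice, the fan-out looks something like the sketch below. The model identifiers and the query_model wrapper are illustrative, not our exact production API calls:

```python
# A minimal sketch of the fan-out step. `query_model` is a hypothetical
# callable you supply to wrap each provider's API; the identifiers below
# are illustrative, not exact API model names.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

MODELS = [
    "gpt-4o", "gpt-4o-mini", "claude-3.5-sonnet", "gemini-flash",
    "perplexity-sonar", "deepseek", "llama-3.1", "mistral",
    "command-r-plus", "grok", "j2-ultra",
]

PROMPT = 'What categories does the brand "{brand}" compete in?'

def collect_responses(brand: str,
                      query_model: Callable[[str, str], str]) -> dict[str, str]:
    """Send the same standardized prompt to every model in parallel."""
    prompt = PROMPT.format(brand=brand)
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {m: pool.submit(query_model, m, prompt) for m in MODELS}
    return {m: f.result() for m, f in futures.items()}
```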

Step 2

Consensus Forms Like Gravity

When 7+ models agree on a category, it becomes a "canonical doorway."

This is where customers find you. The more models agree, the stronger the gravity. The stronger the gravity, the more traffic flows through that door.
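
Counting that agreement is simple once each answer is parsed. A minimal sketch, assuming responses have already been normalized to canonical category names:

```python
from collections import Counter

def category_consensus(parsed: dict[str, list[str]]) -> Counter:
    """Tally how many models independently name each category.

    `parsed` maps model id -> categories extracted from its answer,
    already normalized against the canonical taxonomy.
    """
    tally = Counter()
    for categories in parsed.values():
        tally.update(set(categories))  # one vote per model per category
    return tally
```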

Step 3

Divergence Reveals Opportunity

When only 2-3 models see a category, that's a "niche doorway."

Less competition. Targeted reach. These are the keywords your competitors aren't bidding on. The categories where you can win with precision.
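
Putting Steps 2 and 3 together, doorway classification reduces to thresholds on the agreement count. Only the 7+ and 2-3 thresholds come from the methodology; the labels for the 1-model and 4-6-model bands below are our illustration:

```python
def classify_doorway(agreement: int) -> str:
    """Label a category by how many of the 11 models named it."""
    if agreement >= 7:
        return "canonical doorway"   # strong consensus (Step 2)
    if 2 <= agreement <= 3:
        return "niche doorway"       # divergence signal (Step 3)
    if agreement == 1:
        return "outlier"             # assumed label, not from the methodology
    return "emerging"                # 4-6 models: assumed label, consensus forming
```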

Step 4

We Watch the Field Every Week

AI perception isn't static. It drifts. We track it.

Every week, we re-query all 11 models for all 35,000+ brands. You see when categories emerge, when consensus shifts, when new doorways open.
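
Drift detection is a week-over-week diff of the consensus tallies. A hedged sketch, reusing the Counter snapshots produced in the consensus step:

```python
from collections import Counter

def diff_weeks(prev: Counter, curr: Counter) -> dict[str, tuple[int, int]]:
    """Report categories whose model-agreement count changed between weeks."""
    changed = {}
    for category in set(prev) | set(curr):
        before, after = prev.get(category, 0), curr.get(category, 0)
        if before != after:
            # e.g. a jump from 5 -> 7 means a new canonical doorway opened
            changed[category] = (before, after)
    return changed
```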

11 AI Models

35k+ Brands Tracked

Weekly Collection

60% Zero-Click Captured

Technical Details

Models Tracked

Mass market: GPT-4o-mini, Gemini Flash, DeepSeek, Llama 3.1, Mistral.
Premium: Claude 3.5 Sonnet, Cohere Command R+, Perplexity Sonar, xAI Grok, AI21 J2 Ultra.

Metrics

Citation Score (% of mentions), Ranking Position (1-15), Model Consensus (agreement count), Category Coverage (breadth), Kim Distance (semantic proximity to canonical ontology).
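
The simpler metrics follow directly from the collected responses. A sketch of three of them; the exact production formulas, and Kim Distance in particular, are not fully specified here:

```python
def citation_score(mentions: int, total_responses: int) -> float:
    """Citation Score: % of model responses that mention the brand."""
    return 100.0 * mentions / total_responses if total_responses else 0.0

def model_consensus(parsed: dict[str, list[str]], category: str) -> int:
    """Model Consensus: number of models whose answer includes the category."""
    return sum(category in cats for cats in parsed.values())

def category_coverage(parsed: dict[str, list[str]]) -> int:
    """Category Coverage: distinct categories named by any model."""
    return len({c for cats in parsed.values() for c in cats})
```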

Avoiding Bias

Standardized prompts across all models. Canonical taxonomy for category normalization. Domain canonicalization for brand tracking. Multi-model triangulation.
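
Two of these controls are easy to show concretely. The alias table below is illustrative, not our real taxonomy:

```python
from urllib.parse import urlparse

# Illustrative alias table; the real canonical taxonomy is far larger.
ALIASES = {
    "athletic footwear": "sneakers",
    "sport shoes": "sneakers",
    "trainers": "sneakers",
}

def normalize_category(raw: str) -> str:
    """Map a model's free-text category onto the canonical taxonomy."""
    key = raw.strip().lower()
    return ALIASES.get(key, key)

def canonical_domain(url: str) -> str:
    """Reduce any URL form to one canonical domain per brand."""
    host = urlparse(url if "://" in url else f"https://{url}").netloc or url
    return host.lower().removeprefix("www.")
```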

Limitations

Model updates may cause shifts unrelated to real market changes. Results are sensitive to prompt phrasing. Category boundaries evolve over time. Weekly sampling may miss short-lived changes.

See Your Brand's AI Memory

Enter any domain. See how AI categorizes it. Find your doorways.