Methodology
A semantic physics engine that reveals how AI models perceive and categorize brands.
"What categories does this brand compete in?"
GPT-4o, Claude, Gemini, Perplexity, DeepSeek, and 6 more. Each has a different training corpus. Each sees the world differently. Each response is a data point.
When 7+ models agree on a category, it becomes a "canonical doorway."
This is where customers find you. The more models agree, the stronger the gravity. The stronger the gravity, the more traffic flows through that door.
When only 2-3 models see a category, that's a "niche doorway."
Less competition. Targeted reach. These are the keywords your competitors aren't bidding on. The categories where you can win with precision.
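The two counting rules above can be sketched as a small Python function. The function name, thresholds as parameters, and the per-model response format are illustrative assumptions, not the engine's actual code.

```python
from collections import Counter

def classify_doorways(model_categories, canonical_min=7, niche_range=(2, 3)):
    """Label each category by how many models agree on it.

    model_categories: {model_name: [category, ...]} -- an assumed shape for
    each model's answer to "What categories does this brand compete in?"
    """
    votes = Counter(cat for cats in model_categories.values() for cat in set(cats))
    doorways = {}
    for category, count in votes.items():
        if count >= canonical_min:
            doorways[category] = "canonical"   # 7+ models agree
        elif niche_range[0] <= count <= niche_range[1]:
            doorways[category] = "niche"       # only 2-3 models see it
    return doorways

# Illustrative responses from 11 models for one brand.
responses = {f"model_{i}": ["crm"] for i in range(8)}   # 8 models say "crm"
responses["model_8"] = ["email marketing"]
responses["model_9"] = ["email marketing"]
responses["model_10"] = ["erp"]                          # a single vote: neither type
print(classify_doorways(responses))
# -> {'crm': 'canonical', 'email marketing': 'niche'}
```

A category seen by only one model falls between the two buckets and is dropped, which matches the thresholds described above.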
AI perception isn't static. It drifts. We track it.
Every week, we re-query all 11 models for all 35,000 brands. You see when categories emerge, when consensus shifts, when new doorways open.
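Week-over-week drift detection can be sketched as a comparison of two consensus snapshots. The snapshot format (category mapped to agreement count) and the event names are assumptions for illustration.

```python
def consensus_shifts(last_week, this_week, canonical_min=7):
    """Flag week-over-week changes in category consensus.

    Inputs map category -> number of models agreeing (assumed snapshot shape).
    """
    events = []
    for cat in sorted(set(last_week) | set(this_week)):
        before, after = last_week.get(cat, 0), this_week.get(cat, 0)
        if before == 0 and after > 0:
            events.append((cat, "emerged"))
        elif before > 0 and after == 0:
            events.append((cat, "disappeared"))
        elif before < canonical_min <= after:
            events.append((cat, "became canonical"))
        elif after < canonical_min <= before:
            events.append((cat, "lost canonical status"))
    return events

last = {"crm": 6, "erp": 3}
this = {"crm": 8, "payroll": 2}
print(consensus_shifts(last, this))
# -> [('crm', 'became canonical'), ('erp', 'disappeared'), ('payroll', 'emerged')]
```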
11 AI models · 35k+ brands tracked · Weekly collection · 60% zero-click captured
Models. Mass market: GPT-4o-mini, Gemini Flash, DeepSeek, Llama 3.1, Mistral. Premium: Claude 3.5 Sonnet, Cohere Command R+, Perplexity Sonar, xAI Grok, AI21 J2 Ultra.
Metrics. Citation Score (% of mentions), Ranking Position (1-15), Model Consensus (agreement count), Category Coverage (breadth), Kim Distance (semantic proximity to canonical ontology).
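Two of these metrics can be sketched directly from their parenthetical glosses. The formulas below are assumed interpretations (Citation Score as mentions over total responses, Model Consensus as an agreement count); Kim Distance is omitted because its definition isn't given here.

```python
def citation_score(mentions, total_responses):
    """Citation Score: share of responses mentioning the brand, as a percentage.
    (Assumed reading of '% of mentions'.)"""
    return 100.0 * mentions / total_responses

def model_consensus(model_categories, category):
    """Model Consensus: how many models place the brand in `category`.
    model_categories: {model_name: [category, ...]} -- assumed shape."""
    return sum(category in cats for cats in model_categories.values())

print(round(citation_score(6, 11), 1))  # -> 54.5 (mentioned by 6 of 11 models)
print(model_consensus({"a": ["crm"], "b": ["crm"], "c": ["erp"]}, "crm"))  # -> 2
```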
Controls. Standardized prompts across all models. Canonical taxonomy for category normalization. Domain canonicalization for brand tracking. Multi-model triangulation.
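Domain canonicalization, one of the controls above, can be sketched as follows. This is a minimal illustration using only the standard library; a production version would also consult the public suffix list to find the registrable domain.

```python
from urllib.parse import urlparse

def canonicalize_domain(url_or_domain):
    """Reduce a brand URL to a bare lowercase host so that
    'https://WWW.Example.com/about' and 'example.com' count as the same brand.
    Illustrative sketch only, not the engine's actual normalization."""
    s = url_or_domain.strip().lower()
    if "://" not in s:
        s = "https://" + s          # urlparse needs a scheme to find the host
    host = urlparse(s).hostname or ""
    return host[4:] if host.startswith("www.") else host

print(canonicalize_domain("https://WWW.Example.com/about"))  # -> example.com
print(canonicalize_domain("example.com/pricing"))            # -> example.com
```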
Limitations. Model updates can shift results independently of market changes. Outputs are sensitive to prompt phrasing. Category boundaries evolve over time. Weekly sampling may miss short-lived changes.
Enter any domain. See how AI categorizes it. Find your doorways.