OEM Lens · 2026-W19 · 5 featured categories

AI is becoming the new category layer.

Each category settles where its AI vocabulary actually lives — pulled toward OEM, distributor, aggregator, or buyer-search language.

[Interactive quadrant field · axes: Distributor (↑) / Buyer-search (↓) / OEM (←) / Aggregator (→) · legend: HEDGE · HOLD · SEO · ARB · click any category to drill into its AI memory verdict]
Honest disclosure · 2026-W19

We detected 27 HEDGE conditions, 1 SEO play, and 0 ARBITRAGE conditions in 2026-W19.

That is the point: we do not manufacture opportunity when the evidence does not support it. An additional 79 categories are disclosed as INSUFFICIENT_DATA — panel composition too thin to classify cleanly. We surface them rather than hide them.

Where AI's vocabulary is anchored · 253 categories · 2026-W19


AI is the X-axis. We measure where each category's AI vocabulary lands in a reference frame spanned by OEM · DIST · AGG · SEARCH. Crossed with manufacturing-risk band, each slug lands in one of four quadrants. The verb in each card is your OEM-specific action — drill any card for the evidence.
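As a toy illustration of the crossing, here is a minimal sketch. The label strings and parameter names are assumptions, not the artifact's real schema; only the high-risk + fragmented → ARBITRAGE pairing is stated in the methodology caveats.

```python
# Toy sketch of the quadrant crossing: AI-vocabulary anchoring (X axis)
# crossed with the manufacturing-risk band (Y axis). Label strings are
# assumptions, not the real schema.
def quadrant(anchored: bool, high_risk: bool) -> str:
    """Place a slug in one of the four quadrants."""
    x = "anchored" if anchored else "fragmented"
    y = "high-risk" if high_risk else "low-risk"
    return f"{y}/{x}"

# Per the methodology caveats, ARBITRAGE candidates live in the
# high-risk + fragmented cell:
print(quadrant(anchored=False, high_risk=True))  # high-risk/fragmented
```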

OEM-lens verb legend · what each verdict means
INVEST_IN_BRAND_VOCAB
AI lost the category entirely · brand-vocabulary investment required.
RECLAIM
AI hears buyers, not you · reclaim category vocabulary from buyer-search language.
CAPTURE
AI hears the channel, not you · capture the category language the channel now owns.
DEFEND
AI hears you correctly · keep marketing investment intact.
INSUFFICIENT_DATA
panel too thin to classify · cohort substrate must expand.
▸ Methodology & honest caveats · what this measurement is and isn't
AI is the X-axis. Other channels are reference frames. We don't predict — we observe what AI is doing and which channel shaped it. Anyone can compute embedding distances; only we have the cohort substrate to attribute AI's vocabulary to a specific reference frame across 109+ categories.
  • Centroid is an average across all AI/OEM/DIST/AGG forms in our W19 cohort observation; it smooths over within-frame variance. A cohort-disagreement score is reported separately to surface AI-internal fragmentation.
  • The SEARCH centroid uses the top-10 buyer keywords per slug from DataForSEO; short-form text systematically embeds at lower cosine to long-form OEM text, so quantile-normalized distances are reported alongside raw cosines.
  • Anchoring thresholds (T_ANCHORED=0.60, T_FRAGMENTED=0.45) are heuristic and uncalibrated; they should be re-tuned against a held-out validation set post-demo.
  • Slugs with missing vantages (e.g. no DIST coverage in the cohort) are flagged data_completeness_flag.complete=false with the missing frames listed; they cannot be cleanly classified.
  • Quadrant assignment uses the scorecard MFG-RISK band as the orthogonal axis; the scorecard's composite weights are heuristic (per scorecard _meta).
  • Brand-level drill-down (per-brand AI-vs-frame attribution) is deferred to v2 of this artifact; it would power customer-specific /lens/oem callouts such as "you are anchored to DIST".
  • This is a DESCRIPTIVE per-slug attribution, not a causal one. It is useful for identifying ARBITRAGE candidates (high-risk + fragmented), not for forecasting.
  • top_search_keywords coverage is partial: 112 of 253 slugs (44%) have a DataForSEO Keyword Planner pull. The remaining slugs have no top_search_keywords field; this is absent-not-zero. Cost to fully populate: ~$7 (141 remaining slugs × $0.05).
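The centroid-and-threshold procedure described in the caveats can be sketched as follows. The embedding vectors and per-frame grouping are stand-ins, and the middle band's "MIXED" label is an assumption (only the ANCHORED and FRAGMENTED thresholds are named above).

```python
import numpy as np

T_ANCHORED, T_FRAGMENTED = 0.60, 0.45  # heuristic, uncalibrated (see caveats)

def centroid(vectors: np.ndarray) -> np.ndarray:
    """Mean embedding, unit-normalized; smooths over within-frame variance."""
    c = vectors.mean(axis=0)
    return c / np.linalg.norm(c)

def classify_anchor(ai_vecs: np.ndarray, frames: dict[str, np.ndarray]):
    """Cosine of the AI centroid against each reference-frame centroid."""
    ai = centroid(ai_vecs)
    sims = {name: float(centroid(v) @ ai) for name, v in frames.items()}
    best = max(sims, key=sims.get)
    if sims[best] >= T_ANCHORED:
        return "ANCHORED", best, sims[best]
    if sims[best] < T_FRAGMENTED:
        return "FRAGMENTED", None, sims[best]  # no frame explains AI
    return "MIXED", best, sims[best]           # in-between band (assumed label)
```

For example, an AI centroid sitting nearly on top of the OEM centroid classifies as ANCHORED to OEM, while one orthogonal to every frame classifies as FRAGMENTED.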
Same substrate, different operating brain

One measurement → four lenses.

What is HEDGE for a broker is CAPTURE for the OEM. The same quadrant verdict drives different action verbs depending on whose book you're reading from.
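The same-quadrant, different-verb idea can be sketched as a per-lens table. Only the broker HEDGE ↔ OEM CAPTURE pairing is stated on this page; the other rows and the quadrant keys are illustrative assumptions.

```python
# One quadrant verdict, read through two lenses. Only the HEDGE ↔ CAPTURE
# pairing is sourced from this page; other rows and keys are assumed.
LENS_VERBS: dict[str, dict[str, str]] = {
    "channel_anchored": {"broker": "HEDGE", "oem": "CAPTURE"},  # stated pairing
    "oem_anchored":     {"broker": "HOLD",  "oem": "DEFEND"},   # assumed
    "search_anchored":  {"broker": "SEO",   "oem": "RECLAIM"},  # assumed
}

def action(quadrant: str, lens: str) -> str:
    """Same measurement, lens-specific action verb."""
    return LENS_VERBS[quadrant][lens]

print(action("channel_anchored", "broker"), "vs", action("channel_anchored", "oem"))
# HEDGE vs CAPTURE
```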