Overview

Over the last year, we’ve been intensely focused on one job inside ArcAI: helping enterprise SEOs answer a deceptively simple question:

“Where is the real demand in AI Search?”

Today, we’re rolling out the latest version of our AI Search Demand Estimation model. It’s a more refined hybrid approach, built from hard-earned lessons, better calibration data, and a commitment to transparency that we think the entire industry needs right now.

But before I tell you what’s new, I want to walk you through how we got here, because the “how” is the part that builds trust.

Why Demand Matters (and Why It’s Suddenly Harder)

Search demand has always been one of the most valuable signals in SEO. It’s how you prioritize:

  • which topics to invest in
  • which content gaps are actually worth filling
  • what to defend vs. what to expand
  • where to place your bets when resources are limited

That part hasn’t changed.

What has changed is the shape of demand.

In AI Search, queries aren’t short head terms. They’re long, multi-part prompts. They can include context, constraints, follow-up questions, and even preferences.

So if you’re trying to size opportunity, you immediately hit two constraints that everyone in this space runs into:

  1. Prompts are longtail and messy, so you need a way to distill intent and roll it up into meaningful, aggregatable demand.
  2. AI engines don’t give you query volume, so you’re estimating without a native “GSC for ChatGPT/Perplexity/Gemini.”

And if you’re an enterprise SEO, you can’t work with “hand-wavy.” You need something you can defend.

 

Our First Approach: Clickstream at Massive Scale

We started where our data advantage is real.

We’ve been collecting clickstream data for years, and it includes 1B+ new user prompts, questions, and keywords each month. That’s an enormous sample of how people search and what they ask.

Our first model leaned heavily into that dataset:

  • ingest the raw prompt stream
  • apply distillation methods to extract the core “topic intent”
  • map those intents into keywords/topics that can be aggregated
  • generate demand estimates by topic

This gave us something important early: directional signal. And it helped us build the foundation of ArcAI’s prompt research workflows.
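To make the pipeline concrete, here is a minimal sketch of the ingest-distill-aggregate loop described above. The `distill_intent` function is a hypothetical stand-in: in a real system it would be an NLP model, not keyword matching, and the topic names are purely illustrative.

```python
from collections import Counter

def distill_intent(prompt: str) -> str:
    """Hypothetical distillation step: map a raw prompt to its core topic
    intent. A production system would use a model, not substring checks."""
    prompt = prompt.lower()
    if "running shoes" in prompt:
        return "running shoes"
    if "flight" in prompt:
        return "cheap flights"
    return "other"

def estimate_topic_demand(prompt_stream):
    """Roll raw prompts up into aggregatable demand counts by topic."""
    demand = Counter()
    for prompt in prompt_stream:
        demand[distill_intent(prompt)] += 1
    return demand

prompts = [
    "what are the best running shoes for flat feet under $150",
    "best running shoes for marathon training",
    "cheapest flight from chicago to denver next friday",
]
print(estimate_topic_demand(prompts))
# Counter({'running shoes': 2, 'cheap flights': 1})
```

The point of the sketch: many messy longtail prompts collapse into a small number of aggregatable topic buckets, which is what makes demand estimation possible at all.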

 

The Hard Lesson: “Big Data” Can Still Be Wrong

In the first half of 2025, we learned fast that clickstream-only approaches break down when you try to serve enterprise needs at scale.

When we rolled this out across 3,500+ brands, we saw gaps that mattered:

  • country-level coverage issues
  • uneven representation across industries
  • missing nuance in specialized topic areas
  • weird outliers that didn’t pass the sniff test

If you’ve ever looked at a platform showing millions of searches for an obscure query, and then compared it to what Google reports, you know exactly what I mean.

This is the dirty secret of clickstream in general:

Clickstream can be huge and still be biased.

Bias in panel composition, device coverage, region coverage, vertical skew… all of it shows up eventually.

And if your job is to guide millions of dollars of content investment, “eventually” is too late.

So we did what we always do when the data doesn’t hold up:

We rebuilt the approach.

The Pivot: A Hybrid Model Grounded in a Simple Premise 

One insight changed everything for us:

Just because ChatGPT exists
doesn’t mean people’s problems have changed.

The WHAT hasn’t changed. The HOW and WHERE have.

That premise became the anchor.

People still want the best running shoes, the cheapest flights, the right compliance framework, the top CRM, whatever your world is.

What’s changing is:

  • the interface (prompts vs. keywords)
  • the journey (follow-ups, deeper exploration)
  • the endpoints (AI answers vs. ten blue links)

So instead of trying to “invent” demand, we asked:

What if we use the most reliable base we have for intent demand (traditional search) and model the shift?

Google still represents the majority of global search activity, and it has something AI engines don’t provide: a mature demand baseline by topic.

So we moved to a hybrid approach:

  1. Use Google demand as the base signal for “what people want”
  2. Apply intelligent adjustments to estimate “how much of that is happening in AI Search”
  3. Use our prompt corpus to understand how AI queries cluster, roll up, and translate into topic demand

That’s the core model architecture that got us much closer to “enterprise-grade.”
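In rough terms, the hybrid estimate combines a Google baseline with adjustment layers. The sketch below is an assumption about the shape of the calculation, not the production model; the parameter names and values (a 12% AI-shift rate, a 1.1 vertical adjustment) are illustrative only.

```python
def ai_search_demand(google_volume: int,
                     ai_shift_rate: float,
                     topic_adjustment: float = 1.0) -> int:
    """Hypothetical hybrid estimate:
    baseline Google demand for a topic ("what people want")
      x  share of that intent moving to AI Search
      x  a per-topic calibration adjustment.
    """
    return round(google_volume * ai_shift_rate * topic_adjustment)

# e.g. a topic with 90,000 monthly Google searches, an assumed 12%
# AI-shift rate, and a 1.1 adjustment for an AI-heavy vertical:
print(ai_search_demand(90_000, 0.12, 1.1))  # 11880
```

The design choice here is the important part: the baseline carries the "what people want" signal, and everything AI-specific lives in the adjustment layers, where it can be recalibrated as better behavioral data arrives.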

The Breakthrough: Replacing Assumptions with Real Calibration Data

Then something big happened.

A National Bureau of Economic Research paper (“How people use ChatGPT”) introduced a level of behavioral grounding that simply wasn’t available before. In plain terms: it gave the market a better window into how AI tools are being used, not just that they’re being used.

For us, that mattered because it let us do what every model needs at some point: stop guessing and start calibrating.

So we plugged that data into our hybrid framework and updated the adjustment layers accordingly.
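"Stop guessing and start calibrating" can be illustrated with a toy example. All numbers below are invented for illustration; the idea is simply that an assumed AI-shift rate gets replaced by one fit to observed per-topic data.

```python
# Illustrative per-topic monthly demand (not real data).
google_baseline = {"running shoes": 90_000, "crm": 40_000, "flights": 200_000}

# Hypothetical observed AI Search demand per topic, e.g. derived from
# behavioral usage data rather than assumed up front.
observed_ai = {"running shoes": 10_500, "crm": 5_600, "flights": 22_000}

def calibrate_shift_rate(observed: dict, baseline: dict) -> float:
    """Fit a single AI-shift rate that best explains observed demand
    across topics: a simple ratio-of-sums estimator, for illustration."""
    return sum(observed.values()) / sum(baseline[t] for t in observed)

rate = calibrate_shift_rate(observed_ai, google_baseline)
print(f"calibrated AI-shift rate: {rate:.3f}")
```

A real calibration layer would fit per-country and per-vertical adjustments rather than one global rate, but the principle is the same: the adjustment is estimated from data instead of asserted.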

And because this entire space is full of black-box metrics, we made a decision that’s a little unusual: We shared the methodology publicly.

Not because it’s “nice.” But because enterprise SEO needs defensible numbers, and the industry needs standards.

What’s New in This Latest Release 

The version we’re rolling out now is the most refined iteration of that hybrid model.

Here’s what you should expect (and what we optimized for):

1. Better intent rollups from messy prompts

AI prompts don’t map cleanly to one keyword. This release improves how we distill prompts into:

  • core topic intent
  • essential modifiers (audience, constraints, location, timeframe)
  • aggregated “demand buckets” that actually match how enterprise teams plan content and measure opportunity

(If you’ve used our prompt research capabilities, this is the same philosophy: don’t chase every variation, own the underlying intent.)
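The topic-plus-modifiers structure above can be sketched as a small parser. The patterns and topic list here are hypothetical placeholders; a production system would classify with a model rather than regexes.

```python
import re

# Illustrative modifier patterns only; not the production extraction logic.
MODIFIER_PATTERNS = {
    "constraint": r"\bunder \$(?P<v>\d+)",
    "timeframe": r"\b(?P<v>this (?:year|month|week))\b",
}

def roll_up(prompt: str, core_topics: list) -> dict:
    """Split a messy prompt into core topic intent plus essential modifiers."""
    prompt = prompt.lower()
    topic = next((t for t in core_topics if t in prompt), "unclassified")
    modifiers = {}
    for name, pattern in MODIFIER_PATTERNS.items():
        match = re.search(pattern, prompt)
        if match:
            modifiers[name] = match.group("v").strip()
    return {"topic": topic, "modifiers": modifiers}

print(roll_up(
    "best crm for small real estate teams under $50 this month",
    core_topics=["crm", "running shoes"],
))
```

Two prompts with different phrasing but the same topic and modifiers land in the same demand bucket, which is exactly what "own the underlying intent" means in practice.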

2. More consistent behavior across countries and industries

The clickstream-only model struggled here. The hybrid model improves consistency by grounding estimates in a stable baseline and applying calibrated adjustments, so you don’t get wildly inflated demand in one market and undercounting in another.

3. A model you can explain to your stakeholders

If you can’t explain it, you can’t defend it.

This release is designed to be “boardroom explainable”:

  • what the baseline is
  • what gets adjusted
  • why the adjustments exist
  • what changed between versions

Why We’re Being So Transparent About This

Because “AI Search Volume” is about to become one of the most abused metrics in the industry.

It’s easy to publish a number. It’s much harder to publish a number you’d be willing to defend in front of:

  • your CMO
  • your analytics team
  • your data science org
  • your finance partners
  • your agency or procurement process

If AI Search is going to be a real channel (and it is), we need demand metrics that don’t feel like magic.

That’s also why Clarity ArcAI is built as an end-to-end system: prompts, visibility, optimization, performance, and measurement have to connect… otherwise “demand” becomes a vanity metric instead of a planning input.

What We Want From You

We’d love for you to try the updated estimates and tell us where they land for your reality:
  • Do the numbers match your intuition across top topics?
  • Are the rollups aligned with how you plan content clusters?
  • Are there industries, niches, or markets where you think we should pressure-test harder?

This model is live because we believe it’s materially better, but we also know the fastest way to improve it is to put it in the hands of enterprise teams who live in the nuance.

If you’re already using ArcAI’s prompt research and visibility tracking, this update should make prioritization sharper and the “why this topic” conversation a lot easier.

Looking for an AI search solution to help prioritize AI efforts? Schedule a demo of seoClarity's Clarity ArcAI where you'll see first-hand how our end-to-end AI solution is helping brands turn AI search into a real channel for their organizations. 

 

About the Author: Mitul Gandhi

 As a longtime data-driven serial entrepreneur, information architect and SEO veteran, Mitul has developed a blend of vast technical expertise and intense marketing insight. His variety of experience, gained in positions in in-house SEO, search marketing, and software development, affords him the ability to efficiently assess how to use software tools to meet challenges and drive ROI. As the Co-Founder and Chief Architect of seoClarity, Mitul currently oversees day-to-day operations, and provides strategic direction to all departments. His well of knowledge includes 10+ years of consulting experience with Fortune 500 and top Internet retailers concerning online search marketing. He has several patents pending for analyzing cause and effect in SEO. Mitul holds an MBA in direct marketing from Rochester Institute of Technology. Additionally, he has spoken at conferences in the United States and the U.K., including SES, SMX and Pubcon. He has also been quoted in MSN Money, USA Today, Time Online, Search Engine Watch, Search Engine Land and Web Pro News. Connect with him on Twitter or LinkedIn.