We kicked off this study by analyzing 6 million URLs, and found that schema markup is much more common on pages cited by AI than on pages that aren't.
AI-cited pages were almost three times more likely to have JSON-LD than non-cited pages.
That’s a big gap, and the kind of stat that gets shared in LinkedIn carousels and conference slides as proof that schema is an AI visibility lever.
But we weren’t satisfied with that finding, since it could easily have been correlation, not causation.
Schema markup tends to live on better-maintained, more technically sophisticated sites, and those same sites publish stronger content, build more authority, earn more links, and do all the other things that get pages cited.
Schema could be doing real work, but it could also just be riding the wave of every other signal.
So we couldn’t actually answer the question SEOs really care about: if I add schema to my page, will I get cited more by AI?
To find out, we ran a second study designed to isolate the effect of adding schema.
Here’s what we found.
Using Brand Radar, Xibeijia pulled a few million URLs cited in AI Overviews.
She then retrieved the HTML history from our crawler database, labeled whether each URL contained <script type="application/ld+json">, and spotted the date that schema presence transitioned from “False” to “True”.
This left her with 1,885 pages that introduced JSON-LD between August 2025 and March 2026.
Finally, to analyze all of that data, she used Agent A, our new AI marketing agent.
For each page, Xibeijia knew two key dates:
The last day our crawler checked the page and found no JSON-LD
The first day our crawler detected JSON-LD on the page
The day a page added JSON-LD is its treatment date.
Sidenote.
“Treatment” is the standard term for the moment a change is applied to something we’re measuring.
Xibeijia measured how many times each page was cited by Google AIO, Google AI Mode, and ChatGPT in the 30 days before and 30 days after the treatment date.
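The windowed counting itself is simple. Here is a minimal sketch under our assumptions (the `window_counts` helper is hypothetical, and each citation is represented as a date):

```python
from datetime import date, timedelta

def window_counts(citation_dates: list[date],
                  treatment: date,
                  window_days: int = 30) -> tuple[int, int]:
    """Count citations in the N days before and after the
    treatment date (the treatment day itself is excluded)."""
    lo = treatment - timedelta(days=window_days)
    hi = treatment + timedelta(days=window_days)
    before = sum(1 for d in citation_dates if lo <= d < treatment)
    after = sum(1 for d in citation_dates if treatment < d <= hi)
    return before, after
```

In practice this would be run once per platform (AIO, AI Mode, ChatGPT) per page.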
The tricky part of any study like this is seeing past noisy data.
Citations across all of AI search were moving during this period; AI Overviews were contracting, AI Mode was exploding.
If Xibeijia had just done a simple before-and-after comparison, she would have been measuring the platform trend, not the schema effect.
So for each treated URL she picked 3 control URLs (from different domains, with similar pre-period citation levels) that had never added JSON-LD.
Comparing two groups of pages that were getting cited at the same rate before—where the only main difference was that one group added schema—made it easier to isolate what schema actually did.
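Control matching of this kind can be approximated with a nearest-neighbor pick on pre-period citation counts. A minimal sketch, assuming `candidates` maps never-treated URLs to their pre-period citation totals (the function name and data shape are illustrative, not the study's actual code):

```python
def pick_controls(treated_citations: float,
                  candidates: dict[str, float],
                  n: int = 3) -> list[str]:
    """Pick the n never-treated URLs whose pre-period citation
    counts are closest to the treated page's count."""
    ranked = sorted(candidates,
                    key=lambda url: abs(candidates[url] - treated_citations))
    return ranked[:n]
```

A fuller version would also enforce the different-domain constraint and remove chosen controls from the candidate pool so they aren't reused across treated pages.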
searchVIU answers a related question.
They tested whether five major AI systems (ChatGPT, Claude, Perplexity, Gemini, and Google AI Mode) actually used schema markup when fetching a page in real-time.
Spoiler: none of them did. During direct retrieval, every system extracted only visible HTML content. JSON-LD, hidden Microdata, and hidden RDFa were all ignored.
A few other points to flag, and some questions worth testing next:
Pages that add JSON-LD often change other things at the same time (e.g. links, content, technical fixes). We can’t fully separate schema from these kinds of co-occurrences.
We pooled all schema types together: Article, FAQ, Product, HowTo, Organization. It’s possible some types help more than others. This may be worth digging into.
We measured 30 days post-treatment. If JSON-LD has a slow-burn effect, a 60- or 90-day window might reveal more growth.
We studied JSON-LD—the most widely used schema format. Other formats exist (Microdata and RDFa), but we haven’t yet tested them.
We only looked at schema in the page’s HTML, not schema injected via JavaScript. AI crawlers appear to treat the two differently. ¹
The small AI Overview decline is real but unexplained. Treated pages dropped about 4.6% more than matched controls, and we don’t know why. A follow-up study could look at whether specific schema types or specific content types account for the gap.
Brand Radar can help you track your AI citations over time:
Pick 5–10 test pages where you plan to add JSON-LD. Ideally pages already getting some AI citations, so you have a baseline (pages with zero citations make it harder to tell whether schema did nothing, or whether the page just wasn’t going to get cited either way). You can check this in the Cited Pages report.
Pick 5–10 control pages with similar citation levels that you’re not adding schema to. This is what separates “schema did something” from “AI Overviews shifted for everyone that month.”
Record baseline citations for both groups across AI Overview, AI Mode, and ChatGPT in Brand Radar. Just apply URL filters to isolate those citation numbers.
Add schema to your test pages and note the date. Don’t change anything else on those pages during the test window.
Compare both groups after 30 days (or longer if you can). The question is: “did treated pages go up more than control pages did?”
If both groups moved by similar amounts, that’s more to do with the platform trend than the schema.
But if treated pages outperformed controls, that’s a sign schema is having a positive impact on citations.
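The comparison in those last steps is a difference-in-differences: treated change minus control change. A minimal sketch with made-up numbers (the helper and the figures are illustrative only):

```python
def group_change(pages: list[dict]) -> float:
    """Average citation change (after minus before) across a group."""
    return sum(p["after"] - p["before"] for p in pages) / len(pages)

# Hypothetical 30-day citation counts for each page
treated = [{"before": 12, "after": 18}, {"before": 5, "after": 9}]
controls = [{"before": 11, "after": 13}, {"before": 6, "after": 7}]

# Positive lift -> treated pages grew more than controls did
lift = group_change(treated) - group_change(controls)
```

Here the controls soak up whatever the platform did that month, so `lift` is the part of the change left over for schema to explain.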
If you run this on your own pages and get a different result to ours, let us know.
Pages that get cited by AI tend to sit on sites that publish strong content, build links, maintain their pages, and rank well in regular search.
AI systems are more likely to retrieve this kind of content, so cited pages over-index on all of those signals at once. Strip schema out and it’s very likely the rest of the signals still carry the page through to citation.
If you’re already doing the rest of the SEO work well, JSON-LD isn’t going to be the unlock. And if you’re not, schema by itself probably won’t make up for that.
There are still, of course, many good reasons to use JSON-LD schema (rich results, voice assistants, knowledge graphs, downstream entity recognition).
But if the only reason you’re adding it is to get more AI citations on pages that are already visible, our data doesn’t support that bet.