“The Golden Era of Content is Over” – A Conversation with Ryan Law from Ahrefs https://prerender.io/blog/a-podcast-conversation-with-ryan-law-ahrefs/ Wed, 11 Mar 2026 08:48:20 +0000 Ryan Law is the Director of Content Marketing at Ahrefs, home to one of the most widely read SEO blogs in the industry. It’s one of the clearest examples of a content empire built on the old model that really worked: high publishing volume, targeted keywords, and perfect on-page optimization.

But now, Ryan is saying this model no longer works. 

In this episode of the Get Discovered podcast, Joe Walsh sits down with Ryan to talk about what AI is breaking for content teams, why the attribution metrics everyone has been reporting on are the wrong ones, and what content marketing leaders should refocus on instead.

Watch the full episode below, or keep reading for a summary of the conversation.

AI Overviews Are Killing Clicks, Even If You Rank First

If you’ve sensed that your content is working harder for less return, the numbers back you up.

Ahrefs’ own research shows that AI Overviews are intercepting clicks that previously went to ranked pages. The search positions haven’t changed, but people aren’t clicking through anymore.

What’s actually disappearing is the URL click itself. AI tools are synthesizing the content and answering the question without sending anyone to the source directly. You may still influence a purchase through AI tools, Ryan says, and people are likely buying heavily through them (Ryan included). But measuring those conversions through direct clicks, as we did before, is increasingly impossible.

Note: despite this, your rankings in search engines are still important. LLMs are increasingly pulling from the top five positions in SERPs.

The Content That’s Losing the Most Traffic to AI

Ryan is unusually well-positioned to understand who suffers most from this shift because Ahrefs is the archetype.

“Companies that are hardest hit are those that have historically had a ton of search traffic from very informational, top-of-funnel queries.”

As one of the top industry leaders, Ahrefs has published thousands of pages covering the full landscape of SEO education with near-perfect search optimization. Yet, despite doing everything right the old way, they’re seeing clicks go down every month. If that’s the situation at one of the most technically capable content operations in the industry, it’s an environmental shift for everyone else, too.

Key takeaway for content leaders: the unit economics of the volume-and-optimization playbook that drove traffic growth for the better part of a decade have fundamentally changed.

How Ahrefs Is Adapting Its Content Strategy

Ahrefs isn’t abandoning content by any means. But Ryan is shifting the company’s content marketing strategy. 

Here’s what he’s doing instead:

1. Research over volume

Ahrefs now has an in-house, full-time data scientist on the marketing team. The focus is on original data and proprietary insights that can’t be synthesized by AI because they didn’t exist anywhere before Ahrefs published them. As we’re hearing throughout the podcast from guests like Noah Greenberg and Alain Schlesser, original research, rather than keyword-targeted content, is what content marketers should double down on. Original data gets cited by LLMs, media, and other blogs; it earns presence in a way that a well-optimized explainer article no longer reliably does.

Further reading: what 100M+ pages reveal about how AI systems choose your content.

2. Distribution beyond owned channels

Another key theme we’re seeing in the podcast: third-party presence over owned pages. Channels like guest posts, partnerships, review sites, creator collaborations, and LinkedIn are where Ahrefs’ team is seeing surprising traction. The logic: if an AI search tool synthesizes content from across the web, being on your website is less valuable than being on many websites. Third-party presence is now worth prioritizing, even over your own pages.

Why Click Attribution Is the Wrong Metric for AI-Era Content

Another aspect that Ryan focuses on is his unique approach to content attribution… not worrying too much about it. While this approach isn’t new for him, it’s an increasingly common take from others in the industry—and it’s aging surprisingly well in the AI era.

Ahrefs doesn’t obsess over attributing specific customers to specific content. They use self-reported attribution (i.e., asking new signups how they found out about the product) and a few directional heuristics. Ryan frames this as a feature rather than a limitation.

In his opinion, there’s an inverse relationship between how easily a marketing tactic can be attributed and how long it remains useful.

“If it’s very easy to say, yes, this is definitely working, then in short order, everyone is going to do that.”

The channels with the longest shelf life, like sponsorships, events, thought leadership, and brand, are the ones that require faith. You believe they work because the logic holds, not because the dashboard proves it. This has always been true. What’s different now is that the tactics that were easy to attribute (organic search clicks) have now become harder to attribute, which is forcing a reckoning on teams that built their reporting around traffic metrics.

And as Alain Schlesser from Yoast discussed in his episode, the fixation on clicks was always a proxy metric. The business goal was never actually to get clicks—it was to drive consideration, intent, and purchase. Perhaps AI has simply broken the measurement shortcut we’ve been relying on all along.

Yes, AI Search Still Runs on SEO Fundamentals

One of the clearest points Ryan makes is about how companies are misframing their AI search strategy. Most businesses are thinking about AI SEO (or GEO, or AEO—whichever term you prefer) as an entirely separate channel. And while there are aspects of AI search that are certainly different, Ryan argues that this is the wrong approach. At the end of the day, it’s still very much SEO.

“I think people are downplaying the impact of traditional search indices on AI search in a very, very big way.”

When someone runs a query in ChatGPT or Perplexity and gets a web-sourced answer, that answer comes from a search index. The LLM uses RAG (Retrieval-Augmented Generation) to pull real content from the web. The infrastructure that makes AI answers possible is the same infrastructure traditional SEO has always worked with, and most SEOs, like Peter Rota on the podcast, speak openly about this.
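To make that concrete, here’s a minimal sketch of that retrieval-augmented flow. The `search` and `llm` callables are generic stand-ins for a web-search index and a chat-model client; the names and signatures are illustrative, not a real vendor API.

```python
# Minimal sketch of the RAG flow described above. `search` and `llm` are
# generic stand-ins (any web-search index and any chat-model client).

def answer_with_rag(question, search, llm, top_k=5):
    # 1. The LLM rewrites the user's question into its own search query.
    query = llm(f"Rewrite as a concise web search query: {question}")

    # 2. It retrieves only the top handful of results from a search index,
    #    the same index traditional SEO has always targeted.
    results = search(query, limit=top_k)

    # 3. The retrieved page text becomes grounding context in the prompt.
    context = "\n\n".join(doc["text"] for doc in results)

    # 4. The model synthesizes an answer from that context.
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```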

This means that, even as Ryan shifts strategy, crucial SEO fundamentals like building topical authority, creating original content, earning high-quality backlinks, and distributing content across reputable third-party sites still matter for AI search. The tactics that sound newest (LLMs.txt, content chunking, entity optimization) are real, but they’re a second-order concern compared to the fundamentals.

This is also why certain technical SEO tools matter more, not less, in this environment. If AI crawlers can’t render your JavaScript-heavy site, they can’t index the content. And content that can’t be indexed can’t be cited. Making your content visible to AI crawlers is the baseline. 
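For illustration, here’s a minimal dynamic-rendering sketch in Flask, the pattern Prerender.io implements as a managed service. The bot list and the snapshot store are stand-ins, not a production setup.

```python
# Minimal dynamic-rendering sketch (Flask). The SNAPSHOTS dict stands in
# for a real prerender cache or service; bot names are common AI crawlers.
from flask import Flask, request

app = Flask(__name__, static_folder="static")

BOT_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot")
SNAPSHOTS = {  # illustrative pre-rendered HTML, keyed by path
    "pricing": "<html><body><h1>Pricing</h1><p>Plans start at $9.</p></body></html>",
}

@app.route("/<path:path>")
def serve(path):
    user_agent = request.headers.get("User-Agent", "")
    if any(bot in user_agent for bot in BOT_AGENTS):
        # Crawlers that can't execute JavaScript get fully rendered HTML.
        return SNAPSHOTS.get(path, "<html><body>No snapshot yet</body></html>")
    # Human visitors get the normal JavaScript application shell.
    return app.send_static_file("index.html")
```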

The Next 12-18 Months: Why AI Spam Will Be Harder to Filter Out

In every episode, we ask guests for their prediction of what’s to come over the next year or so.

In Ryan’s case, he’s worried about content volume. Specifically, he says AI-generated content is already high quality at scale, and it will only get better. His recent Claude Code experiments left him convinced that automating good, research-backed content is here to stay.

What does this mean for content marketers? A coming flood of content that sounds (even more) human, synthesizes ideas coherently, and occupies the same search positions and AI citation pools that hand-crafted content is competing for. 

The people and teams who will navigate this well are those who hold something that can’t be automated: genuine original data, real institutional knowledge, earned perspectives, and the distribution networks to get those things seen.

What He’s Optimistic About

Despite everything, Ryan’s genuinely enthusiastic about what’s on the other side of this transition.

For him, the most exciting change is what AI enables for individuals. He’d wanted to learn to code for ten years and couldn’t break through the learning curve. Over the past few weeks, he rebuilt his personal website from scratch—the first time he’s ever built exactly what he wanted without being constrained by a CMS.

“I suddenly feel like I have superpowers.”

This new ability to build things you couldn’t before, learn faster, and move with more autonomy is real for content marketers, too. The people who will win are those who stop mourning the playbook and start experimenting with what the new environment makes possible.

About Ryan Law

Ryan Law is the Director of Content Marketing at Ahrefs, home to one of the most widely read SEO and content marketing blogs in the industry. Previously CMO at Animalz, he grew its blog to over 1 million page views on the strength of opinionated, perspective-driven content. He oversees Ahrefs’ blog, research, and a newsletter with 284,000 subscribers, and published the data showing AI Overviews reduce clicks to top-ranking pages by 58%. Connect with Ryan on LinkedIn or listen to the full conversation.

About Prerender.io

Prerender.io is a leading SEO solution that helps modern websites ensure their JavaScript-heavy pages are fully visible to search engines and AI tools. Trusted by companies like Microsoft, Salesforce, and Walmart, Prerender is the go-to partner for businesses navigating the future of SEO and AI-driven discoverability. Start for free today.

Industry Study: What 100M+ Pages Reveal About How AI Chooses Your Content https://prerender.io/blog/industry-study-how-ai-retrieves-content/ Wed, 04 Mar 2026 10:08:59 +0000 The brands winning in AI search aren’t always the ones with the best content. They’re the ones AI systems can actually read and trust.

That’s the central finding from a study conducted by Prerender.io and OtterlyAI analyzing 100M+ pages across eight global enterprise brands. This article breaks down what the data shows: the content AI systems prefer, seven key findings to fold into your content strategy, and the three structural changes that are moving the needle on AI search visibility.

The Dataset: Eight Global Brands and 100M+ Pages

To conduct this study, the Prerender.io team analyzed 100M+ pages across eight anonymized brands in Prerender.io’s database, pulling the URLs most frequently requested by ChatGPT over Q4 2025. These are real machine-to-site retrieval requests, as opposed to pageviews or rankings, which means every signal in this dataset reflects exactly what AI systems are actively looking for.

The brands were selected to represent a range of industries, company sizes, and geographies, spanning ecommerce, SaaS, automotive, sports, fashion, and government. Each generates a minimum of 75M page renders per year, ensuring the dataset is large enough to surface reliable patterns.

Brand | Industry | Most-Requested Page Types
Brand 1 | Multinational automotive company based in Europe | Informational blog posts, homepage
Brand 2 | Leading jewelry brand headquartered in the US (ecommerce) | Homepage, FAQs, comparison guides, product blogs
Brand 3 | International sports organization | Streaming pages, live event pages
Brand 4 | Global athletic leisure brand in 50+ countries (ecommerce) | New collection pages, homepage, FAQs, discount pages
Brand 5 | American department store (ecommerce) | Top-selling product pages, homepage
Brand 6 | Global API/SaaS platform headquartered in the US | Technical documentation (exclusively)
Brand 7 | Government / legal entity based in Europe | Documentation, homepage
Brand 8 | Global fashion retailer headquartered in Europe (ecommerce) | Homepages (US + international), store locator, product pages

The OtterlyAI team then supplemented this retrieval analysis with citation data, examining how these brands actually appear in AI-generated answers across ChatGPT, Perplexity, and Google AI Overviews. Seven findings emerged. Here’s what they show.

Key Finding #1: AI Systems Reward Easily Answerable Pages

This is the clearest finding from the study. The most-retrieved URLs across all eight brands fall into a surprisingly narrow set of content types:

  • Evergreen editorial content and buying guides
  • FAQs and comparison pages
  • Technical documentation and API references
  • Category and collection pages

And these pages share a common characteristic. They answer a specific “what,” “how,” or “why” question without requiring the reader, or the AI crawler, to navigate anywhere else.

This is a significant reframe for marketing teams accustomed to thinking about content in terms of the funnel or site hierarchy. AI visibility is not about where a page sits in your navigation. Instead, it’s about whether the page can reliably serve as an answer on its own. Whether it’s a blog article answering a specific question, a comparison page performing a this vs. that analysis, or a product category page, the pages that AI systems return to repeatedly are the ones that deliver complete and clearly structured answers. This also supports why AI systems seem to prefer knowledge base and help center content: they typically answer a single question.

Key takeaway: write your content in a clear, question-answer format, focus on optimizing your knowledge base documentation, and ensure pages answer a specific “what,” “how,” or “why” question. 
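One concrete way to apply this takeaway is a self-contained, semantically marked-up Q&A block. The snippet below is a generic illustration of the shape, not a guaranteed ranking pattern:

```html
<!-- A self-contained answer block: the question is a heading, the answer
     is complete on its own, and the markup is semantic rather than divs. -->
<article>
  <h2>What is dynamic rendering?</h2>
  <p>Dynamic rendering serves crawlers a fully rendered HTML snapshot of a
     JavaScript page, while human visitors get the normal client-side app.</p>
</article>
```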

Key Finding #2: Address Questions Directly, Especially for Ecommerce Brands

This analysis also surfaced differences in how AI crawlers and humans navigate pages—especially for ecommerce. When humans shop, they browse product detail pages (PDPs) closely. But when AI systems research products on ecommerce sites, that’s not always the case.

Instead, AI systems consistently prefer:

  • Category and collection pages to answer “what are the best options?”
  • FAQ and sizing guides to answer “what should I get?”
  • Store locators and availability pages to answer “where can I buy this?”

Individual product pages still matter, but far less than traditional SEO priorities suggest. The AI-driven shopping journey starts earlier and at a higher level of abstraction. Users receive AI-generated summaries before they ever visit a brand’s site, and those summaries are built from category-level and editorial content, not PDPs.

This means the pages most critical to AI visibility are often the ones least optimized for it. Category pages, FAQs, and buying guides tend to be JavaScript-heavy, dynamically loaded, and lower on the priority list for technical maintenance, precisely where AI systems are most likely to encounter incomplete or missing content.

Key takeaway: prioritize your category pages, FAQs, and buying guides for both content completeness and technical rendering. These are the pages AI systems are currently using to build shopping summaries. Focus your efforts here, rather than on PDPs alone.

Further reading: AI Indexing Benchmark Report for Ecommerce

Key Finding #3: Third-Party Sites Matter For Citations 

This is the finding that should concern brand and growth leaders most.

In its analysis of one of the world’s top athletic brands, OtterlyAI looked at over 1,300 citations across 25 test queries. Here’s the share of citations per source:

  • Independent review sites: 8.6%
  • Reddit: 7.2%
  • YouTube: 4.6%
  • Brand’s own domain: 4.2%

Evidently, the brand’s own domain holds less weight than independent review sites and community platforms like Reddit and YouTube. And this isn’t an isolated case. Broader OtterlyAI research covering 1M+ citations across ChatGPT, Perplexity, and Google AI Overviews confirms the same pattern across industries: community platforms and third-party sources capture the majority of AI citations, regardless of how strong the brand’s owned content is.

AI systems are designed this way. They increasingly want to sound confident in their answers, and often prioritize third-party validation and community consensus over brand-owned marketing content. That’s a newer structural difference with LLMs, and your strategy has to account for this.

Key takeaway: Build a deliberate third-party presence strategy. Identify the review sites, forums, and media outlets AI systems are already citing in your category, and prioritize earning coverage there.

Key Finding #4: Technical Accessibility Is Crucial for AI Visibility

Across this dataset, one issue emerged as the most consistent barrier to AI visibility. Brands were inadvertently blocking AI crawlers with configurations originally written for traditional search engines.

When AI crawlers can’t access your site, you’re not partially visible to AI systems—you’re completely invisible. This means no citations, no mentions, and no influence on AI-generated answers. This issue is more common now because AI crawlers are newer and less familiar to infrastructure teams than Googlebot, and they’re frequently caught by security policies that predate their existence.

Before any content or strategy work, confirm that your site is actually accessible to major AI crawlers. It’s the single highest-leverage action available to brands that have invested in content and are seeing no AI visibility, and it requires a conversation with your technical team, not a content brief.

For JavaScript-heavy sites specifically, the gap between what a human sees and what an AI crawler can read is often significant. Server-side rendering or prerendering (serving AI crawlers a fully rendered, stable version of each page) is often the most reliable fix.

Key takeaway: have your technical SEO team evaluate your technical performance, particularly your JavaScript rendering and robots.txt.
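As a quick first pass, you can test your robots.txt against the major AI crawler user agents with Python’s standard library. Note that this only checks robots.txt; WAF or CDN rules can still block a crawler that this script reports as allowed.

```python
# Check robots.txt rules against common AI crawler user agents.
# This only validates robots.txt; firewall/CDN rules are a separate check.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]
SITE = "https://www.example.com"  # replace with your domain

rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

for agent in AI_CRAWLERS:
    status = "allowed" if rp.can_fetch(agent, f"{SITE}/") else "BLOCKED"
    print(f"{agent}: {status}")
```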

Further reading: How to Conduct a GEO Audit on your Site

Key Finding #5: Homepages Are Gateways, Not Destinations

Homepages appear frequently among the top-requested URLs, and sometimes as the single most-retrieved page. But the retrieval pattern tells a specific story.

AI systems hit the homepage and then fan out immediately. The homepage appears to serve a defined function: confirming brand identity, establishing authority, and acting as a starting point for deeper retrieval. However, the sharp drop-off in retrieval volume after the homepage means that what happens next determines the majority of your AI visibility. A strong homepage paired with inaccessible supporting content is nearly as limiting as no homepage presence at all.

Key takeaway: treat your homepage as a crawl entry point, not a destination. The real AI visibility work happens on the supporting pages it leads to.

Key Finding #6: You Need a Different AI Search Strategy for Each AI Platform

Not all AI platforms behave the same way, and this has direct implications for where marketing teams focus their energy. 

Proprietary data from OtterlyAI shows the following:

AI Platform | Citation Rate
Perplexity | 97% of responses include a citation
Google AI Overviews | 34% of responses include a citation
ChatGPT | 16% of responses include a citation

Perplexity’s citation rate is near-universal: almost every answer includes a citation, so you have a far higher chance of being cited in its responses. For a Perplexity-focused strategy, comprehensive, well-structured content across all of your detail pages has a real chance of appearing.

The other platforms behave differently: Google’s AI Overviews cite sources in 34% of responses, and ChatGPT in only 16%. This means you may need to adapt your AI search strategy accordingly. For AI Overviews, focus on cornerstone content and domain authority. For ChatGPT, prioritize page speed; it doesn’t wait around for slow-loading pages.

A single “AI search strategy” will underperform, and it may need to change every few months. A brand might have strong Perplexity visibility today and near-zero ChatGPT presence tomorrow—not because of content differences, but because each platform has different technical and editorial preferences.

Key takeaway: monitor AI visibility by platform, not in aggregate. A blended view of “AI traffic” will mask where you’re winning and where you’re invisible. You can use AI search analytics tools to do this.

Key Finding #7: International Brands Can Experience Greater Visibility Issues

For brands operating across markets, AI systems don’t default to your primary locale. They actively retrieve language-specific URLs, country variants, and localized pages, and appear to match content language directly to the user’s query language.

A French user asking about your product in French will receive an answer drawn from your French-language content, not an English page. If your localized pages are incomplete, slow to load, or invisible to AI crawlers, that market segment is effectively unserved by AI search—regardless of how strong your primary-market presence is.

Key takeaway: run an AI crawler accessibility check on your localized pages. Incomplete or slow-loading country variants may be entirely invisible to AI systems.

What Does This Mean for Revenue? A Note on Attribution

The reality is that clean attribution between AI visibility and revenue doesn’t quite exist yet. For now, AI systems remain a black box for most analytics stacks, and click-through rates on AI answer engines are significantly lower than traditional search.

That said, the directional evidence is building. Bain & Company’s February 2025 research found that 80% of consumers now rely on AI-written results for at least 40% of their searches. A separate Bain report from November 2025 found that 30–45% of US consumers already use AI specifically for product research and comparison, and that AI now accounts for up to 25% of referral traffic for some retailers. The research and purchase consideration stages are already happening inside AI platforms before buyers ever reach a brand’s site.

And this specific dataset adds its own directional signal: one ecommerce brand saw AI crawler requests more than double from Q1 to Q3 2025. Over the same period, its publicly reported earnings increased by 109.7%, accounting for nearly $80M in additional revenue. We’re not claiming causation, but the parallel is consistent with a model where AI visibility amplifies demand rather than directly creating it.

Data source: Prerender.io dashboard of a key enterprise client. 

The more useful frame: think of AI visibility as revenue protection and a growing acquisition channel, even if it’s not entirely attributable yet. If AI systems retrieve your content incorrectly, partially, or not at all, your brand is excluded from the consideration set before the funnel begins. Buyers arrive at competitors already shaped by AI summaries your brand had no part in.

A Measurement Framework

Given the current attribution gap, here’s a framework for how you can track AI visibility:

1. Brand share of voice across AI platforms.

Track how often your brand is mentioned across ChatGPT, Perplexity, Gemini, and Copilot relative to competitors. This upstream signal predicts downstream revenue influence before it shows up in your analytics. OtterlyAI is the most practical tool we’ve seen for making this measurable.

2. Citation monitoring on owned pages.

This tells you whether your content is being accurately represented. Being cited with incorrect pricing or outdated product details is actively damaging at the moment decisions are being formed. Use Screaming Frog to surface a full URL list, then audit your presence on third-party platforms like G2, Capterra, and Trustpilot.

3. AI crawler behavior.

Measuring AI crawler activity shows whether your site is being actively retrieved by AI systems at all. Growing retrieval volume—like the doubling observed in our ecommerce case—is a directional signal worth tracking alongside business outcomes. You can use a solution like Prerender.io to provide visibility into your AI bot behavior, particularly in comparison to search or social crawlers, and identify whether pages are being served to AI crawlers in a fully rendered state.

Prerender.io dashboard of an anonymized client
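If you want a rough, tool-free version of this signal, a few lines of Python over a raw access log will show which AI bots are hitting your site and how often. The user-agent substrings and log path below are illustrative:

```python
# Count requests from common AI crawlers in a standard web access log.
# User-agent substrings and the log path are illustrative; extend as needed.
from collections import Counter

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "Bytespider"]

counts = Counter()
with open("access.log") as log:
    for line in log:
        for bot in AI_BOTS:
            if bot in line:
                counts[bot] += 1

for bot, hits in counts.most_common():
    print(f"{bot}: {hits} requests")
```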

4. Branded and direct traffic.

This captures the indirect effect. Users who encounter your brand in AI-generated answers often don’t click through immediately—they find you later via direct or branded search. Correlating share of voice trends against branded traffic growth gives you a reasonable proxy for AI-driven influence, even without direct attribution.

Summing Up: Three Things That Impact Your Presence in AI Search 

1. Technical retrievability is a must-have, not a nice-to-have.

AI systems can only work with content they can access and read completely. For JavaScript-heavy sites, serving a fully rendered, stable version of each page to AI crawlers isn’t optional. It’s table stakes for everything else.

2. Content needs to answer, not just attract.

The pages AI systems return to are self-contained, clearly structured, and genuinely useful. If a page can’t serve as a complete answer to a specific question on its own, it’s a weak candidate for AI retrieval.

3. Third-party presence isn’t optional.

The majority of AI citations come from outside your owned properties. Brands that earn coverage in the review sites, community platforms, and media outlets that AI systems trust will win a disproportionate share of citations, regardless of how strong their owned content is. Influence without attribution still influences, but it’s more valuable when it’s yours. Make third-party presence a core foundation of your marketing strategy.

FAQs

What types of pages does ChatGPT retrieve most often?

Self-contained pages that answer a specific question without requiring additional navigation: FAQs, buying guides, technical documentation, category pages, and evergreen editorial content.

Why is Reddit cited more than brand websites in AI search?

AI systems are built to prioritize third-party validation and community consensus. Reddit provides peer-generated, question-and-answer content that maps directly onto how AI systems construct responses.

Does page speed affect AI search visibility?

Yes, particularly for ChatGPT. Faster-loading pages are more likely to be included when an AI system has limited time to retrieve and render content.

What’s the biggest technical barrier to AI visibility?

There are two big ones: bot-blocking configurations and JavaScript rendering. Many enterprise sites inadvertently block AI crawlers with security rules originally written for traditional search engines, resulting in complete invisibility. And because most AI crawlers can’t execute JavaScript, client-side content on JavaScript-heavy sites never reaches them. Server-side rendering or prerendering with Prerender.io can be a solution here.

How do I track whether AI systems are citing my brand?

Monitor brand share of voice using an AI search analytics tool like OtterlyAI. For citation accuracy, combine a Screaming Frog URL crawl with manual audits of third-party review platforms.

Does strong SEO automatically translate to strong AI search visibility?

In many ways, yes, but it isn’t guaranteed. The brands winning in AI search aren’t always the ones with the strongest content, which remains a core ranking factor for traditional SEO. They’re the ones whose pages are technically accessible, structured to serve as direct answers, and backed by a strong third-party presence.

Why “Safe” SEO Is No Longer Enough: How Yoast is Rethinking SEO in the Age of AI https://prerender.io/blog/why-safe-seo-is-no-longer-enough-yoast/ Wed, 04 Mar 2026 08:51:43 +0000 When the company behind the world’s most widely used SEO plugin fundamentally changes what it’s optimizing for, that’s worth paying attention to. In this episode of Get Discovered, host Joe Walsh sat down with Alain Schlesser, Principal Architect at Yoast—the plugin powering over 13 million WordPress websites—to unpack what Yoast’s strategic pivot toward AI visibility tracking reveals about where SEO is heading.

Watch the full episode below or read on for key takeaways.

How AI Has Become a Gatekeeper Between Users and Search Engines

Schlesser begins the conversation by explaining that AI is a new layer we have to think about.

When a user types a question into Perplexity or ChatGPT, they’re not submitting a search query the way they used to. They’re asking a large language model a question. If that LLM determines the question requires grounding in real-world data, it then generates its own search queries and runs them against whatever search engine it’s connected to, whether that’s Google or a proprietary index, all of which is entirely behind the scenes.

“It’s not enough to appear in the search engines anymore,” Schlesser explained. “You also need to go through the filter that is the AI system and be relevant for the AI system.”

This has two major implications. First, brands now need to optimize for a layer they can’t directly observe or understand. Second, the queries themselves have changed: they’re machine-generated rather than human-written keyword strings, and understanding what queries an LLM generates on a user’s behalf requires a fundamentally different kind of visibility.

Further reading: Industry Study: What 100M+ Pages Reveal About How AI Chooses Your Content

Why AI Search Fragmentation Has Made SEO Harder Than the Google Era

For years, SEO practitioners had an unusual luxury: a near-monopoly made their jobs simpler. Google was the single primary search engine to optimize for, with one set of rules and one analytics framework.

“While everyone cursed the fact that this was a monopoly, it actually made it very straightforward to optimize for,” Schlesser noted.

But that clarity is gone. Every AI answer engine now uses a different combination of search backends, and each is powered by an LLM trained on different data. A brand’s visibility can vary dramatically across ChatGPT, Perplexity, and Gemini. This isn’t because of anything the brand did differently, but because of how those underlying systems were built.

LLMs Don’t Scroll to Page Two: Why Being in the Top Five Rankings Means Everything

Schlesser also stresses the importance of being in the top five rankings. That’s because AI systems don’t paginate.

“When an LLM runs a search query to gather grounding data, it doesn’t scroll through results the way a human might. It takes the top three to five results… and nothing else,” says Schlesser. “Either you appear in the first few spots, or you don’t exist.”

This means the classic concept of “above the fold” on a SERP isn’t just about user click behavior anymore. Instead, it’s baked into how AI systems process information at a technical level. Results from page two and onwards are now invisible to LLMs. 

Rather than diminishing the importance of traditional SEO, this actually raises the stakes. Technical SEO fundamentals like indexability, crawlability, site speed, and structured content matter more now. The threshold for existing in AI search has gotten higher, not lower.

The LLM Training Data Problem: Why Being “Average” Makes You Invisible

Beyond the search layer, there’s a deeper challenge in how LLMs represent brands in their training data. This has to do with statistical compression, says Schlesser.

When an LLM is trained, he explains, it doesn’t retain every data point it encounters. It compresses its understanding into a statistical model that preserves strong signals and discards noise. In practice, this means the model keeps outliers—the cheapest brand, the most premium brand, or the most sustainability-focused brand—and a handful of examples that represent a meaningful average. Everything else gets compressed away.

“If you just go with the defaults because it’s safe, it will actually make it very likely for you to never appear in the consciousness of those LLM systems,” Schlesser said. “Safe will not be enough.”

In traditional search, a strategy of being safe, consistent, and broadly present could accumulate enough volume to be effective. In an AI-driven world, that strategy actively works against you. Brands that don’t stand for something specific, such as original data or a clear, distinct angle or perspective, risk being statistically invisible in the training data that powers the systems where discovery increasingly happens.

The practical upshot: brands need a genuine point of view. Not just good content, but distinctive content that positions them at one end of a meaningful spectrum.

Should You Block AI Crawlers? Why That Strategy Is Likely to Backfire

Some companies have started blocking AI crawlers like GPTBot. It’s an understandable impulse: bots consume bandwidth and raise server costs, and there are legitimate questions about how that data is being used.

But Schlesser is blunt about what blocking AI crawlers actually means for long-term discoverability: “It just means that you don’t exist.”

The analogy he draws is instructive: you wouldn’t have blocked Googlebot in 2005 just because it was new and unfamiliar. Blocking AI crawlers now is the same category of mistake. You’re opting yourself out of a discovery channel that will only grow in importance.

For brands concerned about server load and bandwidth costs, there are better approaches than blanket blocking. Yoast has been working on exactly this kind of infrastructure. They’ve added support for llms.txt—a lightweight file that enables quick, efficient content discovery by AI systems—and recently entered into a partnership with Microsoft around the NL Web Protocol, a new standard similar to sitemaps that lets AI systems get a structured overview of an entire website through a single efficient endpoint.

The goal isn’t to make your site accessible to every bot indiscriminately. It’s to be discoverable by the AI systems your future customers are using, while managing bandwidth intelligently.
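For reference, llms.txt is plain Markdown served at your site root. Below is a minimal sketch following the proposed convention (an H1 name, a blockquote summary, then H2 sections of annotated links), with placeholder URLs:

```markdown
# Example Co.

> Example Co. makes time-tracking software for teams. The links below
> point AI systems at the most useful, canonical pages.

## Docs

- [Getting started](https://example.com/docs/start): installation and setup
- [API reference](https://example.com/docs/api): REST endpoints and auth

## Product

- [Pricing](https://example.com/pricing): current plans and limits
```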

Further reading: Peter Rota on why technical SEO is more important than ever.

Why Traffic Is the Wrong Thing to Optimize For

One of the more grounding moments in the conversation came when Schlesser talked about what actually happens when customers come to Yoast asking for help.

“Customers usually come with the wrong goals in mind,” he said. “Often it’s: I want to increase traffic to my website. Why? What is the business goal?”

More traffic, he pointed out, means higher server costs. That’s the only guaranteed result of more traffic. Unpacking the why behind the traffic goal, whether it’s more orders or more brand awareness, often reveals that raw traffic isn’t the right metric to optimize for at all.

This matters especially now, as AI reshapes the relationship between content and conversion. As more transactions move through automated systems and AI agents rather than human-driven browsing sessions, the question of whether a website is designed for human visitors becomes genuinely complicated. Schlesser’s advice: start measuring bot-driven transactions now, even if they’re a small fraction of your volume. You don’t want to miss the inflection point where automated traffic starts to exceed human traffic for your business.

Looking Ahead Over The Next 12–18 Months

Asked about the near-term outlook for businesses navigating AI-driven discovery, Schlesser didn’t sugarcoat it.

“A lot of things will get worse before they get better,” he said. “But it’s not something anyone can stop.”

His framing: imagine rising sea levels. Businesses that stay above the waterline now will be lifted as the tide rises. Businesses that wait too long to adapt will find it exponentially harder to recover.

For slower-moving organizations, the most urgent first step isn’t to build a comprehensive AI visibility strategy overnight. It’s to establish your own real-time metrics so you can see what’s actually happening in your specific industry or for your specific audience, rather than waiting for industry reports or analyst coverage that will always lag the reality on the ground.

“You want to have your own metrics that you’ve seen in real time, that you want to make decisions on,” Schlesser said. “Not wait for someone else to write a book on it.”

Key Takeaways

  • AI is now a filter in front of search, not a replacement for it. Traditional SEO matters more than ever, but you need to optimize for the AI layer on top of it, not just the search index beneath it.
  • LLMs only see the top 3–5 search results. There is no page two. Being outside the top results means not existing for AI-powered answer engines.
  • Being safe with your content isn’t enough. Distinctive brands survive statistical compression; average brands don’t. Being safe and broadly present is no longer a viable strategy in an era where LLM training data compresses everything that isn’t a clear outlier.
  • Blocking AI crawlers is a short-term decision with long-term costs. Better infrastructure solutions exist, such as llms.txt and the NL Web Protocol, that let you manage bot traffic intelligently without opting out of AI discovery entirely.

Don’t wait to start. Establish real-time AI visibility metrics now, rather than waiting for industry reports. Instrument your own data so you can see the inflection points in your business before they hit.

Tune Into the Full Conversation

Listen to the full episode of the Get Discovered podcast wherever you get your podcasts. Make sure to subscribe so you don’t miss future episodes with SEO experts and business leaders navigating the AI discovery challenge in real time. To connect with Alain, visit Yoast or find him on LinkedIn.

To make sure your site is visible to LLMs, try Prerender.io for free. We make sure your site is visible to ChatGPT, Perplexity, Claude, and every AI search platform your buyers are using.

The Most Overlooked Aspect of Your AI Search Strategy: A Conversation with Peter Rota https://prerender.io/blog/technical-seo-for-ai-search-peter-rota/ Wed, 25 Feb 2026 17:49:18 +0000 If you’ve been watching the SEO conversation play out over the past 12 months, you’ve likely noticed a shift in tone. Less chasing algorithm updates. More getting back to fundamentals. Peter Rota—technical SEO expert, 15-year industry veteran, and LinkedIn thought leader with 80,000 followers—is one of the clearest voices making this argument. And he makes it well.

Peter joined host Joe Walsh on the latest episode of Get Discovered, Prerender.io’s podcast about AI and online discoverability. The conversation covered where technical SEO stands today, what AI has actually broken in the traditional playbook, and why the teams that stay grounded in first principles are the ones that will come out ahead.

Watch the full episode below, or keep reading for a summary.

It’s Not AI vs. Google: It’s How Can You Show Up in Both

Peter opened with a stat that reframes the whole conversation: a Semrush study using clickstream data showed that people who started using LLMs actually increased their Google search sessions. AI didn’t cannibalize search. It complicated it.

“People are trying to ask direct questions, or they’re trying to validate things that are coming up in LLMs,” Peter explained. The buyer’s journey has gotten longer and messier, not shorter. That has big implications for how teams measure success.

For SEO practitioners and business leaders alike, this dismantles the “AI vs. Google” framing that still shows up in a lot of industry commentary. The more useful question isn’t which one wins. It’s how do you show up in both, and how do you measure what’s actually working across a fragmented path to purchase?

This challenge of fragmented attribution came up repeatedly throughout the conversation — and it’s one we’ve explored in depth with other guests this season. Klaus Schremser from Otterly AI made a similar point about the difficulty of tracking visibility across AI channels, and Elizabeth Thorn from Toggl described how her team built custom AI referral tracking to fill the gap.

The Biggest Shift in 15 Years of SEO

When Joe asked how this AI disruption compares to other seismic changes, Peter didn’t hesitate. This is the most significant shift he’s seen since starting in the industry. The reason? It fundamentally changes how success is defined.

“The metrics have always been clicks and rankings,” he said. “Now it’s more like: am I visible here? Conversions are getting more important. It’s just the overall understanding of what is important now versus what was important five years ago.”

That’s not a trivial reframe. It affects how you report to leadership, how you structure content strategy, and how you justify the budget. Traffic as the primary KPI made sense when traffic was a reliable proxy for visibility and intent. In a world where LLMs answer questions without sending a click, traffic numbers can flatline while your brand influence grows—or erodes.

What Industries Are Most Impacted by Generative AI?

Peter was candid about which content types are feeling the most pain right now: news publishers and content-heavy sites. “Glossary and blog sections have been hit very hard year over year because they’re being surfaced in AI Overviews or LLMs, so people aren’t clicking through to the website.”

Informational content is increasingly being answered before the user ever reaches your site. Peter’s take isn’t to abandon that content, but to shift the framing: create content that’s genuinely useful for the user, not content that exists primarily to capture search engine visits.

It’s a subtle but important distinction that echoes what Noah Greenberg from Stacker discussed earlier this season when unpacking what actually gets cited in LLMs — authoritative, specific, citable content that earns third-party validation.

What’s Crucial for AI Search Visibility: Technical SEO Fundamentals

This is where the conversation got particularly useful. Peter’s view is that good technical SEO and good AI discoverability are largely the same thing—and most of the ranking factors that matter more for LLMs are factors that great SEOs have always cared about. They’ve just been getting less attention. A few of the areas Peter flagged as increasingly critical:

  • Semantic HTML. LLMs, unlike Google, mostly can’t render JavaScript. That means they’re reading your raw HTML. If your content is buried in divs instead of proper semantic tags—headers, lists, footers, article blocks—you’re harder to parse and more likely to be skipped.
  • Schema markup. Peter described this as getting “a second light,” something that was always a good practice but is now more essential. Anything that helps LLMs more easily understand your site’s structure, relationships, and meaning is worth investing in (see the JSON-LD sketch after this list). Our own technical SEO guide for AI optimization goes deep on this.
  • JavaScript rendering. This one is particularly close to home for us. Most LLMs (with a few notable exceptions) cannot execute JavaScript. If your key content lives in JavaScript-rendered components, as is the case with many modern websites, it may as well be invisible to them. Google has invested enormous infrastructure into rendering JavaScript at scale and still doesn’t render everything. The idea that AI crawlers will do the same is, as Joe put it during the conversation, “a non-starter.”
  • Page speed, and specifically time to first byte. LLMs using retrieval-augmented generation visit sites in real time when constructing answers. If your site doesn’t load fast enough, they move on. Peter’s recommendation is to target three to four seconds or under. “They’re almost like they have unlimited crawl budget,” he said. “They’re just gonna go somewhere else if they can’t get you.”
  • Crawl logs. Peter said he now considers crawl log analysis a proactive practice rather than a reactive one. Between your server logs and Google Search Console’s crawl stats, you can see which bots are visiting, what they’re finding, and whether requests are going unresolved. This is especially important now because AI crawler behavior is still not fully understood, and the more data you have, the better your decisions will be.
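On the schema point, here’s a minimal JSON-LD sketch of an Organization entity for an “entity home” page. Every value is a placeholder, and schema.org supports many more properties worth adding:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co.",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-co",
    "https://github.com/example-co"
  ]
}
</script>
```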

LLMs Care More About Brand Authority Than Search Engines

One of the most interesting threads in the conversation was about how LLMs evaluate credibility differently from Google.

Google’s algorithm has always valued backlinks as a proxy for authority, and gaming that system became its own cottage industry. LLMs think differently. They’re looking for consensus. They synthesize what’s being said across micro-communities, forums, review sites, social platforms, and the broader web. A brand with strong domain authority but a weak presence in third-party conversations may rank well in Google while being largely ignored by AI.

“LLMs are a lot more consensus-based,” Peter said. “They’re looking for branded signals because they want to serve something they’re confident in serving.”

This makes brand reputation management and PR strategy more important than ever. Getting mentioned in a Reddit thread might not produce a backlink. But it can influence how an LLM describes your brand. That’s a new kind of value that most SEO tools haven’t caught up to yet.

Peter also noted that the no-follow link vs. follow link binary that dominated SEO thinking is less relevant in this context. A no-follow mention on a high-authority site may actually be more valuable for LLM visibility than a follow link that doesn’t generate real contextual resonance. The metric that matters is signal strength, not link attribute.

The Attribution Problem (Again)

Attribution has always been difficult for SEO. You could see that organic traffic increased and correlate it with content efforts, but drawing a direct line from specific pages to specific revenue was never clean. AI makes this messier—a common theme in this season of the podcast.

Peter framed it well: “A lot of times, SEO was always harder to even get buy-in on attribution because it’s like, I did these paid ads and I can clearly see where the conversion was. With SEO, we know it’s working, but it was hard to show the value. And obviously, LLMs make it even more complicated.”

Joe added a useful analogy: we’ve essentially moved from the attribution precision of digital advertising back toward something closer to billboard marketing. You can measure general uplift. You can observe trends. But the clean, last-click-attribution model that made Google’s ad network so appealing to CFOs? That era is fading.

For now, Peter’s approach is to report on what leadership understands: referrals, sessions, and conversions over time, while keeping an eye on visibility metrics as leading indicators, even if they’re imperfect.

What Teams Should Prioritize for AI Search Visibility

When asked to give concrete guidance on where to focus, Peter named a few key areas:

Prioritize:

  • Deeper schema investment, particularly to establish entity relationships and corroborate your brand’s “entity home” (a well-structured About page that’s then validated across the web).
  • Raw HTML content. Make sure your most important copy isn’t dependent on JavaScript.
  • Page speed.
  • Crawl log analysis.
  • Your overall brand presence in communities and third-party publications.

Deprioritize:

  • The “all-or-nothing” mentality around traffic as the primary metric.
  • Backlinks as the exclusive measure of off-site authority.
  • The expectation that recovering pre-AI traffic levels is a realistic goal. “LLMs will never send the amount of traffic Google sends,” Peter said. “I don’t think we can get the traffic back. It’s just setting that clear expectation.”

This mirrors advice we’ve heard across multiple guests this season, including from Ané Wiese at saas.group, who offered practical implementation tips for SEO teams navigating the same transition.

Looking Ahead

Looking out 12 to 18 months, Peter’s prediction is that organic clicks will continue to decline. This is not only because of AI, but because Google itself is adding more advertising to AI-powered search experiences.

But he raised a more nuanced concern beyond the traffic numbers: the erosion of critical thinking in teams that lean too hard on LLMs.

“I feel like people will look for more and more shiny objects to solve things and they lose that critical thinking,” he said. “LLMs will lie to your face and won’t even acknowledge it. You just really need to use your brain with a lot of these tools.”

Joe echoed this, noting that how humans relate to these tools—and where the actual value boundaries are—is something we’re still figuring out as an industry.

Key Takeaways

We asked Peter what he’d want listeners to remember from the conversation. His answer was clean: don’t forget the fundamentals of SEO.

“A lot of those will give you the 80/20 of showing up in LLMs,” he said. “A lot of the AI optimization principles evolved from SEO. A lot of them are SEO that’s just been repackaged. If you’re doing good SEO, you’ll definitely be off to a good start.”

This is a message that holds up across every guest we’ve spoken to this season. Getting back to first principles, like technically sound sites, authoritative content, and genuine brand presence, is not a retreat. It’s the actual strategy.

Tune Into the Full Conversation

This recap covers the highlights, but the full conversation goes deeper into crawl log analysis, visibility tracking tools and their limitations, prompt overlap, and what the next 12 months look like for SEO practitioners. Listen to the full episode on Get Discovered wherever you get your podcasts.

Subscribe to Get Discovered so you don’t miss future episodes with SEO experts and business leaders navigating the AI discovery challenge in real time.

You can also find Peter on LinkedIn or at his website, where he works with businesses on local, e-commerce, and technical SEO.

Make Sure AI Crawlers Can Actually Find You

Peter’s point about JavaScript and LLM visibility is one we hear consistently from guests, and it’s one we’ve built our product around. Most AI crawlers can’t execute JavaScript, which means if your content is JavaScript-rendered, it’s invisible to them by default.

Try Prerender.io for free and make sure your site is discoverable to ChatGPT, Perplexity, Claude, and every AI search platform your buyers are using.

How Toggl Is Adapting to the AI Era: Elizabeth Thorn, Head of Marketing at Toggl https://prerender.io/blog/podcast-how-toggl-is-adapting-to-the-ai-era/ Wed, 18 Feb 2026 15:16:46 +0000 AI is impacting every business department, but arguably, marketing has been one of the hardest hit. For Elizabeth Thorn, Head of Marketing at Toggl, a suite of productivity tools, this shift has been anything but gradual. Speaking with host Joe Walsh on the Get Discovered podcast, Elizabeth breaks down what marketing looks like now for her and her team, how they’re adapting to the AI era, and why SEO is more about brand management than ever before.

Watch the full episode below, or keep reading for a summary of the episode.

The Core Problem: You’ve Lost Control of Your Message

For the past 15 to 20 years, the SEO playbook was clear. Optimize your site, build authority, earn backlinks, and you could largely control how your brand showed up in search. That model is broken, Elizabeth says. Good SEO has always been about ecosystem marketing. But it’s different now.

AI models don’t just read your website. They synthesize information from Reddit threads, G2 reviews, YouTube teardowns, LinkedIn posts, third-party blogs, and more. If there are 50 Reddit forum posts or YouTube videos saying your customer service is poor, no amount of on-site optimization will fix how an LLM describes your brand.

“I think the misconception is that we have more control than we do,” Elizabeth said. “We’re used to controlling our message on our owned properties. That’s just not the case anymore.”

This is one of the fundamental challenges of AI search visibility that many marketing teams are still catching up to. The question isn’t just whether search engines can find your content. It’s what are they saying about you when they find it?

What Toggl Is Doing Differently With Marketing

Elizabeth joined Toggl in late 2023. In less than two years, the content strategy shifted dramatically. Here’s what changed:

1. They narrowed their keyword focus

Instead of chasing hundreds of terms across broad topic clusters, Toggl now focuses on what Elizabeth calls “tier one keywords.” These are high-intent terms that are actually tied to revenue.

The research behind those terms goes beyond traditional tools. They’re looking at subreddits, LinkedIn conversations, and customer interviews to understand the language their ICP actually uses, not just the terms that rank well in Ahrefs.

This is the kind of practitioner insight that aligns with AI SEO optimization best practices: AI search rewards content that genuinely matches how real people talk about their problems, not just how marketers have historically framed keywords.

2. They killed high-volume content production

Toggl used to publish 30–50 blog posts a month. That era is over. “We’d rather publish one phenomenal piece per month that gets cited everywhere than 12 mediocre posts,” Elizabeth said.

The research backs this up. A significant portion of the sites that show up in LLM citations rarely rank in Google’s top results, suggesting that volume doesn’t equal visibility in AI search; quality, citability, and authority do.

3. SEO became an ecosystem, not a silo

This might be the most important shift. At Toggl, the partnerships manager now sits in on SEO and content strategy calls. Paid, content, partnerships, and product marketing all coordinate around shared campaigns. SEO has always been holistic, but omnichannel SEO is taking precedence.

Why? Because how AI synthesizes your brand reputation depends on what’s being said about you everywhere, not just on your own site. Third-party mentions, influencer content, and review platforms all feed into the picture that an LLM builds of your brand.

“SEO isn’t just SEO in isolation anymore,” Elizabeth said. “It has to be integrated with our partnerships manager, our content manager, our product marketing… All of it.”

Metrics That Are Losing Meaning

Elizabeth and Joe also discuss metrics that are losing meaning—organic traffic being an obvious one. With the rise of zero-click search, users get answers without ever visiting your site. Traffic numbers that once signaled success can now be misleading, a topic we also explored in our conversation with Klaus-M. Schremser, cofounder of AI search tool OtterlyAI.

But Elizabeth highlighted something more actionable: Toggl built AI referral tracking into their CRM to actually measure what’s converting, not just what’s clicking. The result? About 50% of deals closed last quarter came from AI search. And they tend to be larger accounts (teams of 50+ users). Organic traffic dropped roughly 10% year-over-year, but the pipeline held steady.

That’s only visible if you have the tracking in place. For JavaScript-heavy SaaS sites, this starts at the foundation: making sure AI crawlers can actually access and index your content in the first place. Most AI crawlers can’t execute JavaScript, which means a significant portion of your site may be invisible to them by default. Winning in the AI search era may mean shifting focus from vanity metrics to technical foundations.
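A lightweight version of that tracking can start with referrer classification. The sketch below is illustrative, not Toggl’s actual implementation; the hostnames are common AI assistant domains:

```python
# Classify a visit as AI-referred based on the Referer hostname.
# Hostnames are illustrative of common AI assistants; extend for your stack.
from urllib.parse import urlparse

AI_HOSTS = {
    "chatgpt.com", "chat.openai.com",
    "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def is_ai_referral(referer: str) -> bool:
    host = (urlparse(referer).hostname or "").lower()
    return host in AI_HOSTS

# Example: tag the session in your CRM/analytics when this returns True.
print(is_ai_referral("https://chatgpt.com/"))  # True
```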

Common Misconceptions About How AI Impacts SEO

Elizabeth then dug into common misconceptions about AI visibility today.

Misconception 1: AI search is the same as traditional SEO.

It isn’t, at least not entirely. The fundamentals of quality content and technical foundations still apply, but the playbook has changed. Technical factors like structured data, semantic markup, and dynamic rendering are more important than they were before. And increasingly, factors like brand management, new channel monitoring, and cross-functional collaboration are fundamental to a strong AEO presence. The content checklist that worked in 2022 won’t produce the same results in 2026.

Misconception 2: If AI recommends you, it’s a quality lead.

Not always. Elizabeth referenced a point from SEO agency Growth Plays, which noted that a significant portion of leads coming in via AI search are unqualified because users are blindly trusting what LLMs tell them without doing their own research. AI answers can hallucinate pricing, misrepresent features, and recommend products based on outdated information.

This is exactly why making sure your content is crawlable and up-to-date for AI matters so much. If an AI model is pulling cached or incomplete information about your product, it affects the quality of leads coming through the door.

What Gets Worse Before It Gets Better

When asked what part of the discoverability landscape she expects to deteriorate first, Elizabeth didn’t point to a technical factor. She pointed to internal panic.

“There’s a lot of time being wasted on things we just don’t have answers to,” she said. “Teams are going to waste resources trying to game a system that isn’t fully defined yet.”

Her advice: form a hypothesis, take three concrete next steps, communicate proactively with leadership, and adjust as you learn. Don’t wait for a perfect playbook that doesn’t exist yet.

The teams that will win in AI search are the ones who get back to first principles now: building genuine authority, creating content that answers real questions, and making sure their technical foundations are solid enough to be discovered in the first place.

The Takeaway

AI discovery isn’t about hacking a new algorithm. It’s about building genuine brand authority consistently across every surface where your audience is paying attention.

That means great content. It means a product that actually delights users. It means showing up in places you don’t have complete control: reviews, communities, and third-party publications. And it also means making sure the technical side is airtight. If AI crawlers can’t access your content, none of the above matters.

“You have to earn people’s trust in a really authentic way,” Elizabeth said. “And I think that’s actually more exciting for marketers. It gets us back to what our jobs are actually about.”

Tune Into the Full Conversation

This recap only scratches the surface. Elizabeth goes deeper on ecosystem marketing strategy, how Toggl tracks AI-driven pipeline, and what she’s watching closely over the next 12–18 months. Listen to the full episode on Get Discovered to dive deeper into AI, SEO, and online discoverability.

Subscribe wherever you get your podcasts so you don’t miss future episodes with business leaders navigating these AI discovery challenges in real time.

Make Sure AI Can Actually Find You

If you’re rethinking your AI visibility strategy, the technical foundation matters just as much as the content strategy. Most AI crawlers can’t execute JavaScript, meaning your product pages, blog posts, and key landing pages may not be indexed in AI search at all.

Try Prerender.io for free to make sure your content is visible to ChatGPT, Perplexity, Claude, and every other AI search platform your buyers are using.

What Gets Cited in LLMs: A Conversation with Noah Greenberg from Stacker https://prerender.io/blog/podcast-noah-greenberg-stacker/ Wed, 11 Feb 2026 13:16:46 +0000 https://prerender.io/?p=7238 In this episode of Get Discovered, we sat down with Noah Greenberg, founder and CEO of Stacker. Noah brings a fresh, unique perspective on how AI is impacting the content marketing space, and shares his thoughts on what brands need to do to stay visible.

Watch Noah’s full conversation with us here, or read on for the SparkNotes version.

Why No One Can Track Attribution Anymore

We start each Season 2 episode by asking guests to share one thing they learned this week in AI news. Noah kicked off our conversation by highlighting Kevin Indig’s piece “The Great Decoupling,” a data-driven analysis that exposes a paradox rattling marketing teams everywhere: clicks are declining, but leads are increasing at statistically significant rates.

This isn’t a minor shift. It’s the collapse of the attribution model that’s defined digital marketing for a decade.

“We’ve lived in this world with Google where the number of clicks you get equals the amount of business you get,” Noah explained. “But we’re entering this new world where maybe someone searches on ChatGPT and sees a brand mentioned, but they don’t click through to you. Maybe they go to Google and click there, so Google gets the attribution. Or maybe they go directly to your website, and we all know how accurate direct traffic can be in analytics.”

This attribution crisis is a recurring theme throughout Season 2. “I genuinely don’t know how to measure attribution anymore,” host and Prerender.io CEO Joe Walsh admitted in our episode with Klaus Schremser from Otterly AI. It’s not just a technical problem: it’s an existential one for teams trying to prove ROI and make strategic decisions.

When we asked Noah how widespread this understanding is across the industry, he didn’t sugarcoat it: “If we were in a baseball game where the ninth inning is when everyone realizes there’s more than just trackable clicks, I would say we’re in the third inning at best.”

Translation: most marketing teams are still optimizing for a world that no longer exists.

How Your Content Strategy Should Change Because of AI

For now, one of the biggest mistakes Noah sees brands making is being stuck in a mindset that worked for the past decade: creating content meant to convert. “A lot of brands are still building content where when you read that article, you’ll say, ‘I think I’d like to buy this product,'” Noah said. “What we’ll call content marketing or demand gen content.”

But in his opinion, LLMs have made this type of content largely ineffective. When someone searches for “types of shoes I need for running a marathon,” they’re no longer finding brand content. They’re getting an answer synthesized from archived internet content.

The brands winning today are taking a different approach: becoming the media companies their potential customers turn to. They’re telling stories about training regimens of marathon winners or documenting someone’s journey from casual runner to marathon runner. It’s about establishing authority and trust, not driving immediate conversions.

“While a dozen companies have been doing this for 10 years and have nailed it,” Noah noted, “the volume of marketers realizing this is a strong play has snowballed in the past 24 months.”

What Type of Content Actually Gets Cited in LLMs

When it comes to showing up in AI-powered search results, Noah emphasized two consistent patterns over the past six to nine months:

  1. Uniqueness matters. If there are thousands of pieces of content saying the same thing, LLMs will just give the answer without sourcing anyone. But if you produce a brand new perspective—through subject matter experts, survey data, or proprietary first-party data—that’s what gets cited.
  2. Data is the cheat code. Whether it’s Redfin publishing home price data or Apollo sharing research on the best time to email a CEO, content with statistics consistently gets cited with proper attribution.

But Noah was careful to add an important caveat: “It changes so frequently right now that I don’t feel comfortable making any blanket statements.” His advice? Do your own primary research. Go into these LLMs, ask queries in your space, and see who’s actually being cited.

From Noah’s perspective, these changes may have some positive benefits. In his mind, AI search might actually level the playing field for newer brands.

“On Google, if you were a startup without good domain authority, it was going to be seven months until you hoped to rank,” Noah explained. “Whereas with these LLMs, you can put out a piece of content, and if it’s great and well-structured and unique, 24 hours later, you can start getting cited. It removes the moat that legacy brands had with SEO.”

Writers Won’t Be Replaced by AI

Despite early scares about AI replacing writers and editors, Noah doesn’t see this happening in brand journalism anytime soon. The economics are just different.

“If you’re a publisher, your business model is ad-supported. You need to make at least a penny more for every piece of content,” he said. “That’s not the case when you’re Salesforce thinking about making five or six great stories that make a potential customer more likely to buy when they’re thinking about a CRM.”

The goal for brands investing in content is to tell unique stories with much lower volume but much higher impact. And that requires human creativity, judgment, and storytelling ability.

The Biggest Misconception About AI Visibility

When we asked about the biggest misconception around AI visibility today, Noah brought it back to attribution—again.

“The amount of money being spent on these products isn’t going down,” he said. “There’s still the same pie, if not bigger. Someone might be suggested your product on ChatGPT, then walk into your store and buy, and you’re never going to know they found you on an LLM. But if you agree that five years from now the amount of searches on these platforms will be way bigger, and billions of decisions will be influenced, then you have to find a way to get behind this strategy. Even if you can’t track the clicks today.”

Looking ahead 12 to 18 months, Noah predicts that the rapid pace of change will actually become more challenging, not less.

“Over the past year, it was generally accepted that no one knew what they were doing with AEO and GEO—these acronyms didn’t even exist a year ago,” he said. “But you’re going to start having people who build credibility and set strategies in stone without acknowledging that the rules will be very different a year from now.”

His advice? If you’re setting an AI search strategy, build in the intention to come back to the table every three months and reassess what’s changed. This isn’t SEO, where the fundamentals remained relatively stable for 20 years.

Further reading: How to Optimize Your Website for AI Search – A Technical AI Optimization Guide

Key Takeaway: What Works Today May Not Tomorrow

Noah left us with one key takeaway: “You have to do your own primary research and recognize that what works today may or may not work six months from now.”

Unlike the early days of Google, where Matt Cutts publicly shared what you should and shouldn’t do to win on Google, OpenAI and Anthropic don’t provide that guidance. There are brilliant people doing primary research and sharing their findings, but every brand needs to test, learn, and adapt continuously.

The game has changed. The question is whether you’re willing to change with it.


To learn more about Stacker, connect with Noah on LinkedIn, or head over to Stacker’s website.

If you’re curious about unpacking the AI visibility crisis we’re all facing, follow the Get Discovered podcast on Spotify, Apple, or on our podcast page to tune into the conversation. We speak with business leaders, marketing experts, SEOs, and CEOs, just like Noah.

And if you’re looking for a solution to help ensure your content shows up on Google, ChatGPT, or Perplexity, Prerender.io is your go-to.

From SEO to GEO: A Conversation with Klaus Schremser from Otterly AI https://prerender.io/blog/podcast-klaus-schremser-otterlyai/ Wed, 04 Feb 2026 11:45:01 +0000 https://prerender.io/?p=6932 The rules of online discovery are being rewritten in real-time, and most businesses are still playing by the old playbook. Klaus Schremser, co-founder of Otterly AI, joins us for the first episode of Season Two of the Get Discovered podcast. In this inaugural episode, we dive into what he’s seeing on the front lines—both as someone helping companies navigate AI search and as a founder struggling with the same challenges.

This conversation is about identifying gaps, calling out misconceptions, and pinpointing what actually works when traditional metrics start falling apart.

Watch the full conversation, or read on for a summary.

The Market Is Shifting Faster Than Anyone Expected

Remember when everyone thought ChatGPT was going to completely dethrone Google? That narrative has gotten a lot more complicated. According to recent data from SimilarWeb, Gemini’s web traffic has jumped from around 6% to 22%, while ChatGPT dropped from 90% to 65%.

“Google is coming back. Or, comeback is maybe a little bit of a big word because Google is already so huge,” Klaus said. “But as ChatGPT picked up the chase and was really after them, now Google is trying to put some countermeasures into it.”

The takeaway? The market is too volatile for confident predictions. What works today might not work next month. And that uncertainty is exactly why businesses need to start learning the rules of this new game now, not later.

The False Comfort of Familiarity

One of the biggest problems Klaus sees with clients is that they’re bringing SEO assumptions into a completely different playing field.

“It should never have been named SEO. It should have been ‘Google optimization,’” Klaus says. “There was only one search engine that you really had to care for. Now businesses have to refocus on several AI engines.”

And it’s not just multiple platforms. Google itself now consists of three different approaches: traditional search with blue links, Gemini, and Google AI Overviews. The old playbook of ranking #1 on Google doesn’t guarantee visibility anymore.

The problem gets worse when teams start measuring their AI visibility. According to Klaus, many companies ask AI search engines about their own brand, get their brand back in the answer, and assume they’re doing great.

“We have a very nice saying in Austria. It translates badly, but I’m still saying it,” Klaus explained. “How you shout into the woods, it echoes back. If you’re asking for your brand, you get your brand back. So it gives you a false impression that you’re super visible.”

Further reading: why Popken Fashion Group uses Prerender.io for their AI visibility.

What SEO/GEO Metrics Still Matter? And What Metrics Don’t?

The metrics that drove SEO strategy for two decades are losing their predictive power. 

Klaus pointed to several metrics that teams still cling to, even though they don’t reflect the full reality.

What Matters Less 

  1. Page rank: Being #1 in traditional Google search doesn’t mean you’re ranking high in AI searches. The correlation has broken down.
  2. Organic traffic: The old way of looking at traffic data in Google Analytics doesn’t capture what’s actually happening. “We have 70,000 to 80,000 visitors per month and only 700 to 800 visits from ChatGPT where people clicked on the citation,” Klaus said. “However, there are studies that say this is actually only 1% of the people who had a conversation on ChatGPT. So if you multiply it by 100, you come to 80,000 people discussing topics that are relevant for [us].”
  3. Conversion rates: Attribution is messier than ever. Klaus shared his own experience: “I bought my electric vehicle recently. I didn’t buy it over AI searches—ChatGPT didn’t give me a buy link—but I researched it. When I had my three favorite ones, I only took the ones that ChatGPT referred me to. Then I just opened the browser and typed in the brand name. The conversion is gone.”

So what should teams focus on instead?

  1. Brand mentions over URL rankings
  2. Citation frequency, or how often your content is being pulled into AI responses
  3. Share of voice across different prompt categories
  4. AI crawler traffic, not just traffic from human web visitors. (Prerender.io can help you track this; a minimal log-parsing sketch follows below.)
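
On that last point, AI crawler visits are already recorded in your server logs. Here’s a minimal sketch that counts them; the crawler names are publicly documented user-agent substrings, but the log path and format are assumptions about your own setup.

```javascript
// Minimal sketch: count hits from known AI crawlers in an access log.
// The user-agent substrings below are documented crawler names;
// "access.log" is an assumed path—point it at your real log file.
const fs = require('fs');

const AI_CRAWLERS = ['GPTBot', 'OAI-SearchBot', 'ClaudeBot', 'PerplexityBot'];

const counts = Object.fromEntries(AI_CRAWLERS.map((name) => [name, 0]));

for (const line of fs.readFileSync('access.log', 'utf8').split('\n')) {
  for (const name of AI_CRAWLERS) {
    if (line.includes(name)) counts[name]++;
  }
}

console.log(counts); // e.g., { GPTBot: 412, ClaudeBot: 87, ... }
```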

Further reading: SEO vs. GEO vs. AIO: Understanding the New Search Landscape

A Hack for LLM Visibility: Wikipedia 

Getting onto Wikipedia isn’t easy for a small startup. Most entries get deleted quickly. But Klaus and his team made it happen. And it had an immediate impact.

“AI confused us with another company that has a similar name,” Klaus explained. “We hired a consultant to get us onto Wikipedia. Since then, AI better understood what we are because it always looks to Wikipedia—not first, but very often.”

Cost? $250 plus the effort of working with the consultant and securing the news coverage to support the entry. For a bootstrap startup, that’s a bargain compared to traditional marketing channels.

The Problem with Attribution

“I’m legitimately clueless about how we do attribution now,” host Joe Walsh admitted during the conversation. “It’s not that traditional metrics aren’t relevant anymore, but I think it’s something where we need to figure out how we fill out more detail in the picture.”

Klaus agreed: “The conversion rate is the biggest problem because there is not really a specific conversion rate replacement when you talk about AI search. The only one is someone who clicked on the citation links and came to my website, but that only covers 1% of users.”

This is the uncomfortable truth: we’re flying blind on some of the most fundamental business questions. How do we measure ROI? How do we attribute the investments we’re making into content? How do we know what’s working?

The answers aren’t clear yet. But waiting for perfect clarity means ceding ground to competitors who are learning by doing.

Why JavaScript Rendering is Crucial for AI Search Visibility

Here’s something most teams aren’t thinking about: AI crawlers don’t render JavaScript the way Google does.

“OpenAI has around 100 partnerships with huge media companies,” Klaus pointed out. “But when it comes to crawling your site, AI is not good at rendering JavaScript. Software solutions like Prerender.io come into play, where you should think about pre-rendering and making it easily accessible for AI crawlers to read your content.”

And it’s not getting better anytime soon. “These are companies whose profitability is nonexistent,” Joe added. “The idea that they’re going to add compute resources to start pre-rendering for their web search, which they’re already losing money on, is just not going to happen.”

Bottom line: if your site is JavaScript-heavy and you’re not pre-rendering, AI crawlers probably aren’t seeing your content properly. That’s a fixable problem, but only if you know it exists.
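
The fix pattern itself is simple. Below is a hand-rolled sketch of dynamic rendering in Express: bots get a pre-rendered snapshot, humans get the normal JavaScript app. In production you’d typically use an official middleware package instead; the bot regex, domain, and render-service URL here are all illustrative assumptions.

```javascript
// Minimal sketch of dynamic rendering in Express: serve pre-rendered
// HTML to known bots and the normal JavaScript app to everyone else.
// Bot list, domain, and render-service URL are illustrative. Node 18+
// is assumed for the global fetch.
const express = require('express');

const BOT_PATTERN = /GPTBot|OAI-SearchBot|ClaudeBot|PerplexityBot|bingbot/i;
const RENDER_SERVICE = 'https://render-service.example.com'; // assumption

const app = express();

app.use(async (req, res, next) => {
  const ua = req.headers['user-agent'] || '';
  if (!BOT_PATTERN.test(ua)) return next(); // humans get the JS app

  try {
    // Ask the rendering service for a fully rendered snapshot of this URL.
    const snapshot = await fetch(
      `${RENDER_SERVICE}/https://example.com${req.originalUrl}` // your domain
    );
    res.status(snapshot.status).send(await snapshot.text());
  } catch {
    next(); // fall back to the normal app if rendering fails
  }
});

app.use(express.static('dist')); // the client-side app for human visitors

app.listen(3000);
```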

What Gets Worse Before It Gets Better

Klaus didn’t sugarcoat the challenges ahead. 

  1. Smaller brands will struggle more. “What gets harder for smaller brands is that bigger brands automatically get better cited. You are working against billions of data sets.”
  2. Data accuracy remains questionable. Without access to something like “ChatGPT Search Console” showing real user prompts, teams are making educated guesses about what questions their customers are actually asking.
  3. The paid ads question. At some point very soon, advertising will become part of the AI search revenue model. What that looks like—and whether it destroys user trust—remains an open question.

“What happens if a competitor appears in an answer about you because they paid for it?” Klaus asked. “Can you trust the answer? I don’t know. We will see another disruption—or maybe eruption—of how AI visibility works.”

The Agentic Website Future

One of the most interesting ideas Klaus shared was about agentic websites: sites that don’t just serve static content but actually communicate with AI agents.

“Imagine pre-render as the middleware between the user or the agent coming to your website,” Klaus suggested. “You’re not just giving them a pre-rendered website, but you’re actually able to talk to the agent that is visiting you.”

Picture this: an AI agent from ChatGPT comes to your surfboard shop looking for the best board for a specific user. Instead of just scraping static data, your site acts as an agent itself—engaging in a conversation, understanding the specific needs, and making a case for why your product is the right fit.

“Your websites become agentic websites. They can talk back,” Klaus said. “If your website becomes an agent and talks back to the agent that comes from OpenAI and says, ‘Hey, we have the best surfboards in the world, especially for Joe because Joe has this right leg in front and does these crazy tricks on the waves,’ then the seller says, ‘Yeah, you’re right. We are buying with you.'”

Given how fast things are moving, it’s closer than we think.

Klaus’ Key Takeaway? Start Now, Even If You’re Not Ready

If there’s one message Klaus hammered home throughout the conversation, it’s this: waiting for perfect clarity is a losing strategy.

“Start early or start now. Learn and stay ahead of the game,” Klaus said. “Even if you can’t commit the full budget or time, write an article, try something, look into the data, try some tools, do some smaller initiatives, and see if it changes somehow.”

The reality is that AI search “is unfortunately harder than SEO,” as Klaus put it. But the alternative—ignoring it until you’re forced to pay attention—is worse.

“The best time to plant a tree was yesterday,” Joe Walsh said. “The second best time to plant a tree is today.”

Klaus agreed: “Absolutely. It will come. Or, it is here.”


Want to learn more about AI search visibility? Subscribe to the Get Discovered podcast for more conversations.

To connect with Klaus, find him on LinkedIn, where he regularly shares insights about what’s working in AI search. Or check out Otterly AI for a free trial to see how your brand actually shows up across AI platforms.

And if you’re running a JavaScript-heavy site, make sure those AI crawlers can actually read your content with a solution like Prerender.io.

How to Fix Disapproved Google Ads: The Hidden JavaScript Rendering Problem https://prerender.io/blog/how-to-fix-disapproved-google-ads/ Wed, 28 Jan 2026 07:03:03 +0000 https://prerender.io/?p=6883 If your Google Ads keep getting disapproved, you’re probably familiar with messages like “destination not working” and “Low Quality Score.”

Most teams treat these as marketing or policy issues. They rewrite copy, tweak layouts, compress images, or resubmit ads, only to get their Google Ads disapproved again. But in many cases, the issue isn’t your messaging.

Google Ads reviews landing pages using a crawler that doesn’t reliably execute JavaScript. If your main content isn’t visible at the moment the crawler fetches the page, Google Ads may flag the page as broken, even if it works perfectly for users.

On smaller sites, this might affect a single landing page. On larger JavaScript-heavy sites (such as marketplaces with thousands of product pages), the same issue can affect many URLs simultaneously, across multiple campaigns or languages.

Let’s dive deeper into how to solve disapproved Google Ads and why JavaScript rendering is often the root cause behind repeated ad rejections.

Why Google Ads Disapproves JS Websites

Google Ads doesn’t evaluate landing pages the way humans or even Google Search’s crawler, Googlebot, does. When you submit an ad, Google Ads uses its own crawler (called AdsBot) to fetch and inspect your landing page.

At a high level, Google Ads checks:

  • Accessibility: can the page be reliably fetched?
  • Visibility: is the main content immediately visible?
  • Usability: does the page appear stable, fast, and functional?
  • Relevance: does the page match what the ad promises?

If any of these checks fail, your Google Ad could be disapproved or restricted, and you may see an error message like this one:

Google Ads destination not working

This is not a judgment of your offer, copy, or design; it is a technical reliability check.

The Rendering Timing Problem on JavaScript Sites

JavaScript-heavy sites often don’t show meaningful content at the exact moment Google Ads fetches the page.

These websites, built with JavaScript frameworks like React, Vue, Angular, or Next.js, or with newer “vibe-coded” platforms like Lovable, often rely on client-side rendering. Instead of returning a complete page from the server, they ship a minimal HTML shell and rely on JavaScript to assemble the real content in the browser.

This setup is especially common on modern fashion retailers, marketplaces, and travel sites, where pages are generated dynamically based on products, availability, pricing, location, or language. Instead of a handful of static landing pages, these enterprise-scale sites often operate thousands of URLs that change frequently.

To users, this usually isn’t noticeable. Their browser runs the scripts, loads the data, and renders the page as expected.

Google Ads, however, doesn’t experience your site the way a user does. Its crawlers don’t scroll, wait for long-running JavaScript execution, or interact with your site. They evaluate only what’s immediately available when the page is fetched.

When Google Ads reviews a large catalogue, multi-language, or campaign-driven site, it fetches many different URLs independently and often at unpredictable times. If the main content isn’t visible at that point, Google Ads may interpret the page as thin, broken, or unreliable, even if it looks perfect a few seconds later in a real browser.

What users see vs. what Google AdsBot sees when JS is not rendered

That visibility gap between what users see and what Google Ads crawlers see is one of the most common reasons JavaScript-heavy websites have their Google Ads disapproved.
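
You can check for this gap yourself. The sketch below fetches a page the way a non-rendering crawler does—without executing JavaScript—and looks for a phrase your ad promises. The URL and phrase are placeholders to swap for your own.

```javascript
// Minimal sketch: fetch a landing page without executing JavaScript and
// check whether key content exists in the raw HTML. URL and phrase are
// placeholders; Node 18+ is assumed for the global fetch.
const URL_TO_CHECK = 'https://example.com/landing-page';
const MUST_CONTAIN = 'Free 14-day trial'; // something your ad promises

async function checkRawHtml() {
  const res = await fetch(URL_TO_CHECK, {
    headers: { 'User-Agent': 'AdsBot-Google (+http://www.google.com/adsbot.html)' },
  });
  const html = await res.text();
  console.log(`Status ${res.status}, raw HTML length: ${html.length}`);
  console.log(
    html.includes(MUST_CONTAIN)
      ? 'Key content is present in the initial HTML.'
      : 'Key content is missing before JavaScript runs; crawlers may see a shell.'
  );
}

checkRawHtml();
```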

From Rendering Errors to Google Ads Quality Score Penalties

So, you might be wondering: how does Google Ads review landing pages?

When Google Ads can’t consistently fetch or interpret your landing page, that failure feeds into how your page is classified and scored across Google Ads systems. Rendering issues don’t live in isolation; they influence how trustworthy, usable, and reliable your landing page appears to Google’s automated reviewers.

On JavaScript-heavy sites with thousands of pages, like ecommerce, this inconsistency often doesn’t affect just one URL. The same rendering gap can surface across dozens or hundreds of campaign URLs, making ad approvals unpredictable and hard to troubleshoot.

This is where landing page experience and Quality Score come into play.

How Landing Page Experience and Quality Score Work Together for Google Ads

Google Ads doesn’t just check whether your page loads. It evaluates your landing page as part of a broader quality assessment that determines whether your ads are allowed to run, how often they appear, and how much you pay per click.

The assessment is indicated by the Quality Score, which is determined by three components:

  • Expected CTR: how likely users are to click.
  • Ad relevance: how closely your ad matches the query.
  • Landing page experience: what happens after the click.

Most teams focus on improving copy, keywords, and targeting to raise CTR and relevance. But ad disapprovals and delivery issues usually stem from the third component: landing page experience.

Landing page experience isn’t a judgment of how persuasive your page is. It’s an evaluation of whether Google’s systems believe the page is:

  • Accessible and fetchable.
  • Fast enough to load reliably.
  • Stable and usable.
  • Honest and technically trustworthy.

When the landing page experience drops, the Quality Score drops with it. That can lead to higher CPCs, reduced delivery, and, in more severe cases, disapproved Google Ads.

Where Performance, Speed, and Stability Signals Come In

Google Ads doesn’t publish an exact checklist of all the signals it uses for ad landing page optimization. However, it does care about many of the same fundamentals that Google Search measures through performance and usability metrics.

In technical SEO, these are commonly expressed as Core Web Vitals:

  • LCP (Largest Contentful Paint): how quickly the main content appears.
  • CLS (Cumulative Layout Shift): how stable the layout is.
  • INP (Interaction to Next Paint): how responsive the page feels.

While Google Ads doesn’t officially confirm that it uses Core Web Vitals for Google Ads in the same way Google Search does, it clearly evaluates similar signals: ad landing page speed, visual stability, and usability at load time.
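
If you want to see what your landing page reports for these signals, the browser exposes them through the standard PerformanceObserver API. Here’s a quick console sketch; production measurement usually goes through a library like web-vitals.

```javascript
// Minimal sketch: observe LCP and CLS in the browser console with the
// standard PerformanceObserver API.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1]; // latest LCP candidate
  console.log('LCP candidate (ms):', last.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value; // ignore user-caused shifts
  }
  console.log('Cumulative Layout Shift so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });
```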

This is where JavaScript rendering for Google Ads often becomes the breaking point.

Related: What Are Core Web Vitals and How to Improve Them

How JavaScript Rendering Issues Show Up in Google Ads

When something goes wrong during evaluation, Google Ads doesn’t call it a JavaScript rendering issue. Instead, it surfaces as a small set of vague, recurring errors inside the Ads interface.

Below are the most common ones and what they typically indicate.

1. “Destination not Working” or “Destination not Crawlable”

This doesn’t always mean your site is down. It often means Google Ads’ crawlers can’t reliably fetch or interpret your landing page.

Common triggers include:

  • Empty or near-empty initial HTML.
  • Core content loading only after JavaScript executes.
  • Blocked or delayed JS, CSS, or API requests.
  • Slow hydration.
  • Bot traffic blocked by WAF or security rules.

2. “Poor Landing Page Experience”

This is one of the most misunderstood warnings. It’s not just about design or conversion rate. It’s about whether Google Ads’ crawlers can clearly understand what the page is and what it offers.

It often appears when:

  • Main content isn’t visible at fetch time.
  • Headings or key details are missing.
  • Page structure or metadata is incomplete.
  • Important content loads too late.

3. “Low Quality Score”

Low Quality Score is often blamed on weak copy or poor UX, but landing page experience is one of its core components. And if Google Ads’ crawlers struggle to interpret your page consistently, that evaluation can suffer. Slow responses, unstable layouts, or missing content at load can all degrade Quality Score, regardless of how strong your copy or targeting is.

How to Fix Google Ads Disapprovals Caused by JavaScript

Once you recognize the pattern, the fix becomes clearer. You don’t need another round of copy tweaks, creative tests, or bid changes. You need Google Ads to consistently access the same content your users see.

Two Ways to Fix Rendering Issues That Break Google Ads

Approach 1: Server-Side Rendering (SSR)

  • How it works: The server sends fully rendered HTML for every request.
  • Pros: Native architectural solution; fully readable by Google Ads crawlers; strong long-term foundation.
  • Cons: Requires major frontend changes; expensive to implement; slows product velocity; adds long-term maintenance overhead; risky during migrations or replatforming.
  • Best for: Teams that are rebuilding their frontend anyway and are willing to invest heavily in architecture.

Approach 2: Prerendering with Prerender.io

  • How it works: Google Ads crawlers receive fully rendered HTML snapshots, while users keep the dynamic JavaScript experience.
  • Pros: Crawlers see complete content immediately; users keep fast, interactive pages; no full rebuild or architectural overhaul required; minimal engineering effort.
  • Cons: Not a full architectural rewrite.
  • Best for: Teams that need a fast, low-risk fix without rebuilding their frontend; enterprise sites catering to multiple markets and languages; frequently updated content.

Read: Explaining the Difference Between Prerendering and Other Rendering Options

How On Fixed Google Ads Disapprovals by Rendering Their JS Website with Prerender.io

On, the global performance footwear and apparel brand, ran into this exact issue during a major frontend migration. To users, everything looked fine: pages loaded, products displayed, and the experience felt modern and fast.

But the new frontend wasn’t consistently readable to web crawlers. And that created a cascading set of failures across their acquisition and visibility channels:

  • Repeated disapprovals across Google Shopping, display, and video ads.
  • Inconsistent ad approvals across product and category pages.
  • Indexing issues on key product pages.
  • Broken social previews.
  • Crawlers failing to process core content.

As the team put it, they weren’t just protecting traffic. They were protecting their entire digital marketing system. Once On integrated Prerender.io, their landing pages became reliably readable at evaluation time, and within days the disapprovals were resolved:

“Prerender.io directly solved our ad disapproval issues.”

Beyond resolving disapprovals, Prerender.io also improved On’s Google Image visibility, ensuring that bots could consistently access and process images across Google Search and Google Shopping Ads.

How Prerender.io improved On’s Google Image visibility

Read the full case study: How On, a Multi-Billion-Dollar Athletics Brand, Saved Millions

Are your Google Ads getting repeatedly disapproved? Talk to our team to see how Prerender.io fixes JavaScript rendering issues that block ad approvals.

Fix Google Ads Disapprovals Without Rebuilding Your Frontend

The issue On faced wasn’t unique, and neither was the fix.

Prerender.io helps Google Ads reliably evaluate your real landing page content at the moment it matters, without changing your frontend stack or user experience. Instead of rebuilding for server-side rendering, teams use Prerender.io to:

  • Make landing pages readable at evaluation time.
  • Eliminate “destination not working” and “poor landing page experience” errors.
  • Stabilize Quality Score and ad delivery.
  • Fix visibility issues across ads, search, and social previews.

It’s a fast, low-risk way to resolve Google Ads disapprovals caused by JavaScript and to prevent them from coming back.


Give Prerender.io a try today to fix your Google Ads disapprovals for good.

FAQs: Google Ads Disapprovals and JavaScript Rendering

1. Why Does Google Ads Disapprove JavaScript Websites?

Google Ads disapproves JavaScript websites when its systems can’t reliably evaluate the landing page at fetch time. If core content isn’t visible immediately, Google Ads may treat the page as unreliable even if it works fine for users. This is a core part of why Google Ads disapproves JS websites, and it’s the same technical visibility gap that Prerender.io exists to solve in search, crawling, and AI contexts.

2. How Does Prerender.io Fix Disapproved Google Ads?

Prerender.io fixes disapproved Google Ads by providing ad platform crawlers with fully rendered HTML content from JavaScript-heavy sites. When Google Ads reviews landing pages, JavaScript-generated content often appears blank to crawlers because it hasn’t loaded yet, leading to disapprovals. Prerender.io serves pre-rendered HTML to these crawlers so they can properly evaluate the content.

3. How Does Google Ads Review Landing Pages Differently Than Google Search?

Google Ads crawlers prioritize immediate accessibility and usability at fetch time, whereas Google Search crawlers are more patient and iterative. That said, if your landing pages depend heavily on client-side JavaScript, both crawlers can struggle to see them. Consequently, your Google Ads get disapproved, and your landing pages may be indexed late or incompletely in Google Search.

4. What Does “Google Ads Destination Not Working” Mean?

“Destination not working” usually means Google Ads can’t reliably fetch or interpret your landing page during evaluation. On JavaScript-heavy sites, this happens when the server response lacks complete HTML or when dynamic content loads too late. Rendering the page with Prerender.io, for instance, can solve this issue.

5. Does Prerender.io Also Help with AI Search and AI Crawlers?

Yes, the same rendering issues that affect Google Ads also affect AI-powered crawlers used by tools like ChatGPT, Perplexity, and other AI search systems. Prerender.io ensures that AI crawlers receive fully rendered, machine-readable HTML, making your content easier to process for both paid acquisition systems and AI-driven discovery.

Guide to Building AI Agent-Friendly Websites https://prerender.io/blog/how-to-build-ai-agent-friendly-websites/ Thu, 22 Jan 2026 07:38:00 +0000 https://prerender.io/?p=6735 If your website still can’t clearly communicate with AI agents, it’s effectively invisible to AI search engines like ChatGPT, Claude, and Gemini. As AI-driven discovery continues to gain ground on traditional search, building AI-agent-friendly websites is a core requirement to stay visible in today’s AI-first search landscape.

In this guide, we’ll walk through the shift toward the agentic web and the technical principles of web architecture needed to build an AI-agent-friendly website. By the end, you’ll understand how to create an AI-first website that supports natural language interfaces, structured data consumption, and programmatic web access—ensuring your site serves both human users and the growing ecosystem of AI agents and crawlers.

TL;DR: The Agentic Web Explained and How to Build One

  1. The agentic web is action-driven, not page-driven
    In the agentic web, autonomous AI agents don’t just index content—they understand context, evaluate options, and execute tasks on behalf of users. Websites must expose structure, intent, and actions in a way machines can reliably understand.
  2. AI-agent-friendly websites use dual-interface design
    This means maintaining a human-friendly UX while exposing machine-readable interfaces through semantic HTML, structured data, and programmatic web access (APIs, sitemaps, schemas).
  3. JavaScript sites must be prerendered to be visible to AI agents
    Most AI crawlers can’t execute JavaScript. Tools like Prerender.io serve static HTML snapshots to AI agents while preserving the full JavaScript experience for humans, making it the fastest path to AI visibility.

What Is the Agentic Web?

Agentic web explained - Prerender.io

The agentic web describes how websites are increasingly consumed not just by people in browsers, but by autonomous AI agents. Instead of loading pages and parsing markup loosely, these agents interact with websites as systems, expecting structured data, predictable behavior, and clearly defined capabilities.

To put it simply, unlike traditional web crawlers that index documents, AI agents understand context and execute tasks.

Traditional Search Engine Crawlers vs. Agentic Web AI Agents

The key difference between old search engine crawlers and AI agents lies in their autonomy:

  • A search engine crawler visits your site, indexes content, and leaves.
  • An AI agent might visit your site, understand your product catalog, compare prices with competitors, and complete a purchase on behalf of a user. All without that user ever seeing your homepage.

Microsoft’s chief of communications has described this agentic web transformation as a shift from information retrieval to an action-oriented framework. Users issue high-level goals, while agents autonomously plan, coordinate, and execute.

“We envision a world in which agents operate across individual, organizational, team and end-to-end business contexts. This emerging vision of the internet is an open agentic web, where AI agents make decisions and perform tasks on behalf of users or organizations.” – Frank X. Shaw

This shift to an agentic web means that today’s websites need to expose their structure, intent, and actions in a way that autonomous AI agents can reliably understand and use. That’s why dual-interface design is necessary. It separates human experience from machine access, allowing websites to serve both without compromise.

What’s Dual-Interface Design in the Agentic Web?

dual interface design for agentic web explained - Prerender.io

Dual-interface design is the website architectural approach that enables building AI-agent-friendly websites. The concept is straightforward: maintain rich visual experiences for humans while providing a parallel layer optimized for machine comprehension.

For human visitors, websites continue delivering intuitive navigation, compelling visuals, and engagement-focused experiences. For AI agents, the same website exposes content through semantic structure, explicit metadata, and programmatic web access points. Design leaders describe this transition as moving from UX (User Experience) to AX (Agent Experience).

Why Building AI Agent-Friendly Websites Matters Now

Building AI agent-friendly websites has become a business-critical priority. AI-powered systems now generate a significant share of all internet traffic, marking a fundamental shift in who, or what, consumes web content.

According to Imperva’s Bad Bot Report, automated traffic—much of it AI-driven—accounted for 51% of all internet traffic in 2025. This means that more than half of your website’s visitors aren’t human, and that number is expected to increase in 2026.

And the implications extend beyond raw traffic numbers. Gartner predicts that by 2028, 33% of enterprise software will include agentic AI capabilities, with 20% of customer interactions handled by AI agents rather than humans. These agents don’t just browse, they book appointments, compare products, execute purchases, and complete complex workflows without human intervention.

How to Build AI Agent-Friendly Websites (7 Elements of Agentic Web)

1. Semantic HTML5 and Structural Design

Semantic HTML5 is foundational to building AI-agent-friendly websites because it defines how machines understand structure, intent, and hierarchy. Semantic elements such as <header>, <nav>, <main>, <article>, <section>, and <footer> act as architectural signals, helping AI agents determine what content matters, how pages are organized, and where actions begin and end.

A page built with generic <div> and <span> tags provides little context to a machine. When your website uses proper tags, AI tools instantly identify the most important content and where to find it.

Adopting semantic design means labeling every major page section with appropriate HTML elements. It also means using descriptive text for interactive elements, like “Download Full Report” or “Subscribe to Newsletter” rather than “Click Here.”

For icon-only buttons or complex interactive elements, ARIA (Accessible Rich Internet Applications) attributes provide invisible hints that tell machines what each element does, without affecting visual design.
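
Put together, a hedged sketch of a semantically structured page might look like the snippet below. The names and copy are placeholders; the element choices follow the conventions above.

```html
<!-- Minimal sketch: the same page content, labeled so machines can parse it.
     All names and copy are placeholders. -->
<header>
  <nav aria-label="Primary">
    <a href="/pricing">Pricing</a>
    <a href="/docs">Docs</a>
  </nav>
</header>

<main>
  <article>
    <h1>Acme Analytics</h1>
    <section aria-labelledby="features">
      <h2 id="features">Features</h2>
      <p>Dashboards, alerts, and exports.</p>
    </section>
    <!-- Descriptive action text, plus an ARIA label for an icon-only button -->
    <a href="/report.pdf">Download Full Report</a>
    <button aria-label="Subscribe to Newsletter">✉</button>
  </article>
</main>

<footer>© Acme Inc.</footer>
```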

Semantic structure establishes the foundation, but AI agents also need to discover, traverse, and reason about content across your site. That requires treating site architecture itself as part of your AI agent interface.

2. Site Navigation, Accessibility, and Discoverability

Beyond semantic tags, site navigation and architecture determine whether AI agents can reliably discover and traverse your content. In an agentic web, navigation is not just a UX concern—it is part of your machine-readable interface.

AI systems don’t browse visually. They follow links, parse the document object model (DOM), and consume structured signals through sitemaps and APIs. Well-designed navigation enables programmatic web access, improving both AI agent consumption and traditional SEO. This is a core pillar of dual-interface design, where human navigation and machine discovery coexist without conflict.

You can make your site more navigable for AI agents by:

  • Using descriptive URLs
    Ensure your page URLs are readable and meaningful to AI agents, not cryptic strings with random IDs or parameters, which can look like spam and confuse AI agents. It’s better to include descriptive keywords in the URL (e.g., …/pricing or …/product/iphone15 instead of …/prod?id=1234).
  • Maintain an up-to-date XML sitemap
    An XML sitemap exposes your site’s structure as a machine-readable interface. Keeping it current allows AI agents and search engine crawlers to efficiently discover, prioritize, and revisit important pages. Any significant content change should be reflected in the sitemap to preserve reliable programmatic web access. You can use this free XML sitemap generator tool to get started.
  • Avoid frequent layout overhauls
    Structural stability matters for AI agents. Constant changes to navigation patterns, page hierarchy, or internal linking can break previously learned paths. When agents lose reliable access points, they may misinterpret content or temporarily drop your site from results while relearning the structure—an avoidable risk in dual-interface design.
  • Don’t hide critical content behind scripts or logins
    Many modern websites rely on heavy JavaScript, dynamic UI components, or multi-step interactions to reveal content. This creates barriers for AI agents, which often cannot execute complex client-side logic or navigate interactive flows.

Even with strong AI agent architecture, there’s a critical limitation: most AI crawlers cannot execute JavaScript. If your site depends on client-side rendering to expose core content, it may be invisible to AI agents—regardless of how well other machine-readable interfaces are implemented.

3. Prerendering JavaScript Sites for AI Visibility

Most AI crawlers cannot execute JavaScript. If your site is built with React, Angular, Vue.js, or any framework that relies on client-side rendering, AI agents will see a blank page. This means that your website (including product details, pricing, reviews, and all other content loaded via JavaScript) is completely invisible to ChatGPT, Claude, and Perplexity. As far as these systems are concerned, those pages don’t exist.

Dynamic rendering with Prerender.io solves this by converting JavaScript-heavy pages into static HTML that AI crawlers can read. When a crawler visits, Prerender.io detects it and serves a pre-rendered HTML snapshot, while human visitors continue receiving the full JavaScript experience—the perfect dual-interface design solution that an AI agent friendly website needs.

Prerender.io makes your site agentic-web ready

The benefits of using Prerender.io when building AI agent-friendly websites include:

  • AI crawler compatibility: major AI search crawlers, such as ChatGPT, Claude, and Perplexity, access your content as clean HTML.
  • Structured data visibility: schema markup embedded in JavaScript applications becomes accessible to AI systems.
  • Faster JS content crawling: crawlers receive instantly loadable HTML, improving crawl efficiency.
  • 100% JS content indexing: AI crawlers can easily see and pull any JavaScript-generated content and serve it to users.

Prerender.io works with all popular JavaScript frameworks and integrates at the CDN, web server, or application level. You don’t need to change your tech stack to adopt it: complete the three-step installation process and your website becomes AI-friendly. For JavaScript-heavy sites like e-commerce platforms, SaaS products, or single-page applications (SPAs), get started with Prerender.io now for free!

Learn how to ensure your site gets cited on ChatGPT and other AI powered search platforms.

4. Implementing Structured Data (Schema Markup)

If semantic HTML labels the rooms in your website “house,” structured data provides a detailed index of the contents. Schema markup uses standardized vocabulary from Schema.org that Google, Bing, ChatGPT, and other AI tools recognize.

Schema acts like a hidden label telling AI exactly what content represents—whether it’s an article, product, review, event, or FAQ. While visitors see only your meta title and description, AI agents see much more with proper schema markup.

Key schema types to consider:

  • Product: Pricing, features, availability, reviews
  • SoftwareApplication: Features, platforms, reviews (ideal for SaaS)
  • FAQPage and HowTo: Help AI agents pull answers directly into summaries
  • Event: Dates, times, locations, registration links
  • VideoObject and ImageObject: Metadata for media content
  • Dataset: Publication date, licensing, categories for research content

New to schema markup and need more tutorials on how to add it to your site? Check out this structured data guide.

Implementation uses JSON-LD scripts embedded in HTML. Verify existing schema using Google’s Rich Results Test.
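
As a hedged illustration, a Product schema block might look like the snippet below; every value is a placeholder to swap for your real product data.

```html
<!-- Minimal sketch: Product schema embedded as JSON-LD. All values are
     placeholders; validate the result with Google's Rich Results Test. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Acme Analytics Pro",
  "description": "Team analytics dashboard with alerts and exports.",
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "182"
  }
}
</script>
```

Structured data helps AI agents read and understand content. But what if you want them to interact directly with your services? That requires APIs.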

5. API-First Architecture for AI Agents

While structured data lets AI read content intelligently, APIs allow agents to interact with your functionality directly. Without APIs, agents must scrape your site, interpret inconsistent layouts, and guess at data meanings. With APIs, you provide direct access to product information, availability, pricing, and documentation. This approach is faster, more accurate, and more likely to be referenced by AI tools.

If you provide an API sharing your feature list, pricing tiers, or integrations, an AI agent responding to “What SaaS tools integrate with Notion?” could include your product—without the user visiting your homepage.

What to aim for:

  • JSON format: The standard data format AI tools prefer
  • Documentation: Clear guides explaining available data and request methods
  • Stability: APIs shouldn’t change without notice; use versioning
  • Security: Proper access controls managing permissions

Pro tip: document your APIs using OpenAPI specification to enable AI agents to dynamically discover endpoints and construct valid requests.
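
To make the list above concrete, here’s a minimal sketch of a versioned pricing endpoint in Express. The path, field names, and values are illustrative assumptions, not a standard.

```javascript
// Minimal sketch: a versioned, documented JSON endpoint an AI agent can
// consume directly instead of scraping pages. Paths and fields are
// illustrative.
const express = require('express');
const app = express();

app.get('/api/v1/pricing', (req, res) => {
  res.json({
    updatedAt: '2026-01-15', // a timestamp helps agents judge freshness
    tiers: [
      { name: 'Starter', pricePerMonthUsd: 0, seats: 3 },
      { name: 'Team', pricePerMonthUsd: 49, seats: 25 },
    ],
  });
});

app.listen(3000);
```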

For organizations fully embracing the agentic web, the Model Context Protocol (MCP) represents the emerging universal standard for AI agent integration. Introduced by Anthropic in 2024 and adopted by OpenAI, Google, and Microsoft, MCP provides a standardized framework for connecting AI systems with external data sources, like a USB-C port for AI applications.

APIs and structured data handle technical infrastructure, but the content itself matters too. How you write and present information affects how well AI systems interpret and use it.

6. Content Optimization for AI (Natural Language Processing and Media)

It’s not just the back-end structure that matters. The way you write affects how well AI can interpret content. AI agents use natural language processing to derive meaning, so creating conversational interfaces that boost clarity matters.

Plain, unambiguous language is easier for AI to parse and cite. This doesn’t mean dumbing things down; it means:

  • Avoiding unnecessary jargon
  • Explaining concepts clearly
  • Structuring content logically

For example, instead of “Event Details: Date: March 15, Time: 6 PM, Location: Main Hall,” use “Join us on March 15 at 6 PM in the Main Hall.” The second version is clearer and more likely to be correctly parsed by AI systems.

Clear content only helps if it’s accurate. AI agents that retrieve outdated information create problems for everyone, which brings us to data maintenance.

7. Ensuring Data Accuracy and Real-Time Updates

An often-overlooked aspect of building AI-friendly websites is maintaining accurate and up-to-date data on your site. AI agents may cache information or rely on data from their last visit. Stale content means AI might serve outdated information to users, so ensure you:

  • Sync with real-time databases: for volatile data such as inventory or pricing, retrieve from live databases whenever anyone, human or AI, requests it.
  • Provide timestamps: show last-updated dates on content pages or via API responses. AI agents can assess freshness, and some systems prioritize recent content.
  • Verify and monitor accuracy: audit exposed information periodically. Test queries using AI chatbots to verify your site returns accurate results.

Opening your site to AI agents creates opportunity, but also responsibility. As you make more data accessible, security and governance become critical.

Data Governance and Security for AI Agent Access

With greater data openness comes greater responsibility. Autonomous agents may interact with sensitive site areas when placing orders or accessing user data, so you need to put the following safeguards in place:

  • Authentication and authorization: APIs allowing transactions or non-public data access should require secure authentication (OAuth tokens, API keys). Distinguish between public endpoints and private actions.
  • Rate limiting and abuse prevention: AI agents operate faster than humans, risking system overload. Implement rate limits on APIs and consider bot detection for website traffic (see the sketch after this list).
  • Data encryption: use HTTPS everywhere. Encrypt stored data at rest.
  • Privacy compliance: if AI agents access personal data, ensure GDPR, CCPA, and other regulatory compliance. Document what data AI services can fetch.
  • MCP security considerations: security researchers have identified MCP concerns, including prompt injection risks, tool permission scope issues, and credential management challenges.
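
As promised above, here’s a minimal fixed-window rate-limiting sketch for an Express API. Real deployments usually use a shared store (such as Redis) and a maintained library like express-rate-limit; the limits and key scheme here are illustrative.

```javascript
// Minimal sketch: fixed-window rate limiting per client key in Express.
// Limits, window size, and the API-key header are illustrative assumptions.
const express = require('express');
const app = express();

const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 60;  // per key per window
const hits = new Map();   // key -> { count, windowStart }

app.use((req, res, next) => {
  const key = req.headers['x-api-key'] || req.ip; // assumption: API-key auth
  const now = Date.now();
  const entry = hits.get(key) || { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;          // new window: reset the counter
    entry.windowStart = now;
  }
  entry.count++;
  hits.set(key, entry);
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).json({ error: 'Rate limit exceeded' });
  }
  next();
});

app.listen(3000);
```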

These principles apply across industries, though implementation priorities vary by business model.

Final Thoughts on Building AI Agent-Friendly Websites

AI agents are already navigating the web and making decisions for users. Optimizing for them is no longer optional.

If your SEO foundations are solid, focus on integrating APIs, structured data, and clear architecture so agents can retrieve information easily. If not, start with the basics: clean structure, semantic markup, and fast performance.

For JavaScript-heavy sites, one step makes an immediate difference: dynamic rendering with Prerender.io. Most AI crawlers can’t execute JavaScript, which means your content may be invisible to ChatGPT, Claude, Perplexity, and other AI platforms, even if you rank well on Google.

Prerender.io serves pre-rendered HTML to AI crawlers while delivering the full JavaScript experience to humans. It’s the fastest path to AI visibility for modern web applications.

Just as mobile responsiveness became essential a decade ago, AI agent readiness will define the next generation of successful digital platforms. Organizations that prepare now will thrive in an increasingly AI-mediated economy.

Ready to make your website visible to AI agents? Start with a free Prerender.io account and see the impact firsthand.

AI-Friendly vs. AI-Agent Friendly Websites Explained https://prerender.io/blog/ai-friendly-vs-ai-agent-friendly-websites/ Tue, 20 Jan 2026 07:55:00 +0000 https://prerender.io/?p=6740 The rapid rise of AI has pushed marketers and SEO teams from startups to enterprise companies to make their websites more AI-friendly as part of broader AI search optimization efforts. But AI’s role in search has evolved. It’s no longer limited to understanding, extracting, and presenting content.

Today, AI systems are beginning to execute tasks on behalf of users, such as comparing options, signing up for a service, or completing a purchase. This shift demands a more advanced approach to website optimization. The question is no longer whether AI can see and read your content, but whether it can trust your site enough to act on it.

In this guide, we’ll discuss the characteristics and differences between “AI-friendly” and “AI-agent-friendly” websites, show real-world examples across industries, and explain how to position your site for this next evolution of web standards with a future-proof website strategy.

TL;DR: How AI-Friendly and AI-Agent Friendly Websites Differ

If you strip this topic to its essentials, the distinction comes down to AI comprehension vs. AI recommendation—and ultimately, execution:

  • AI-friendly websites are designed to be understood by AI systems. They rely on clean content structure, schema markup, structured data, crawlability, and properly rendered, machine-readable content.
  • AI-agent friendly websites are designed to be used autonomously and acted upon by AI systems. This includes APIs, structured endpoints, predictable interactions, and secure authentication that support autonomous agent navigation.

Many websites today fall somewhere in between—but lean heavily toward AI-friendly at best. Very few are intentionally designed for agent interactions or autonomous workflows that power modern AI recommendation engines.

What is an AI-Friendly Website?

An AI-friendly website is one that AI systems can reliably discover through crawling, render without errors, parse into structured meaning, and interpret consistently across time. It’s similar to being “SEO-friendly,” but with a stronger emphasis on machine-readable content and AI interpretation rather than keyword rankings alone.

These websites excel at appearing in search results, AI-generated summaries, and knowledge bases. So when someone asks AI about your product category or topic, an AI-friendly design improves the chances of your content being surfaced and correctly represented.

Core Characteristics of an AI-Friendly Website

If your website meets the following criteria, it’s well-positioned for AI search optimization and AI-based discovery.

1. Content Clarity and Structure

Well-written, logically organized information with proper heading hierarchy (H1, H2, H3) and scannable content that makes your key points obvious.

2. Optimized Metadata and Schema Markup

Metadata tells AI systems what your page is about. Schema markup tells them how content elements relate to each other. Product schema, FAQ schema, article schema, OG tags, and organization schema all help AI systems place your brand within the correct context.
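
As a minimal sketch of the idea (the company details are placeholders), here's how a page might build and inject Organization schema as JSON-LD with a few lines of TypeScript:

// Placeholder organization details; replace with your own.
const organizationSchema = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Co.",
  url: "https://www.example.com",
  logo: "https://www.example.com/logo.png",
  sameAs: ["https://www.linkedin.com/company/example"],
};

// Inject the JSON-LD block so crawlers and AI systems can read it.
const script = document.createElement("script");
script.type = "application/ld+json";
script.text = JSON.stringify(organizationSchema);
document.head.appendChild(script);

In practice, you'd usually render this server-side in the page head, since some crawlers won't execute the injection script, which is exactly the rendering gap discussed next.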

3. Rendered Content

Many AI crawlers have limited JavaScript execution capabilities. Ensuring content accessibility—whether through server-side rendering, static generation, or dynamic rendering solutions—helps guarantee that AI systems can extract and use your content.

4. Semantic HTML

Proper use of <article>, <section>, <nav>, and <aside> gives AI systems clear signals about content purpose and relationships.

Resource: Learn how to get your website indexed by AI search engines

What is an AI-Agent Friendly Website?

An AI-agent-friendly website is built to support autonomous agent navigation, interaction, authentication, and task execution.

Instead of just reading your content, AI agents become a new kind of traffic source, making decisions, comparing options, booking appointments, and completing tasks with minimal human intervention.

Core Characteristics of an AI-Agent Friendly Website

The following characteristics signal whether your website is AI-agent friendly.

1. Programmatic Access

APIs, webhooks, and machine-readable endpoints enable agents to interact with your functionality directly. This includes proper authentication systems to ensure security and verify legitimate agent access.
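
From the agent's side, programmatic access can be as simple as calling a documented endpoint. Here's a minimal sketch in TypeScript; the URL and credential are hypothetical:

// Hypothetical endpoint and key, shown only to illustrate the pattern.
async function checkAvailability(): Promise<unknown> {
  const response = await fetch("https://api.example.com/v1/availability?service=demo", {
    headers: { Authorization: "Bearer YOUR_API_KEY" }, // placeholder credential
  });
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json(); // machine-readable data the agent can act on
}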

2. Clear Action Pathways

Direct navigation and transaction flows with predictable UI patterns that agents can reliably follow and complete.

3. Simplified Interactions

Streamlined checkout processes, reduced form fields, and guest options reduce friction for AI-initiated transactions.

4. Confirmation Mechanisms

Machine-readable receipts, transaction IDs, status tracking, and structured error messages help agents verify completion or retry requests.
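
One way to picture this is as a typed response contract. The following TypeScript shapes are illustrative, not a standard:

// Illustrative shape for a machine-readable receipt.
interface TransactionReceipt {
  transactionId: string;                      // stable ID an agent can store and reference
  status: "confirmed" | "pending" | "failed"; // unambiguous machine-readable state
  statusUrl: string;                          // endpoint for follow-up status checks
  total: { amount: number; currency: string };
}

// Illustrative shape for a structured error.
interface StructuredError {
  code: string;       // machine-readable, e.g. "out_of_stock"
  message: string;    // human-readable summary
  retryable: boolean; // tells the agent whether retrying can help
}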

AI-Friendly vs. AI-Agent Friendly: Key Differences

Both website types serve different purposes and demand different technical approaches. Below is a comparison across the key areas.

In each area below, the AI-friendly description comes first and the AI-agent friendly description second:

  • Main goal: help AI understand what your site offers vs. help AI complete what users want (e.g., buying a product)
  • Core function: information extraction and comprehension vs. task execution and transaction completion
  • Operation type: read-only (GET requests) vs. read-write (POST, PUT, DELETE requests)
  • Technical focus: structured content, clarity, semantic markup, and rendering vs. APIs, documented endpoints, transaction workflows, and authentication
  • Funnel position: top-of-funnel (awareness, research) vs. bottom-of-funnel (decision, action)
  • User journey: AI extracts and presents your information elsewhere (LLM summaries, answers) vs. AI executes tasks directly on your platform (purchase, reservation, account change)
  • Success metrics: visibility, search impressions, and accurate representation in AI responses vs. task completion rate, successful transactions, conversion rate, and authentication success
  • Example sites: blogs, news, and content hubs vs. ecommerce stores, booking platforms, and SaaS applications
  • Business impact: discoverability and brand awareness vs. conversion and revenue generation

Real-World Examples of Both AI-Optimized Website Types

If you're wondering what these differences look like in the real world, here are examples of industries already operating in each category.

Examples of AI-Friendly Websites

The New York Times - an example of an AI-friendly website
  • Media and content sites like The New York Times or BBC News lead because they use structured article markup (publish dates, bylines, categories) and maintain a logical content hierarchy with consistent topic tagging. That clarity makes them frequent inclusions in AI news summaries and trusted sources for retrieval.
  • Educational platforms like Wikipedia and Khan Academy take a similar approach through structured and factual content supported by internal linking and clean categorization. Their predictable format makes them easy for AI systems to parse and cite.

Examples of AI-Agent Friendly Websites

Shopify - an example of an AI-agent friendly website
  • Ecommerce platforms like Shopify stores (with proper API implementation) and Amazon enable agent interaction through robust APIs for product search, cart, and checkout flows. Their structured product data—SKUs, pricing, inventory—gives AI recommendation engines reliable information (positioning them for autonomous purchasing as adoption grows).
  • SaaS platforms like Stripe and Zapier were built for automation, offering extensive API docs, webhooks for real-time events, and clear authentication flows. AI agents can configure services, trigger tasks, and manage accounts with little friction.
  • Travel and booking platforms are more agent-friendly by necessity. Since bookings are structured tasks, availability is time-sensitive, and APIs are common, AI agents perform well here.

How to Make Your Website AI and AI-Agent Friendly

Phase 1 – The Foundation: AI-Friendly Analysis and Optimization

Before you can think about AI agents taking actions on your site, you need to audit your current state. This starts with a simple diagnostic assessment:

  • Run your important pages through AI crawlers like ChatGPT’s browsing mode or Perplexity, and evaluate the summaries they generate
  • Use Google’s Rich Results Test to evaluate your structured data
  • Check your site with various AI-powered search tools to see how you’re represented

Once you’ve identified the gaps, content optimization becomes your priority:

  • Aim for clear, concise content with purpose
  • Use descriptive headings that make sense on their own
  • Implement comprehensive Schema.org markup for your organization, products, articles, and FAQs
  • Ensure your content hierarchy is logical and scannable

With content in a better place, turn your attention to your technical structure:

  • Use clean, semantic HTML to help AI systems parse your pages reliably
  • Fix accessibility issues and ensure both humans and AI can read your content
  • Optimize your site speed and Core Web Vitals, as performance influences how AI systems evaluate and prioritize websites
  • Finally, ensure your meta tags are complete and consistent, using JSON-LD for structured data wherever possible

At this stage, the goal is simply ensuring your website is easy to access, understand, summarize, and trust. That’s the baseline for everything that comes next.

Phase 2 – The Transformation: Layer in AI-Agent Capabilities

Once your foundation is solid, begin developing agent-friendly features. However, you’ll need to approach this strategically based on your business model, resources, and timeline.

The first step is your API development:

  • Start with read-only APIs that expose your product catalog or service availability (see the sketch after this list)
  • Then, expand to transactional APIs that support cart operations, bookings, and checkout processes
  • Document these endpoints thoroughly using OpenAPI standards
  • Provide sandbox environments for safe testing without affecting live data
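
As a starting point, a read-only catalog endpoint can be very small. Here's a minimal sketch in TypeScript with Express; the route, fields, and in-memory data are placeholders for your own catalog:

import express from "express";

const app = express();

// Placeholder in-memory catalog; in production this would query your database.
const catalog = [
  { sku: "TSHIRT-001", name: "Basic T-Shirt", price: 19.99, currency: "USD", inStock: true },
  { sku: "MUG-042", name: "Coffee Mug", price: 9.5, currency: "USD", inStock: false },
];

// Read-only endpoint: agents can GET the catalog but cannot modify anything.
app.get("/api/v1/products", (_req, res) => {
  res.json({ products: catalog, lastUpdated: new Date().toISOString() });
});

app.listen(3000);

The same patterns (versioned routes, consistent JSON shapes) then extend naturally to transactional endpoints, where OpenAPI documentation and sandbox environments become essential.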

Afterwards, your API infrastructure must include:

  • Robust authentication systems (OAuth 2.0, API keys, etc.) to verify legitimate AI agents
  • Rate limiting to prevent abuse and manage server load
  • Request validation and input sanitization (see the validation sketch after this list)
  • Clear terms of service for programmatic access
  • Audit logging to track all agent interactions
  • Compliance with data privacy regulations (GDPR, CCPA, etc.)
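
For the validation point, schema-validation libraries make this straightforward. Here's a minimal sketch using zod in TypeScript; the order payload shape is hypothetical:

import { z } from "zod";

// Hypothetical order payload schema: anything that doesn't match is rejected.
const OrderSchema = z.object({
  sku: z.string().min(1),
  quantity: z.number().int().positive().max(100),
  email: z.string().email(),
});

function handleOrder(payload: unknown) {
  const result = OrderSchema.safeParse(payload);
  if (!result.success) {
    // Structured, machine-readable validation errors an agent can act on.
    return { status: 400, errors: result.error.issues };
  }
  // result.data is now typed and validated.
  return { status: 202, order: result.data };
}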

As you build on these capabilities, simplify your conversion flows too:

  • Reduce form fields to the absolute minimum required
  • Offer guest checkout options wherever possible
  • Use predictable field names that AI systems can recognize and autofill reliably (e.g., “email” not “email_address_field”)
  • Also consider a single-page checkout experience to reduce the risk of navigation errors or abandoned workflows

Then, add natural language interfaces where appropriate:

  • Implement intelligent search that understands natural language queries
  • Deploy chatbots that can trigger backend actions, such as checking availability or initiating orders
  • For relevant markets, support voice-enabled interactions to further reduce friction

Finally, close the loop by creating machine-readable confirmations for every transaction:

  • Structure success pages with clear, machine-readable data
  • Send email confirmations that include JSON-LD markup for easy parsing
  • Provide status-tracking APIs that allow agents to follow up on orders or bookings automatically

And when things go wrong, as they sometimes will, create clear, actionable error messages that specify what went wrong and how to fix it; the sketch below shows one common format.
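
One common convention here is the RFC 7807 "problem details" format, served as application/problem+json. A minimal sketch in TypeScript; the extension fields beyond the standard ones are illustrative:

// An RFC 7807-style problem details object.
const problem = {
  type: "https://example.com/errors/out-of-stock", // illustrative error URI
  title: "Item out of stock",
  status: 409,
  detail: "The requested SKU has 0 units available.",
  // Extension members (illustrative): hints an agent can act on.
  retryable: false,
  alternatives: ["/api/v1/products?category=similar"],
};

// With Express, send it with the matching media type so agents can detect it:
// res.status(409).type("application/problem+json").json(problem);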

How Prerendering With Prerender.io Fixes Your AI Visibility Issues

Prerender.io makes your site AI agent friendly

Now, all of the optimizations above assume AI systems can already see your content in the first place. However, roughly 97% of websites use JavaScript, and many modern sites rely heavily on JavaScript frameworks to deliver fast, interactive experiences.

While excellent for users, these frameworks create rendering challenges for agents and crawlers that lack full JavaScript execution capabilities. When an AI bot visits your site, it may encounter a blank page, making your site invisible to AI systems regardless of how well you've optimized your content.

Thankfully, this is exactly the problem Prerender.io is designed to solve.

Prerender.io uses dynamic rendering to serve fully rendered HTML to AI bots and crawlers, while still delivering your JavaScript-powered experience to human users. In other words, AI systems get the same complete page a user would see, without needing to execute JavaScript themselves.

This process, together with the foundational optimizations above, makes your site genuinely AI-friendly: accurately understood and correctly represented in LLM-generated answers.

For example, one happy Prerender.io user saw measurable traffic improvements within 30 days of adding the ChatGPT user agent to their configuration. Read more client success stories here.

Spike in ChatGPT bot visits to a website after adopting Prerender.io

Beyond visibility, Prerender.io also creates a stable environment for AI systems—consistent rendering for programmatic access, faster load times, and reliable content structure for AI parsing.

This comprehensive approach to AI search optimization ensures that your content is not only visible but also properly understood and actionable by AI agents. Learn more about Prerender.io’s processes and benefits.

And the best part is that our dynamic rendering tool is plug-and-play, so there's no need to change your tech stack. Simply install Prerender.io and experience the impact it brings to your site.

Start Optimizing Your Website for the Next AI Web Evolution

The transition to AI agent-friendly websites is happening gradually. The technology for AI systems to navigate and transact on websites exists today, but widespread user adoption is still building.

However, as users become more comfortable delegating tasks to AI assistants and as AI systems prove their reliability, we’ll see a gradual shift toward greater autonomy. Businesses that don’t compromise on human experience and build the right (AI) infrastructure now will be positioned to capture this traffic as it scales.

But the first step towards a future-proof website strategy is ensuring your content is visible to crawlers by using Prerender.io to eliminate the technical barriers that prevent AI crawlers from accessing JS-heavy sites. With this foundation in place and a strategic roadmap, your site will be ready for the next evolution of web standards.

Try Prerender.io today! Sign up for a free 30-day trial.

FAQs – AI-Friendly and AI-Agent Friendly Websites

1. Do I Need to Optimize My Website for Both AI-Friendly and AI-Agent Friendly?

It depends on your business model. All sites should start with AI-friendly optimization, so AI systems can find and accurately present your content. For publishers and blogs, that may be enough. However, if you’re an ecommerce platform, SaaS application, or the like where conversions matter, you’ll eventually need agent-friendly capabilities.

2. What Is the Difference Between AI Agents and AI Assistants?

AI assistants complete tasks with human guidance, answering questions, suggesting actions, or managing workflows when prompted (e.g., Siri, Alexa, Google Assistant). AI agents, by contrast, act more autonomously: they can initiate actions, interact with services through APIs, and complete multi-step tasks with little to no human supervision.

3. Is It Safe to Let AI Agents Interact With My Website Through APIs?

Yes, it is safe to have AI agents interact with your website through APIs as long as you implement proper safeguards. The same security practices that protect human users—authentication, permissions, rate limits, and monitoring—apply to AI agents. With clear access controls, API-based interaction can be secure and auditable.

4. How Does Prerender.io Improve AI Search Visibility?

Prerender.io is a prerendering solution that converts JavaScript pages into crawlable HTML, making your content accessible for AI agents and crawlers. This ensures that your website can be found and accurately represented across all AI search platforms. Learn more about how Prerender.io boosts your AI search visibility.
