SALT.agency® https://salt.agency Technical SEO Agency Tue, 17 Mar 2026 09:13:14 +0000 en-GB hourly 1 https://wordpress.org/?v=6.0.11 https://salt.agency/wp-content/uploads/2018/08/tech-seo-reporting-logo-150x150.png SALT.agency® https://salt.agency 32 32 Content quality and strategy matter more than ever in the age of AI https://salt.agency/blog/ai-content-marketing-strategy/ Tue, 17 Mar 2026 09:07:45 +0000 https://salt.agency/?p=18879917 You’ve read all the articles on AI adoption. You’ve seen the data on changing research behaviours and zero click search. You’ve probably experimented with a few AI tools yourself. The question isn’t whether AI is transforming content marketing. You already know it is. The question is whether your strategy will survive or thrive in this new environment. Adapting successfully will require more than a few tactical tweaks to familiar practices. Yes, you […]

The post Content quality and strategy matter more than ever in the age of AI appeared first on SALT.agency®.

]]>
You’ve read all the articles on AI adoption. You’ve seen the data on changing research behaviours and zero click search. You’ve probably experimented with a few AI tools yourself.

The question isn’t whether AI is transforming content marketing. You already know it is. The question is whether your strategy will survive or thrive in this new environment.

Adapting successfully will require more than a few tactical tweaks to familiar practices. Yes, you will need to optimise your content for AI discovery. But long-term success requires a deeper rethink of how content marketing works and what it is supposed to achieve.

For CMOs, the key shift is simple:

Content marketing must move from high-volume publishing to strategic influence.

The quantity versus quality debate is dead

When consumers can get what they need from an AI-generated response, or an AI Overview right at the top of Google’s search results, there’s far less reason for them to click through to your content.

The best response is to publish high-quality content that AI can surface in relevant answers, and that also offers more value to those who click through.

Quality wins—because of course it does.

This isn’t just because in-depth, well-researched and highly creative content is better. Some of us have been shouting that message for decades. It’s that creating anything else has become a complete waste of time.

In the age of AI, one genuinely insightful, authoritative piece packed with useful information will be far more valuable than 100 generic listicles.

If there’s one thing generative AI systems like ChatGPT are extremely good at, it’s summarising readily available online information into easily digested roundups and lists. Your content can’t possibly win that fight.

Instead, you need to focus on what AI can’t do: demonstrating deeper expertise and authority. Your content needs to be so relevant, so original, so compelling and rich in information that AI summaries and responses become teasers, not replacements.

Effective content requires strategy

Your content doesn’t just have to be great. It has to be effective. And that takes strategy:

  • First, to research and understand in detail the evolving search behaviours and research patterns of your ideal customers
  • Second, to identify topics and craft content capable of answering their every query better than anyone else
  • And third, to distribute and amplify your content so the right people discover the right information at the right time.

Creative, original, high quality content marketing is now more valuable than ever.

The real challenge for CMOs

Declining web traffic means senior leadership are likely to re-evaluate the value of content marketing—and the budgets that go with it.

The best response is to focus less on tactics (content creation and distribution) and more on how your content can influence purchasing decisions and drive conversions.

Most business decisions boil down to three things:

  • Productivity (output)
  • Efficiency (cost/speed)
  • Strategy (how it achieves a return).

Without a documented content strategy, senior leadership can only view content in the context of efficiency and productivity—neglecting (or misunderstanding) where the real business value lies.

Content marketing ends up being viewed as a cost centre rather than a strategic asset. The real value generation happens in sales. Content marketing simply feeds the funnel. And because cost centres constantly need to justify their budgets, the challenge becomes how to get more for less.

Why many content marketing strategies fail

Think about your existing content marketing practices. Intentionally or otherwise, how much of your approach is geared towards quantity rather than quality?

  • Content calendars built around what’s easiest to write (or outsource)
  • KPIs focused on vanity metrics instead of outcomes
  • Workflows that prioritise speed over craft

According to the Content Marketing Institute (CMI), only 29% of B2B content marketers rate their strategies as either very or extremely effective. The remaining 71% cite various reasons for not ranking their strategies higher, with a lack of clear goals the most common.

Forget noisy top-of-funnel content that seeks to entertain or outrage as wide an audience as possible. A viral hit on social media isn’t a business outcome. A bunch of clicks isn’t a business outcome.

Your executive leadership isn’t interested in pageviews and social shares. They want to know how the content impacts pipeline and revenue.

Your content marketing strategy must have a clear focus on influencing consumer decisions, nurturing searchers into prospects and then into customers.

Content must influence decisions

More than ever before, content marketing needs to be laser-focused on what buyers actually need to know to make purchasing decisions.

Instead of creating top-of-funnel content designed to entertain the widest possible audience, organisations should focus on the questions buyers ask when evaluating solutions.

Content should help prospects move from curiosity to confidence.

That means addressing:

  • concerns and objections
  • practical use cases
  • real decision criteria.

Content becomes valuable when it reduces uncertainty and helps buyers move forward.

From cost centre to strategic asset

When content marketing is aligned with business outcomes, its role changes.

It stops being a publishing function. It becomes a strategic capability. Content can:

  • amplify brand visibility
  • demonstrate expertise
  • build authority in emerging markets
  • influence buyer behaviour long before sales engagement.

In this model, content marketing is not simply feeding leads into a sales funnel. It is shaping how buyers understand the market. And when that happens, content becomes a strategic investment rather than a marketing expense.

Is your content ready for AI discovery? Get in touch. Our expert team is ready to help.

The post Content quality and strategy matter more than ever in the age of AI appeared first on SALT.agency®.

]]>
The Non-Technical Marketer & SEO Overview to Google’s Universal Commerce Protocol https://salt.agency/blog/google-ucp-guide/ Fri, 13 Mar 2026 10:32:17 +0000 https://salt.agency/?p=18879912 Google’s Universal Commerce Protocol (UCP) is set to change how people discover and buy products online, which will fundamentally change how we approach SEO for eCommerce websites. Instead of sending users to a website to complete a purchase, Google now allows shoppers to buy products directly within its AI-powered experiences, including Search and Gemini. The […]

The post The Non-Technical Marketer & SEO Overview to Google’s Universal Commerce Protocol appeared first on SALT.agency®.

]]>
Google’s Universal Commerce Protocol (UCP) is set to change how people discover and buy products online, which will fundamentally change how we approach SEO for eCommerce websites.

Instead of sending users to a website to complete a purchase, Google now allows shoppers to buy products directly within its AI-powered experiences, including Search and Gemini. The entire purchase can happen inside Google’s interface.

For SEOs and ecommerce marketers, this means thinking beyond clicks and website traffic. The focus shifts towards enabling direct, simple purchases inside Google. Developers will manage the technical work, but marketers still play a key role. Your responsibility is to prepare the product data, make sure everything meets Google’s requirements, and help create a smooth journey from discovery to purchase.

Preparing Your Google Merchant Center

The first step is preparing your Google Merchant Center account. Before developers begin any technical work, your Merchant Center setup needs to be correct, because Google relies on this information to power the checkout experience.

Even though Google hosts the checkout, you are still the Merchant of Record. This means your store policies apply in the same way as they do on your own website. Customers need to see these policies clearly before they buy.

Start by reviewing your store policies and making sure they are easy to understand. Your return policy should clearly explain whether returns cost anything and how long customers have to return an item, such as a 30-day return window. You should also include a direct link to the full policy.

Customer support details must also be included. Google places a “Contact Merchant” link on the customer’s receipt, and this information comes directly from Merchant Center. For example, if someone buys a pair of shoes through Google’s checkout, they should be able to see straight away whether returning the item costs £5 and how they can contact your support team if there is a problem.

Optimising Your Product Feed for AI Commerce

After your policies are set up, the next step is improving your product feeds. UCP depends on structured product data, so you will need to include a few extra attributes.

Many marketers choose to do this through a supplemental feed in Merchant Center. This allows you to add new information without affecting your main shopping feed.

One important attribute is native_commerce. This works as the on-switch for UCP. When the value is set to TRUE, you are telling Google that the product can be purchased directly through its checkout system. If it is left blank or set to FALSE, the product will continue to send users to your website as normal.

Some products also require safety warnings or legal notices. These can be added using the consumer_notice attribute. If a product needs a warning, such as a choking hazard or a California Proposition 65 notice, it must be included so Google can show the warning clearly before purchase.

Another helpful attribute is merchant_item_id. Sometimes the stock-keeping unit (SKU) used in your marketing feed does not match the internal product ID used by your development team. This field allows you to link those identifiers together so the product remains consistent across systems.

For example, imagine you sell a wooden desk. In your supplemental feed you might set native_commerce to TRUE so the desk can be purchased through Google. If the product requires a Prop 65 warning about wood dust exposure, you would include that information in the consumer_notice field.
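To make the desk example concrete, here is a minimal Python sketch of how such a supplemental feed row could be assembled as CSV. The attribute names (native_commerce, consumer_notice, merchant_item_id) come from the description above, but the exact column headers, accepted values, and file format are assumptions that should be confirmed against Google’s Merchant Center documentation before use.

```python
import csv
import io

# Hypothetical supplemental feed row for the wooden desk example.
# The "id" must match the product id in your primary shopping feed.
rows = [
    {
        "id": "DESK-OAK-001",
        "native_commerce": "TRUE",  # opt this product into direct checkout
        "consumer_notice": (
            "WARNING: Drilling, sawing or sanding this product can expose "
            "you to wood dust (California Proposition 65)."
        ),
        "merchant_item_id": "wh-4482-oak",  # internal ID used by the dev team
    },
]

fieldnames = ["id", "native_commerce", "consumer_notice", "merchant_item_id"]
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
feed_csv = buffer.getvalue()
print(feed_csv)
```

In practice this file (or its spreadsheet equivalent) would be uploaded as a supplemental feed in Merchant Center, leaving the main shopping feed untouched.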

Identifying Products That Cannot Use UCP

Not every product can be sold through UCP. Some types of products are not supported and should be excluded.

Customised products are one example. Items that require personalisation, such as engraved mugs, cannot be handled through the standard checkout flow.

Subscriptions and digital goods are also excluded. This includes things like monthly coffee subscriptions, online courses, or in-game currency.

Age-restricted products such as alcohol, tobacco, or weapons must also remain outside the system.

For these products, the native_commerce field should be left blank or set to FALSE so they are not included in the direct checkout experience.
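The exclusion rules above can be expressed as a simple eligibility check. This is an illustrative sketch only, assuming each product record carries a plain category label; a real catalogue would need richer rules and should follow Google’s actual eligibility criteria.

```python
# Categories the article identifies as unsupported by UCP.
EXCLUDED_CATEGORIES = {"customised", "subscription", "digital", "age_restricted"}

def native_commerce_value(product: dict) -> str:
    """Return the native_commerce flag for a product record."""
    if product.get("category") in EXCLUDED_CATEGORIES:
        return "FALSE"  # keep the product out of direct checkout
    return "TRUE"

products = [
    {"sku": "MUG-ENGRAVED", "category": "customised"},
    {"sku": "COFFEE-SUB", "category": "subscription"},
    {"sku": "TSHIRT-RED", "category": "apparel"},
]

for p in products:
    p["native_commerce"] = native_commerce_value(p)
```

Here the engraved mug and the coffee subscription are flagged FALSE and continue to send users to your website, while the t-shirt is opted into direct checkout.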

Choosing the Right Checkout Approach

Once your product data is ready, you will need to decide which checkout approach your developers should implement. There are two options available.

The first is native checkout, which is the option Google recommends. With native checkout, Google controls the entire checkout screen. Your systems send data to Google through APIs, but the customer completes the purchase inside Google’s interface.

This creates a very quick and simple buying experience. A user might ask Gemini to find a red cotton t-shirt. Gemini shows several options, including your product. The user selects “Buy” and completes the purchase in a couple of steps using their saved Google Pay details, without visiting your website.

The second option is embedded checkout. In this case, Google loads your own checkout page inside an iframe within the Google interface. The user still feels like they are inside Google, but the checkout process runs on your website.

This option is useful if your products require more complex choices before purchase. For example, a company selling custom-built PCs might need buyers to choose RAM, graphics cards, storage, and other components before completing the order.

Keeping the Customer Relationship

One concern with third-party checkouts is losing the connection with the customer. By default, UCP works as a guest checkout, meaning the buyer may not create an account with your store.

Google offers a solution called Streamlined Linking. Developers can use OAuth 2.0 to allow customers to link their Google account with an account on your store during checkout.

This happens within the Google interface, so there are no extra redirects or complicated steps. The customer enjoys a fast checkout, while your business can still capture customer data, apply loyalty points, and provide personalised offers later.
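For the curious, the developer side of account linking follows the standard OAuth 2.0 authorization-code pattern. The sketch below shows only that generic pattern: the endpoint URL, client ID, and scope name are all placeholders, and the real integration must follow Google’s Streamlined Linking specification.

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint on the merchant's store.
AUTH_ENDPOINT = "https://example-store.com/oauth/authorize"

def build_linking_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build a standard OAuth 2.0 authorization-code request URL."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "account_linking",  # hypothetical scope name
        "state": state,              # CSRF-protection token
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = build_linking_url("google-checkout", "https://google.com/callback", "xyz123")
```

After the customer approves, the returned authorization code is exchanged for tokens, which is what lets your store recognise the Google checkout customer as an account holder.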

Managing the Post-Purchase Experience

Once customers begin purchasing through Google, your operations team needs to be ready to manage these orders properly.

Your business must enable Google Pay. This means registering in the Google Pay and Wallet Console and confirming that your payment service provider supports Google Pay transactions.

You also need a way to send order updates back to Google. Because the purchase happens inside Google, customers expect to receive updates there as well.

Your system should send automated webhooks whenever an order status changes. This includes when an order is created, shipped, or delivered. Updates should also be sent if an order is cancelled, refunded, or returned.

For example, when your warehouse prints a shipping label for a customer’s jacket, your system sends a webhook to Google. Google then notifies the buyer that their order from your brand has been shipped.

Preparing for AI-Driven Shopping

When your Merchant Center data is organised, eligible products are enabled for native commerce, and account linking works smoothly, your store becomes ready for this new type of shopping.

As Google’s AI continues to recommend products within search and conversational experiences, customers will move from discovery to purchase much more quickly. Businesses that prepare their systems and product data early will be in a stronger position to capture these fast buying moments.

The post The Non-Technical Marketer & SEO Overview to Google’s Universal Commerce Protocol appeared first on SALT.agency®.

]]>
If your content audit didn’t change decisions, it wasn’t an audit https://salt.agency/blog/content-audit/ Thu, 12 Mar 2026 06:20:21 +0000 https://salt.agency/?p=18879907 Most content audits fail. Not because the data is wrong, but because they don’t change decisions. They produce impressive spreadsheets yet often leave CMOs wondering what to actually do next. A content audit should do the opposite. It should clarify priorities, reduce uncertainty, and point to the actions that will improve performance. Yet in many […]

The post If your content audit didn’t change decisions, it wasn’t an audit appeared first on SALT.agency®.

]]>
Most content audits fail. Not because the data is wrong, but because they don’t change decisions. They produce impressive spreadsheets yet often leave CMOs wondering what to actually do next.

A content audit should do the opposite. It should clarify priorities, reduce uncertainty, and point to the actions that will improve performance.

Yet in many cases, decision-makers glance over the data, note a few key figures, and then close the file — never to reopen it again. The result is wasted time, effort, and budget for everyone involved.

The problem is simple: content audits aren’t just about collecting data. They’re about generating insights that lead to meaningful change. If an audit doesn’t help shape future decisions, it isn’t doing its job.

Here’s how to avoid the common pitfalls and ensure your content audit delivers the clarity needed to drive real progress.

What is a content audit?

A content audit is a review of the quality, performance, and relevance of all your content. It can be limited to a specific area on your website (such as the blog) or cover your entire content marketing initiative, including third-party platforms. The aim is to identify what to create, improve, keep, remove, or merge to improve performance.

Many content audits are data-driven and assess metrics such as page views, traffic, and backlinks. This can be useful for a top-level view of the best and worst-performing content, but it doesn’t tell the whole story.

Quantitative metrics help, but qualitative evaluation using a scorecard is often more useful. Content audits should be more than numbers, lists, and traffic data.

To create value, a content audit needs to suggest real change that influences future decisions for the better. The outcomes a CMO wants from a content audit are what needs to be done to reduce uncertainty, prioritise areas for investment, and protect long-term value.

Content audit frustrations

Have you ever reviewed a content audit that left you feeling overwhelmed, confused, and unsure of which direction to take next? It can feel more like you’re assessing a box-ticking exercise. It might look good but delivers little of real value.

These are the common frustrations CMOs can have with content audits that need to be addressed:

  • Receiving a large, incredibly detailed, and exhaustive content audit. Putting it together and trawling through the results can be time-consuming and tedious, and not particularly helpful if there are no clear outcomes or recommended actions
  • Being bombarded with data that’s descriptive but offers little direction. Figuring out how to interpret the data and where to start implementing change can be challenging.
  • Getting tactical findings and strategic outcomes that don’t align with business goals. Content audits with recommendations to boost vanity metrics without driving meaningful change offer little value and grow frustrations. If traffic and engagement (time on page and bounce rate) aren’t a big deal, why report on them when conversions are the focus?
  • No clear direction on where to invest. Being unable to retrieve clear pointers of where to focus content efforts and budget to cover gaps that help achieve business goals irritates CMOs.

What should strategic content analysis cover?

Professional content audit services cover more than just numbers and vanity metrics. Expert analysis should determine what content is working, what isn’t working as well as it could, and what you should get rid of.

A good audit cleans up a website or channel by identifying ineffective content and making it leaner when it’s not doing its job. Analysis should find any content debt and determine cost-effective, efficient actions.

Will it take more time to optimise or update content than to write something new from scratch? You need to determine whether old content still meets current standards for traditional search, AI visibility, and user experience, including structure, calls to action, subheadings, and linking.

Strategic content analysis isn’t just about reviewing what’s currently on site — it’s also about what’s missing. Identifying the gaps helps outline actions to support future decision-making.

A good audit should be the roadmap for where a business is right now with its content marketing, and for the planning required to get it where it needs to be. It should also include suggestions for things not previously considered, the hidden gems that can revive a strategy or breathe new life into a languishing social channel. The analysis and suggestions must be aligned with the business goals and KPIs to drive meaningful change.

Clear content audits create change

Clarity is key when delivering a content audit. How the results are communicated is just as important as what they contain, ensuring the outcomes are used effectively. Audits are often overly complicated when the CMO or decision-maker just wants clarity up front.

Not all CMOs will sift through the data, so a one-page synopsis or executive summary can help. Content audits should help address uncertainty, providing CMOs with the insight and data to decide what to do and the way forward. Explaining the probability of things happening, either good or bad, and opportunities for experimentation, helps. Ultimately, they make the final decision. But a content audit offers direction.

How can content audits introduce positive changes?

Having a templated audit process doesn’t always reflect the business’s goals or KPIs. To drive valuable change and align the aims of a content audit with the business, a kick-off meeting is required before the project starts. This should set out what the audit will look at, its aims, and check that it’s what the business needs.

Depending on the goals, you might discover you don’t need to do an in-depth five-day audit, for example. A one-day audit could suffice, saving time and money while still delivering suggested changes. But a content audit should never be ‘quick’, as it would likely be lacking and make generic suggestions just for the sake of changing things.

Content marketers have an obligation to put a timeline against the audit recommendations, too. Outlining quick wins to do immediately, things that can wait, and actions that are nice to have but not essential helps. Using a traffic-light system with colour-coded priorities can also introduce short, medium, and long-term changes that can be planned and budgeted for over time.

How can a website content audit protect your budget?

A content audit sets the benchmark for where a business is currently and provides CMOs with data on where to allocate their budget and where the audience is underserved. It aims to identify where current investment can be improved or optimised.

Good content is both an investment and an asset. But an ageing content asset can easily become a liability. You might have blog posts that still rank, drive traffic, generate leads, or advance the customer journey, which an audit would advise keeping.

However, you might also have older blog posts doing absolutely nothing. It might make more sense to remove content that no longer serves a purpose, or never did in the first place, and redirect that time and budget elsewhere. Good content audits provide the data you need to make informed budget decisions, along with a benchmark that helps you justify investment and clarify the direction you want to take.

Achieve clarity and confidence with a decisive content audit

The data in a content audit report isn’t the value — clarity is. This helps your business move forward with confidence, whether it involves adding, optimising, or removing content as part of your content strategy.

At SALT, our content experts work with you to assess your current site and conduct an audit that aligns with your business goals. Get in touch to discuss your content audit requirements.

The post If your content audit didn’t change decisions, it wasn’t an audit appeared first on SALT.agency®.

]]>
How Winedrops optimised growth around one surprising behaviour metric https://salt.agency/blog/growth-engine-one-metric-that-matters/ Mon, 09 Mar 2026 07:21:41 +0000 https://salt.agency/?p=18879899 Most marketing teams obsess over dozens of performance metrics. Customer acquisition cost. Click-through rates. Conversion rates. Lifetime value. But Jonny Inglis, co-founder of the wine platform Winedrops, says focusing on too many metrics can distract from the one thing that really drives growth. In a recent conversation for my Flipping the Playbook podcast, Jonny explains […]

The post How Winedrops optimised growth around one surprising behaviour metric appeared first on SALT.agency®.

]]>
Most marketing teams obsess over dozens of performance metrics. Customer acquisition cost. Click-through rates. Conversion rates. Lifetime value. But Jonny Inglis, co-founder of the wine platform Winedrops, says focusing on too many metrics can distract from the one thing that really drives growth.

In a recent conversation for my Flipping the Playbook podcast, Jonny explains how Winedrops built its marketing engine around a single behavioural signal: whether a customer places their first order in the first seven minutes of joining the platform.

“If a user doesn’t place an order in their first session, it’s very hard to change behaviour later,” he said.

That insight now drives everything from the company’s marketing campaigns to product design and customer onboarding.

Why scrappy marketing beats polished campaigns

One of the most surprising aspects of Winedrops’ growth strategy is its approach to advertising. Instead of investing in highly produced brand campaigns, the company leans heavily into scrappy, founder-led creative and user-generated style content.

The reason is simple: standing out in crowded feeds.

“Pattern interruption is one of the most important concepts,” Jonny said. “How do you create something that stands out against the noise?”

In the wine industry, many brands still rely on glossy imagery and traditional advertising formats. Winedrops deliberately takes the opposite approach, using informal content that feels more native to social platforms. That authenticity helps stop the scroll and capture attention in a space where polished marketing often blends into the background.

Testing at scale: the creative production engine

Another unconventional element of Winedrops’ marketing is the sheer volume of content it produces. The company launches hundreds of ads every week, testing dozens of creative concepts simultaneously.

“We might launch a couple of hundred ads and create 50 pieces of content every week,” Jonny said.

Rather than relying on internal opinions about what will resonate, the team lets the algorithms determine which creatives perform best. Performance data then feeds directly back into production. If ads focused on saving money drive the most efficient customer acquisition, the team produces more variations built around that theme. The result is a marketing engine driven by experimentation rather than assumptions.

The one metric that matters

Despite running large-scale experiments, Winedrops ultimately judges success using a single core metric: first-day purchase rate. This measures the percentage of new users who place an order during their first session on the app.

The company discovered that customers who complete this action almost always become high-value users.

“They go on to have good behaviour,” Jonny said. “So, we don’t need to focus as much on long-term retention metrics.”

Instead of trying to optimise dozens of downstream indicators, the team focuses on improving the conditions that lead to that first purchase. This includes increasing daily active users, improving onboarding completion, and ensuring the product experience delivers on marketing promises.

Why cheap customers are often the wrong customers

Many brands try to reduce customer acquisition costs by offering aggressive promotions. But Inglis warns that this strategy can attract the wrong kind of customer. A promotion such as a free bottle of wine may drive sign-ups, but it also attracts bargain hunters who may never become paying members.

“You end up acquiring customers who don’t understand the product or don’t actually want it,” he said.

Instead, Winedrops designs its offers to attract customers who already buy premium wine. Discounts still provide strong value, but they also pre-select customers more likely to become long-term members. The result is a higher quality customer base, even if the acquisition cost appears higher upfront.

Building growth through customer advocacy

Beyond paid advertising, Winedrops has also leaned heavily on customer advocacy. The company maintains direct communication with some of its most engaged customers through small WhatsApp communities. These groups allow the founders to gather feedback quickly, test new ideas, and identify problems before they escalate.

“It’s completely priceless for a founder,” Jonny said. “You get feedback faster than your customer service team.”

These communities also reinforce the trust and loyalty that drive referrals and word-of-mouth growth.

The power of focus

Another lesson from Winedrops’ growth journey is the value of concentrating on a single marketing channel. Despite constant opportunities to experiment with new platforms, the company has largely focused its efforts on Facebook advertising. The logic is simple: mastering one channel often produces better results than spreading resources across many.

“You can get to very large revenue by getting really good at one channel,” Jonny said.

For Winedrops, that focus has allowed the team to refine its creative testing system and optimise campaigns at scale.

What comes next for Winedrops

The company is now expanding internationally, with the United States representing its most significant growth opportunity. In the US market, the business operates under the name CaseDrops, and has already seen rapid early traction.

Jonny attributes this growth to a strong fit between the membership model and American consumer behaviour. Membership-driven retail concepts, such as Costco, are already widely accepted in the US. That familiarity makes the value proposition easier for customers to understand.

The bottom line

The marketing playbook used by Winedrops challenges several common assumptions:

  • Growth does not require perfectly polished campaigns
  • Testing hundreds of creative variations can outperform carefully planned campaigns
  • The most powerful growth lever may be a single behavioural signal.

For Winedrops, that signal is simple: getting customers to place their first order as quickly as possible. Everything else flows from there.

“If you understand the behaviour that creates long-term customers,” Jonny said, “you can design your marketing and product around that.”

Want to hear the full conversation?

Listen or watch the full discussion between Jonny Inglis and Reza Moaiandin on the Flipping the Playbook podcast on Spotify or Apple Podcasts to learn how Winedrops built a growth engine powered by experimentation, customer insight, and one metric that matters.

The post How Winedrops optimised growth around one surprising behaviour metric appeared first on SALT.agency®.

]]>
Content in the AI era: Why distribution and amplification matter more than ever https://salt.agency/blog/ai-content-amplification-strategy/ Thu, 05 Mar 2026 09:32:33 +0000 https://salt.agency/?p=18879890 AI is now embedded in how customers research, evaluate and compare brands, and CMOs are already restructuring teams, workflows and measurement models around that reality. Marketers are feeling the impact most acutely in content marketing and SEO. If there’s one thing just about everyone agrees on, it’s this: Original, high-quality content is essential. Great content has always been important, and some of us have been […]

The post Content in the AI era: Why distribution and amplification matter more than ever appeared first on SALT.agency®.

AI is now embedded in how customers research, evaluate and compare brands, and CMOs are already restructuring teams, workflows and measurement models around that reality. Marketers are feeling the impact most acutely in content marketing and SEO. If there’s one thing just about everyone agrees on, it’s this:

Original, high-quality content is essential.

Great content has always been important, and some of us have been banging the drum for quality over quantity since the dawn of content marketing as a discipline. Even so, it’s still nice to be vindicated at last. Now, in the AI era, quality is no longer just a differentiator. It is the minimum requirement for visibility.

But even publishing great content isn’t enough. The internet is littered with brilliant content no one ever sees (an eternal frustration of marketers everywhere who are desperate to prove ROI).

Instead, a successful content marketing strategy stands on three pillars:

  • Quality content: Original, valuable assets targeted to the needs of your audience
  • Distribution: Getting your content to existing audiences via owned channels (email, social media, etc.)
  • Amplification: Helping unknown prospects discover your content (search, paid advertising, etc.)

These pillars are even more critical in the age of AI, as consumers increasingly use GenAI platforms like ChatGPT and Perplexity to research brands and products.

While traditional content marketing focused on publishing and distribution, in the AI era, amplification plays a new role: ensuring content remains visible in AI-generated discovery environments where traffic and attribution are increasingly fragmented.

AI adoption is happening, but it’s uneven

Let’s look at AI adoption for a moment. Despite the noise, AI adoption is not yet universal. The UK Government’s 2026 AI Adoption Research found that only 16% of UK businesses are currently using AI, with a further 5% planning adoption. Nearly 80% are not yet actively using AI in operations.

Among adopters, the majority use AI for text generation and natural language processing, while only a small fraction use more advanced or agentic systems. Crucially, 84% of AI-using businesses still apply human oversight to outputs.

Globally, adoption is rising but remains far from saturated. The Microsoft AI Diffusion Report 2025 H2 estimates that around 16% of people worldwide use generative AI tools, with adoption growing steadily but unevenly across regions.

The implication for CMOs is clear: AI is influencing buyer behaviour, but organisational maturity remains mixed. Governance, amplification, and strategic integration are inconsistent. That creates both risk and opportunity.

AI search is reshaping discovery

Consumers increasingly use generative AI tools such as ChatGPT and Perplexity to research brands and products. Meanwhile, Google’s AI Overviews and generative search features are reshaping how information is surfaced.

Research from Bain & Company suggests that roughly 60% of searches now end without a click to a website. Zero-click search and AI-generated answers mean more buyers are gathering information without ever landing on your site. Traditional attribution models struggle to account for this shift.

But this does not mean your content is losing value. If an AI Overview or ChatGPT response references your content, you have still influenced perception. You may have shaped preference. You may have accelerated the sales cycle. You may have reduced future customer acquisition costs by building early trust. You just cannot measure that influence using legacy traffic metrics alone.

For CMOs, the challenge is not only adapting distribution and amplification strategies to AI search. It is evolving measurement models to reflect how influence now works.

Here’s how to do it:

Phase 1: Benchmark your AI visibility

Before you reinvent your strategy and start optimising your content, you need to examine and benchmark your brand’s current visibility in AI search.

Gather intelligence

  • Review your last 100 leads to see which (and how many) content pieces they touched before converting.
  • Ask your sales and support teams to identify common questions from prospects and customers on calls.
  • Interview typical customers to find out what questions, concerns, and objections they usually encounter at each stage of the buying journey.

Audit your AI brand mentions

  • Test your AI visibility by tracking how often your brand appears in AI-generated responses related to these customer queries, plus other industry or key topics.
  • Track your competitors’ AI mentions and compare the results.
  • Identify content gaps where your brand’s content and expertise should be represented but isn’t.

TIP: 

It’s not currently possible to capture data on genuine user queries, conversations and clicks in GenAI platforms like ChatGPT. However, monitoring tools like Peec AI and Otterly.ai run test queries to simulate your brand’s likely visibility in the various AI platforms.
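As a sketch of what this kind of monitoring produces, here is how you might count brand mentions across a batch of exported AI responses. The response text and brand names below are made-up placeholders, and in practice the monitoring tools mentioned above handle this at scale:

```python
import re
from collections import Counter

# Hypothetical sample of AI-generated answers exported from a monitoring tool.
responses = [
    "For technical SEO audits, agencies such as BrandA and BrandB are often recommended.",
    "BrandA offers strong enterprise support; BrandC is popular with startups.",
    "Many reviewers mention BrandB's reporting features.",
]

brands = ["BrandA", "BrandB", "BrandC"]

def mention_counts(texts, names):
    """Count how many responses mention each brand (whole-word, case-insensitive)."""
    counts = Counter()
    for name in names:
        pattern = re.compile(rf"\b{re.escape(name)}\b", re.IGNORECASE)
        counts[name] = sum(1 for text in texts if pattern.search(text))
    return counts

counts = mention_counts(responses, brands)
total = sum(counts.values())
share_of_voice = {name: counts[name] / total for name in brands}
print(share_of_voice)
```

Running the same calculation over competitor names gives the comparison baseline this phase calls for.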

Phase 2: Optimise your content

Your content needs to work for humans and machines simultaneously. Work with your SEO and content teams on the following:

Improve your content structure

  • Add schema markup to key pages. While invisible to LLMs, schema can still have an indirect influence via search and other third-party resources LLMs draw upon.
  • Structure the information and ideas within long form content as self-contained “chunks” that also work as standalone answers or snippets, ready for AI to extract.
  • Implement FAQ sections or adopt a Q&A format where possible, making it easier for AI systems to extract clear answers to specific user queries. Focus on structuring your Q&As as follow-up questions to emulate the way people interact with AI systems.
  • Ensure fast loading times and clean HTML structure for efficient AI crawling.
  • Organise your headings logically (H1, H2, H3) so that AI can correctly follow the hierarchy of information.
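To make the FAQ markup point concrete, a FAQPage snippet in schema.org’s JSON-LD vocabulary can be generated programmatically. The questions and answers below are placeholders; this is a minimal sketch, not a full markup strategy:

```python
import json

# Placeholder Q&A content for illustration.
faqs = [
    ("What is AI search optimisation?",
     "Structuring content so AI systems can find, interpret and reference it."),
    ("Does schema markup help?",
     "It gives machines an explicit, machine-readable map of your content."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

print(json.dumps(faq_schema, indent=2))
```

The printed JSON belongs inside a `<script type="application/ld+json">` tag in the page’s HTML.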

Amplify your authority signals

  • Develop consistent brand messaging across all your content.
  • Publish original proprietary research containing relevant insights AI systems can’t get from any other source—so they have to reference your content.
  • Build authority around topic clusters rather than targeting individual keywords.
  • Differentiate your content by including expert quotes and original insights exclusive to your brand.
  • Strengthen your content’s credibility with rigorous fact-checking to maintain a high level of factual accuracy backed by trusted authoritative sources.
  • Zero in on what your target audience cares about and what’s going to make them convert once they arrive on your page.

TIP: 

Look for ways to diversify your content. Experiment with different formats (video, audio, interactive) to see how each impacts AI discovery differently. You can also publish favourable product comparisons or create conversational content designed to mirror the kinds of questions and phrases people are likely to use in AI prompts.

Phase 3: Measure what matters in the AI era

Traditional metrics tell an incomplete story. The usual attribution models can’t track zero-click interactions in AI-generated answers and AI Overviews. However, you should still capture information on those website visitors who do click through.

Set up your tracking tools

  • Separate LLM referral traffic from general referral traffic in Google Analytics.
  • Implement Urchin Tracking Module (UTM) codes across all your content channels to identify the source of each click. (NB: While ChatGPT supports UTM tracking, not all AI platforms currently do.)
  • Monitor LLM bot crawling of your website using SALT’s new Cloudflare methodology.
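To illustrate the first two steps, here is a minimal Python sketch: one helper that appends UTM parameters to a URL, and one that flags referrals from known GenAI hosts. The hostname list is an illustrative assumption, not an exhaustive registry, and real referrer strings vary by platform:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def add_utm(url, source, medium, campaign):
    """Append UTM parameters so each click can be attributed to its channel."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = dict(parse_qsl(query))
    params.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))

# Illustrative (not exhaustive) list of GenAI referrer hostnames.
LLM_REFERRER_HOSTS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "www.perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_llm_referral(referrer_url):
    """Classify a referrer URL as LLM traffic for separate reporting."""
    host = urlsplit(referrer_url).netloc.lower()
    return host in LLM_REFERRER_HOSTS or any(
        host.endswith("." + known) for known in LLM_REFERRER_HOSTS
    )

print(add_utm("https://example.com/blog/post", "newsletter", "email", "ai_series"))
```

In Google Analytics, the same hostname list can drive a custom channel group that splits LLM referrals out of general referral traffic.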

Update your KPI dashboard

  • Track mentions in AI-generated responses to monitor how often your brand appears in answers relative to your competitors.
  • Expand share of voice metrics to include your presence in Google AI Overviews etc. for industry-relevant searches.
  • Assess the quality of AI-driven traffic by focusing on engagement metrics such as time-on-site and scroll depth, rather than volume.
  • Compare performance across AI models, such as ChatGPT, Claude, Perplexity, Copilot, and Google AI Overviews.

The way ahead

Content marketing in the AI age isn’t about abandoning everything that came before. It’s about adapting and extending your approach to incorporate AI search.

Distribution and amplification haven’t become less important; they’ve become more sophisticated.

The most effective content strategy is to invest in long-term, evergreen assets packed with value and topical depth. When properly optimised for AI discovery, high-quality, strategic content will only grow in value.

Ready to understand how AI systems are consuming your content?

If you’re ready to go beyond legacy metrics and finally understand when, where, and how AI systems consume your content, SALT.agency can help.

Our Cloudflare-powered AI visibility solution shows you exactly when generative AI bots access your site, how often your content is referenced in AI responses, and where your brand appears or gets omitted in AI-generated discovery. That insight feeds smarter amplification strategies, tighter governance, and measurable impact on pipeline and CAC.

If you want to see exactly when AI consumes your content, and turn that visibility into growth, get in touch.


Third party platforms are a risky foundation for your content moat https://salt.agency/blog/3rd-party-content-platforms/ Tue, 03 Mar 2026 14:43:16 +0000 https://salt.agency/?p=18879860 AI crawlers and foundation models are quickly becoming the primary way information is discovered, summarised, and acted upon, often before anyone even sees a traditional search result. For brands and publishers, the real question is no longer just, “Can people find my site in Google?” It is now: “Do AI systems have permission to access, […]

The post Third party platforms are a risky foundation for your content moat appeared first on SALT.agency®.

AI crawlers and foundation models are quickly becoming the primary way information is discovered, summarised, and acted upon, often before anyone even sees a traditional search result.

For brands and publishers, the real question is no longer just, “Can people find my site in Google?” It is now: “Do AI systems have permission to access, interpret, and use my content in the first place?” When you map which major third-party platforms allow or block specific AI bots, you start to see who is quietly influencing training data and who is slowly fading from the AI-mediated web.

At the same time, many creators and businesses have built their content and audience relationships on “rented land”. Social networks, creator platforms (like Medium and Substack), and SaaS tools sit between you and your audience. If those platforms decide to block, throttle, or aggressively license their data to AI vendors, your visibility, entity signals, and reach can shift overnight. This research turns a vague platform risk into something measurable.

For CMOs, this is not a technical nuance. It is a revenue question. If AI systems cannot access, interpret, or trust your core content, your brand risks disappearing at the very moment prospects are researching solutions – long before they reach your site, your sales team, or your paid channels.

You can clearly see which hosts support or restrict your presence in AI systems, how that affects entity building and audience ownership, and then make intentional infrastructure decisions rather than leaving them to platform policy changes.

So we did a little experiment

To explore this, we reviewed 34 platforms where you can host content outside your own domain. We cross-referenced them against 28 commonly identified AI user agents, alongside Common Crawl’s CCBot.

Analysis of third-party content platforms blocking AI crawlers

And here’s what we found

  • 13 websites block at least one AI crawler.
  • 7 block CCBot, and all 7 of these also block AI crawlers, creating a 100% overlap.
  • 21 sites do not block any AI crawlers.

Grouped by content site type:

Category    Sites in group  Avg AI bots blocked  Min  Max
Publishing  23              1.13                 0    7
Social      7               11.71                0    18
Tooling     4               0.25                 0    1

Across the third-party content platforms analysed, the most frequently blocked AI crawler was ClaudeBot (29.4%), followed by GPTBot (23.5%), Google-Extended (20.6%), Bytespider (20.6%), and CCBot (20.6%).
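A simplified version of the check behind this experiment can be run with Python’s standard-library robots.txt parser. The rules below are an invented sample, not any real platform’s file; the user-agent strings are the real bot tokens (note that Google-Extended is a training-control token rather than a fetching crawler, but it is declared in robots.txt the same way):

```python
from urllib import robotparser

# Invented sample robots.txt for illustration; a real audit would fetch
# each platform's live file from https://<host>/robots.txt.
SAMPLE_ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

AI_USER_AGENTS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider"]

parser = robotparser.RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# Any agent that cannot fetch an ordinary page is treated as blocked.
blocked = [agent for agent in AI_USER_AGENTS
           if not parser.can_fetch(agent, "https://example.com/any-page")]
print(blocked)
```

Repeating this across each platform and user agent produces exactly the per-category counts in the table above.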

RAG vs fine-tuning, and why blocking certain bots matters

Retrieval-augmented generation, or RAG, allows a language model to fetch information at query time. Training data, by contrast, is what the model originally learned from during pre-training or fine-tuning.

With RAG, the model connects to an external knowledge source such as documents, databases, or a search index and pulls relevant snippets into the prompt before generating an answer. The model’s internal parameters remain unchanged. To keep answers current, you update the external knowledge base instead of retraining the model.

Training data works differently. It modifies the model’s internal weights, embedding language patterns, facts, and behaviours directly into the system. Updating what the model “knows” in this way requires another training or fine-tuning cycle, which is slower and becomes outdated as the world changes. This approach works well for stable behaviours, domain terminology, or stylistic patterns, but it struggles with fast-changing information.

In practical terms, RAG means bringing documents to the model at run time, giving you a flexible and inspectable knowledge layer outside the system itself. Training data means embedding knowledge directly into the model, creating stronger internal fluency, but leaving that knowledge static and opaque once training ends.

Most modern systems use both: a robust base model trained on broad datasets, combined with RAG to inject fresh, organisation-specific, or time-sensitive information during responses.
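The run-time retrieval step can be illustrated with a deliberately tiny sketch. Real systems rank documents with embeddings and a vector index; here simple word overlap stands in for that scoring, and the document snippets are placeholders:

```python
# Toy knowledge base; real RAG systems index far larger document sets.
knowledge_base = [
    "Fine-tuning updates a model's internal weights during training.",
    "RAG fetches documents at query time; model weights stay unchanged.",
    "Schema markup gives machines a map of page content.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (stand-in for embeddings)."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Inject retrieved snippets into the prompt; the model itself is untouched."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how does rag handle model weights", knowledge_base))
```

Updating `knowledge_base` changes the answers immediately, with no retraining, which is the practical appeal of RAG described above.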

This distinction has real implications. You can block AI crawlers used for RAG and still allow CCBot, meaning your content may appear in large language models through training data. The reverse is also true. Models may also learn about your brand through third-party platforms that you do not control, even if you restrict direct AI access to your own domain.

AI training and data control risks

In most cases, you have limited control over how content on platforms is used for AI training. Prompts, posts, and engagement data may be absorbed into models and are effectively impossible to remove later. If a platform updates its AI policies or changes how it handles robots.txt, your content could suddenly become more or less accessible to crawlers and models without any direct input from you.

Entity building and knowledge graph signals

Entity SEO depends on consistent, controllable signals such as structured data, organisation markup, and sameAs links across multiple sources to clearly define and reinforce your brand as a distinct entity.
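As a concrete example of those signals, organisation markup with sameAs links can be expressed in schema.org JSON-LD. The brand name and profile URLs below are placeholders:

```python
import json

# Placeholder organisation entity; swap in your real domain and profiles.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://medium.com/@example-brand",
        "https://example-brand.substack.com",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on your
# own domain so knowledge-graph signals consolidate there.
print(json.dumps(org_schema, indent=2))
```

Hosting this markup on your own domain, with sameAs pointing outwards, keeps the entity centred on you rather than on the platforms.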

When your primary content lives on third-party domains, their entity is strengthened more than yours. Platforms like Medium, LinkedIn, or Substack accumulate the co-occurrence signals, links, and contextual associations, while your own domain plays a smaller role in the knowledge graph.

If a platform changes URL structures, restricts features, adds paywalls, or shuts down entirely, you lose entity signals that you cannot easily migrate. Backlinks, internal linking paths, and historical context often remain trapped within that ecosystem.

Audience ownership and distribution fragility

You do not truly own follower lists or engagement graphs in a portable format. Moving an audience off-platform later can be expensive and limited by conversion rates.

Each transition requires convincing people to take another step, whether that is subscribing, signing up, or following elsewhere.

Organic reach continues to decline as platforms prioritise monetisation and their own AI features. You may find yourself paying an ongoing attention tax to reach an audience you originally built, competing within feeds that increasingly favour platform interests over creator visibility.

A more resilient model: hub-and-spoke

A more resilient long-term strategy is the hub-and-spoke model. Use third-party platforms as discovery channels that direct people towards assets you control, such as your primary domain, email list, or community space, rather than treating them as the permanent home of your core content.

Keep canonical, structured, and evergreen material on your own domain. This gives AI agents and search engines a stable, brand-owned reference point to connect mentions, entity signals, and training data back to you.

Where possible, design or negotiate data practices that limit how deeply vendor AI systems can retain or reuse your audience and behavioural data over time.

The goal is not to abandon large platforms. Exposure on Medium, LinkedIn, and Substack can increase growth and discovery. The key is balance, leveraging their reach without surrendering control of your content, your entity footprint, and your long-term audience relationship.

SALT.agency can audit your AI crawler exposure, entity signals, and platform dependencies – and show you where you are exposed, where competitors are ahead, and what to fix first. Get in touch.


How to effectively govern AI use in content teams https://salt.agency/blog/how-to-effectively-govern-ai-use-in-content-teams/ Tue, 03 Mar 2026 08:25:59 +0000 https://salt.agency/?p=18879854 AI use in content marketing is here to stay. So, what’s the plan? The honeymoon phase of AI is over, and what was initially a shiny new tool is now a strategic conundrum. CMOs are grappling with questions from the board, CEOs, and non-execs alike, including: What is our AI strategy? How is it being […]

The post How to effectively govern AI use in content teams appeared first on SALT.agency®.

AI use in content marketing is here to stay. So, what’s the plan?

The honeymoon phase of AI is over, and what was initially a shiny new tool is now a strategic conundrum. CMOs are grappling with questions from the board, CEOs, and non-execs alike, including:

  • What is our AI strategy?
  • How is it being used in our marketing?
  • What risks does it introduce?
  • Where does responsibility sit?

The uncomfortable truth is that, in many organisations, AI has already been well and truly wedged into content workflows before these questions were asked. Teams are experimenting independently, and new AI tools are being adopted without proper processes in place in an effort to save time and meet demand.

That kind of unregulated use is more than enough to spike the blood pressure of compliance teams. However, the real risk isn’t AI itself, but rather its unmanaged adoption.

Here’s why AI in content marketing should be treated first and foremost as a governance issue, rather than a technology one.

The problem isn’t AI – it’s the absence of guardrails

When AI use is left unchecked, it amplifies problems rather than solving them.

AI can churn out content at breakneck speed. But if it isn’t used with care, it can also dilute your brand voice and normalise mediocrity at the same pace. It can introduce factual inaccuracies and generic, cookie-cutter insights into content that’s supposed to build trust and authority. Most concerningly of all, it can do this quietly and at scale, with issues compounding over time before anyone realises there’s a problem.

But banning AI outright is neither realistic nor strategic. According to an Orbit Media study, AI adoption among content marketers has grown from 65% to 95% since 2023. Almost all marketers are using AI in some capacity, whether the C-suite likes it or not.

AI use in content marketing is now so widespread that organisations that attempt to ban it entirely will simply drive it underground. Teams will continue to use it anyway, but without the oversight and accountability that comes with managed and regulated AI use.

For business leaders wondering whether AI can really improve their workflows, the question isn’t whether to accept its adoption – that decision has already been made in practice. Instead, they should be looking to guide its use in a way that increases efficiency while preserving content quality and brand value.

Why boards are asking the wrong question

Many boards approach AI through a solely commercial lens. They ask how much time it can save, how many roles it might replace, or how it can reduce operational cost. In fact, research by the World Economic Forum found 40% of employers expect to cut their workforces in favour of AI automation. But in content marketing, this framing is dangerous and opens brands up to potential long-term problems.

When AI is positioned purely as a cost-cutting shortcut, the people most often affected are entry-level and mid-career professionals. These are the roles where skills are learned and judgement is developed. If AI replaces those positions, there’s no longer a clear path for developing the next generation of leaders. You can’t promote an AI agent into a manager role (for now at least).

There’s also the issue of content quality. When AI is running the show, content is produced faster and cheaper, but its quality erodes over time as short-term efficiency gains are prioritised over long-term trust.

However tempting it might be to boards scrutinising their balance sheets, AI shouldn’t be treated as a replacement for content teams – it should be used as a tool that helps those teams do better work.

AI risk is higher in content

Not all marketing activity carries the same level of risk from AI adoption – content is uniquely exposed.

Content shapes brand perception, informs buying decisions, and signals expertise. Once published, it lives on indefinitely in its own tiny corner of the internet. This is why content that defines thought leadership and strategic positioning should never be fully outsourced to AI.

In a recent essay for The New York Times, Meghan O’Rourke, executive editor of The Yale Review, describes how prolonged use of generative AI began to interfere with her own thinking and creative judgement. Over time, it blurred the line between assistance and authorship, leaving her feeling detached from the work she produced. Anyone who regularly uses LLMs for editorial support will likely be familiar with that feeling.

This matters because audiences still want to feel they are engaging with a subject matter expert. Trust is built through opinion, nuance, and authenticity, and those qualities are uniquely human. AI-supported content can contribute to authority in some cases, but ultimately, people want to know that a real person stands behind an idea.

There’s also a practical search consideration. Expert-written content supports E-E-A-T signals, particularly Experience and Expertise, which search engines increasingly rely on to assess credibility. Content that reflects lived experience and original insight is far harder for AI to replicate convincingly.

Where AI adds real value when governed properly

None of this means AI should be excluded from content. In fact, it’s quite the opposite.

When used responsibly, AI is extremely powerful, but its real strength doesn’t lie in creativity. AI performs best when supporting preparation and production, not editorialising or decision-making. It excels at tasks that are time-consuming but low-risk.

The key is to provide clarity about where AI belongs in the content process. It’s most valuable when supporting with tasks like:

  • research and background synthesis
  • consolidating large data sets
  • summarising reports and extracting insights
  • transcribing interviews and meetings
  • turning data into charts and visual assets
  • structuring outlines and frameworks
  • generating supporting copy such as newsletter summaries or alt text
  • developing headline variations
  • suggesting visual or multimedia formats.

In each of these cases, AI speeds up the planning process and helps teams get past that tricky stage of blank-page paralysis in record time. It isn’t being asked to provide anything truly creative or original; it’s simply freeing up more hours for humans to take care of that themselves.

Three key AI content decisions every organisation must make

Effective AI governance in content should start with three simple questions.

1. What must always remain human-led?

Most organisations have concluded that anything that defines brand voice, strategy, or market positioning must stay human-led.

This includes:

  • core brand narrative and positioning content
  • thought leadership and opinion pieces
  • executive communications
  • strategic messaging frameworks
  • original research and insight-led content
  • editorial judgement and final sign-off.

Human leadership in these areas is non-negotiable.

2. What can be AI-assisted under supervision?

Many parts of the content process can be safely AI-assisted, provided there is clear oversight.

Support, summarisation, reformatting, and repurposing all fall into this category, but outputs should always be reviewed and refined by a human. Early AI-assisted drafts are fine, but they should be fully rewritten to match the author’s voice, your brand’s positioning, and the strategic intent behind the piece.

Standardised guidelines and processes make all the difference here. AI is used solely to increase efficiency, while humans remain responsible for quality control and final approval.

3. What requires central governance?

Without clear policies from the top down, AI use within an organisation fragments quickly. Teams and individuals use their personally preferred AI tools and develop their own ways of working, making governance and cohesion impossible.

To implement AI safely and effectively, businesses need to provide rulesets around approved AI tools, usage logging, risk management, and quality control processes. This sets expectations that scale, rather than micromanaging every prompt.

Central governance ensures consistency, protects the brand, and gives teams confidence and clarity over what approved AI usage looks like.

Governance might slow you down, and that’s ok

A common worry about governing AI usage is the risk of slowing teams down. But in some ways, that’s exactly the intention.

Some AI tools won’t be approved, certain shortcuts might take a while to be implemented, and AI-enhanced workflows may require additional oversight. That friction isn’t necessarily a bad thing.

AI is still in its infancy, and legislation, such as the EU’s landmark AI Act, is only just starting to catch up. For now, organisations must self-regulate. That means being very clear about what information can and can’t be used in AI tools, how they should be used, and where the risks lie for brand authority and customer trust.

From a safety perspective, governing in this way protects sensitive company data and intellectual property. From a quality perspective, it forces teams to consider which tasks are best suited to AI-assistance, and which should remain human-led. These crucial safeguards far outweigh any potential efficiency drawbacks.

How CMOs should navigate AI in content marketing

In recent years, many marketing teams have still been in experimentation mode with AI. But that phase is now ending, and as AI becomes increasingly embedded in content workflows, the risks associated with its unmanaged use compound.

For CMOs, AI shouldn’t be something to fear. Use it deliberately and often. Build a clear understanding of the areas where it genuinely helps your team. But never use it to replace human judgement in the content that defines your brand’s voice, credibility, and trust.

As the AI juggernaut rolls on, organisations must move from reactive experimentation to structured governance. The question is no longer whether to use AI in content marketing, but rather how effectively and responsibly you are governing its use.

Can AI truly make content teams more effective? Absolutely – but only when everyone is clear on the rules of engagement.

Let SALT.agency help with your content marketing strategy

Looking to future-proof your content strategy in the age of AI? Our content marketing services help brands earn attention, build authority, and get target audiences talking to you. If you need support with your content strategy, get in touch with our expert team today.


From rankings to references: How to structure content for AI search https://salt.agency/blog/ai-overview-optimisation/ Thu, 26 Feb 2026 07:36:36 +0000 https://salt.agency/?p=18879830 In the first wave of AI search, most of the conversation focused on what is changing. Now the more important question is how to respond. AI search does not replace SEO, but it does change what visibility means. You are no longer optimising just for rankings. You are optimising for inclusion. AI systems increasingly deliver […]

The post From rankings to references: How to structure content for AI search appeared first on SALT.agency®.

In the first wave of AI search, most of the conversation focused on what is changing. Now the more important question is how to respond.

AI search does not replace SEO, but it does change what visibility means. You are no longer optimising just for rankings. You are optimising for inclusion.

AI systems increasingly deliver answers before users ever click. That means if you want to be seen, your content must be structured in a way AI systems can find, interpret and reference. That shift sits within a broader rethinking of discoverability – one that goes beyond traffic and rankings towards long-term presence across surfaces.

This is not about producing more content. It is about producing answer-ready content. If AI Mode represents the strategic shift in search, structuring answer-ready content is the practical response.

AI delivers the answer, not just the link

Traditional search pointed users toward destinations (i.e., your content). AI systems increasingly become the destination. They read your content, digest it, and present a direct summary. Sometimes they cite you. Sometimes they don’t.

That shift changes the objective.

You are no longer writing solely for Google’s ranking systems. You are shaping material that AI models must be able to interpret, trust and surface confidently. The goal is not ranking (or at least, not only ranking), it is being referenced. These aims are not mutually exclusive. In fact, strong rankings and credible inclusion tend to reinforce one another.

But inclusion requires more than optimisation for crawlers. It requires packaging your expertise in a way AI systems can process and validate.

This is no longer about keyword density. It is about clarity, structure and trust. AI systems do not simply match keywords. They validate information against trusted sources and explore related sub-questions before constructing an answer.

Structure is part of strategy

The most effective AI-friendly content breaks down into “chunks” that map to real questions. Think short, self-contained answers with clean structure. The idea is simple: your content answers the questions your audience is already asking.

That requires moving beyond broad, keyword-led pages toward focused, clearly structured responses. AI systems prioritise content they can interpret quickly and confidently.

Formats such as FAQs and how-to articles, marked up with appropriate structured data, help clarify what your content is about and when it should be surfaced.

But your content’s schema markup is not a shortcut. It is a signal. It provides a clearer roadmap to your information, improving the likelihood that your answers are understood and surfaced appropriately.
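As a small illustration of that kind of markup, here is a minimal FAQPage snippet built with Python’s standard `json` module. The question and answer text are invented examples; the `@type` and property names follow the public schema.org FAQPage vocabulary.

```python
import json

# Minimal FAQPage structured data, built as a Python dict for clarity.
# The Q&A text below is a made-up example; the keys follow schema.org.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is answer-ready content?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Short, self-contained answers structured so that "
                        "both search engines and AI systems can interpret "
                        "and cite them.",
            },
        }
    ],
}

# Serialised like this, the object would be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```

The point is not the tooling: it is that each question maps to one clearly labelled, self-contained answer a machine can lift out and attribute.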

Structure alone, however, does not create authority. Authority requires conviction and clarity of perspective. That’s something many brands struggle with.

Accessibility, consensus and entity

If AI visibility is about structure and trust, then three pillars support it: Accessibility, Consensus and Entity.

Accessibility

Make your content easy for both humans and machines to access and interpret.

  • Prioritise clean, crawlable URLs and documents
  • Mitigate misinformation by managing what’s publicly accessible
  • Ensure strong search engine visibility and that your content can serve as a source for Large Language Models (LLMs)
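Accessibility can be checked concretely. As a sketch, Python’s standard-library robots.txt parser can confirm which crawlers are allowed to reach your content; the robots.txt content and URLs below are invented examples (GPTBot is OpenAI’s crawler user agent).

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: public pages open to everyone, /private/
# blocked for all bots, and GPTBot (an LLM crawler) blocked entirely.
robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: GPTBot
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# Googlebot falls under "*": public pages yes, /private/ no.
print(parser.can_fetch("Googlebot", "https://example.com/blog/post"))  # True
print(parser.can_fetch("Googlebot", "https://example.com/private/x"))  # False

# GPTBot is blocked outright, so this site cannot be retrieved as a
# live source by that model.
print(parser.can_fetch("GPTBot", "https://example.com/blog/post"))  # False
```

A rule like the GPTBot block above is a deliberate trade-off: it protects content from AI crawlers, but it also removes you as a candidate source for AI-generated answers.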

Accessibility ties directly to the technical SEO foundations you already invest in. This is not a new discipline. It is the SEO you have always done, but with a broader visibility outcome.

Consensus

Aim to shape the consensus, not just echo it.

  • Publish content with a clear, human perspective that aligns with trusted sources.
  • If you want to change the narrative, you need to be authoritative enough to influence the consensus.
  • Provide verifiable, valuable contributions. Don’t just repeat what’s already been said.

Search engines and AI models look for agreement across trusted sources. To stand out, your content should either reflect that consensus or be credible enough to help shape it. This is brand building: creating positive brand touchpoints and working to be top of mind in your category.

Entity

Establish your brand or subject as a recognised and connected entity.

  • Use Digital PR to build relationships between your entity and others.
  • Gain notability through mentions, links, reviews, and co-occurrences.
  • Help algorithms understand who you are and why you matter.
  • Improve and build on your presence within the Knowledge Graph.

An entity is how search engines understand a person, brand, or topic. The more your name appears in the right places, connected to the right things, the more visible and trusted you become.

You don’t need more content

Here’s how to make your content AI-ready:

  1. Tune into real questions. Don’t just chase keywords; zero in on the actual questions your audience is asking. Sales calls, support logs, internal FAQs, even AI chat tools on your website are rich with signals.
  2. Deliver expert clarity. Create single-topic answers that are sharp, accurate, and free from jargon. Think clarity over cleverness; AI needs precision.
  3. Structure for AI visibility. Use clean HTML, intuitive headings and schema markup to help AI understand and prioritise your content.

These steps are grounded in the principles of Google’s E-E-A-T framework (Experience, Expertise, Authoritativeness, and Trustworthiness), which help ensure your content stands out in the age of AI search.

The bottom line is this: you don’t need more volume. You need answer-ready content that AI can easily understand and present.

Execute strategically, and your content won’t just exist; it’ll get discovered.

Mobilising your AI implementation team

AI visibility is not the responsibility of one team. It requires coordination across leadership, content, SEO, development and marketing.

  • Leadership must champion the strategic shift and allocate resources.
  • Content teams must write clear, concise answers designed to be interpreted and surfaced.
  • SEO specialists must structure content and implement the necessary markup.
  • Development teams must ensure technical crawlability and support site changes.
  • Marketing teams must identify key audience questions and use customer insight to inform content creation.

This is not a side project; it’s your next vital visibility sprint.

Now is the time to act

The landscape of content discovery is undergoing its most significant transformation in decades. While many businesses are still observing from the sidelines, AI engines like Google’s AI Overviews and conversational platforms like ChatGPT are already redefining how users find information.

These powerful generative AI systems often pull from content that’s weeks or even months old.

If you’re waiting for the dust to settle, you’re actively falling behind.

The answers you craft and optimise this month have the potential to be cited by AI throughout the rest of the year, establishing your authority and presence. This isn’t just about visibility – it’s about momentum. Brands that build now won’t just show up first. They’ll be trusted first.

How can we help?

If you’ve made it this far, it’s clear this matters to you.

Maybe you’re a CEO thinking through competitive shifts. A CMO rethinking how people find you. Or a CTO who’s seen the logs and knows that AI search is already changing the game.

The brands that will rise won’t just be louder. They’ll be clearer, more useful, and more discoverable. But you don’t need to figure it all out on your own.

Whether you’re exploring your first AI search sprint or creating a longer-term roadmap, SALT is here to help. From lightweight pilots to full-service partnerships, we meet you where you are. 

Let’s talk. Because when people search for answers, your brand should be in the conversation. 

The post From rankings to references: How to structure content for AI search appeared first on SALT.agency®.

The Infinite Tail: How AI has rewritten the rules of search discovery https://salt.agency/blog/the-infinite-tail/ Tue, 24 Feb 2026 11:17:31 +0000
For a considerable time, searching online was a relatively straightforward exercise. Users would either rely on a short, memorable phrase, known as the “short tail,” or a significantly longer, highly specific phrase, referred to as the “long tail.”

The long tail was characterised by queries containing three or more words, often reflecting a user’s advanced stage in the buying or research process, such as “best value compact mirrorless camera 2024 reviews.”

The Long Tail concept was first articulated by Chris Anderson in 2004 and later expanded upon in his 2006 book of the same name.

While short-tail terms accounted for the highest search volume, the long tail represented a vast share of web traffic and conversion opportunities. This was due to their high specificity and lower competition.

However, the emergence of AI search has fundamentally transformed this landscape. It has rendered the old rules obsolete and introduced a new way users engage with the internet (and platforms) as a method of discovery, research, and purchasing.

I call this the Infinite Tail.

The defining feature of the Infinite Tail is that it represents an effectively unlimited and unmeasurable query space.

This stands in stark contrast to the short tail and long tail concepts, which were inherently based on a fixed, finite set of text-based keywords or phrases.

The Infinite Tail represents a combinatorial explosion of multimodal and conversational intent, signifying that there are no longer fixed questions.

Breaking up the search bar constraints

In the past, users often felt constrained, attempting to guess the ‘right’ words that would satisfy a search engine.

As a result, the SEO tool ecosystem began to focus on a limited set of conventional keywords, with ranking serving as the core success metric. AI search has eliminated these constraints and removed a lot of “search friction”, allowing people to express their intent in any manner they choose, whether it be typed, spoken, or image-based.

This is facilitated by conversational search, which encourages users to refine their search over multiple steps, much like a natural dialogue, and vernacular search, where people use everyday language rather than technical or constrained keyword phrases to find information. This is important when determining what the dominant, common interpretation of a query is.

This profound shift in user behaviour is underpinned by two key psychological theories.

The first is Information Foraging Theory, which suggests that users behave like hunters, constantly adjusting their queries based on the balance of effort versus reward. AI significantly reduces the effort, or friction, to near zero, encouraging users to experiment with broader and more complex requests.

The second is Cognitive Offloading, where users naturally tend to outsource difficult mental framing tasks. With AI search, they can simply describe their goal and allow the model to interpret and translate it, thereby removing the burden of crafting precise queries.

A practical illustration of the Infinite Tail in action can be seen in travel planning.

Instead of the traditional “Spain holiday November”, a user might now ask, “Where should I go in November if I want quiet beaches and direct flights from Manchester?”

Alternatively, they might upload a beach clip from a platform like TikTok and inquire, “Where is this and can I go there for under £X?”

We’re also seeing this in the “education” that Google is pushing in advertising to encourage adoption of AI Mode in the “Search Like Never Before” campaign.

Understanding why users go online in the first place

We can take this a step further. It’s not just about the methodology or the way people search. It’s about understanding and categorising users based on why they go online in the first place. Back in 2024, when I spoke at several events, I shared the idea that there is a clear pattern between the impact of AI-driven search and the user’s underlying purpose. That purpose often aligns closely with the commercial intent behind a query.

With the rise of UCP and ACP and the evolution of agentic commerce protocols, we’re moving closer to a world where AI doesn’t just influence discovery but directly shapes commercial outcomes.

I argued that there are four core user groups:

  • Learners
  • Participators
  • Shoppers
  • Purchasers

Each group comes online with a different motivation, and that motivation affects how AI can influence their journey.

Users who want to learn or discover are exposed to a higher level of AI influence. Their journeys are more open-ended, which means AI can guide, shape, and even redirect their path in powerful ways.

Participators may use AI to explore new topics, find discussion spaces, or sense-check what’s being said in forums. However, they are often engaging within spaces they already recognise. Shoppers and purchasers follow a slightly different route. Historically, they would move toward an e-commerce site or another clear transaction point.

Now that agentic commerce is becoming more mainstream, AI agents are increasingly playing a more active role in the transaction layer, which will inevitably shift how and where purchases are made.

This makes the idea of the Infinite Tail even more relevant. In the past, users would stack queries in a relatively linear way, refining their searches step by step until they reached a final destination. Today, that journey is far less predictable. Users move fluidly across platforms, formats, and devices.

They interact with multimodal content, dip in and out over different timeframes, and rarely follow a straight path. The journey is no longer linear. It is distributed, dynamic, and shaped by AI at multiple touchpoints.

Because of this shift, we need to rethink how we approach SEO, including the basics like keyword research. The traditional model relied on a defined set of keywords as the primary measure of success.

Search goes beyond just words

Human thought processes are not limited to keywords. They include images, emotions, goals, and problems. Multimodal search reflects that reality. It allows users to skip complex written descriptions and let the system interpret the details instead. A photo or a screenshot, for example, can communicate intent in seconds.

As new formats like voice and image become more common, the number of possible inputs increases dramatically. The Infinite Tail is not just about more text-based searches. It represents an expansion in the ways people can express what they want and need.

In this new environment, the idea that models generalise sits at the centre of everything. AI search systems now focus on matching meaning rather than matching exact keywords. They rely on semantic understanding instead of string comparison.

This works because semantic embeddings map both web content and user queries into a shared vector space. In that space, the system can measure conceptual similarity, even when the wording is completely different. A page does not need to repeat the exact phrasing of a query to be relevant.

The model can interpret a broader intention, such as “a safe and easy family holiday spot”, and connect it to related concepts like calm beaches or family-friendly restaurants, even if those phrases do not appear in the original query.
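The vector-space idea above can be sketched in a few lines of Python. The vectors here are toy, hand-made stand-ins with invented dimensions; real systems use learned embeddings with hundreds or thousands of dimensions produced by a neural model.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 4-dimensional "embeddings". The dimensions are invented labels
# (family-friendliness, beach, nightlife, price sensitivity) purely
# to make the example readable.
query  = [0.9, 0.8, 0.1, 0.5]  # "a safe and easy family holiday spot"
page_a = [0.8, 0.9, 0.2, 0.4]  # calm beaches, family-friendly restaurants
page_b = [0.1, 0.2, 0.9, 0.7]  # clubbing and late-night bars

print(cosine(query, page_a))  # high: conceptually close, no shared words
print(cosine(query, page_b))  # much lower: different intent
```

Notice that `page_a` never needs to contain the phrase “family holiday spot”: it scores highly because its vector points in a similar direction to the query’s.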

This shift in semantic capability is one of the biggest opportunities the Infinite Tail creates.

Content is no longer limited to the exact terms a creator optimised for. It can surface for questions, contexts, and variations the creator never predicted. This expands potential reach and visibility in ways that were not possible under a purely keyword-driven model.

Decisions are multi-step journeys

Search is no longer a one-time action.

People don’t just type a query and click a result. They typically follow one of two patterns. Some take a query-stacked path of asking, reviewing, and continuously adjusting. Others, as we’re seeing with AI search and the decline in clicks to websites (clicks that historically informed our attribution metrics), simply read, remember, and return later, if the messaging synthesised by the AI resonated with them.

In this setting, the Infinite Tail is better understood as all the possible paths a user can take, not just the first keywords they enter.

Two psychological ideas help explain how people act during these multi-step searches:

  • Choice Overload: In areas like travel and online shopping, there are often too many options. This can overwhelm users, so they rely on AI to filter choices and simplify decisions.
  • Goal-Gradient Effect: As people get closer to making a final decision, their motivation increases. Their questions become more specific and focused. Broad searches turn into detailed “micro-queries” as they move toward a clear goal.

Planning a holiday shows how this works in practice.

The process often unfolds in five stages. It begins with a broad question, such as “Warm places in Europe in April.” Next, the user adds a constraint: “Which is cheapest for a family of four?”

Then comes more detailed filtering: “Show me hotels with pools and restaurants nearby.”

After that, the user might upload a screenshot and ask, “Is this area walkable and safe?” The final step shifts to action: “Plan a five-day itinerary if we stay here.” Each step builds on the last. The system remembers the earlier context, allowing the search to grow into a structured conversation that moves steadily toward a decision.

Is “ranking” now a probability game?

In the Infinite Tail, success is no longer about ranking for a single keyword. It depends on how likely your content is to satisfy clusters of related user needs. Instead of competing for exact phrases, brands compete to meet the broader intent behind many variations of a search.

Probabilistic ranking captures this shift. Search systems estimate the probability that a page will satisfy an inferred intent cluster rather than match a specific keyword. The focus moves from words to user goals and context. A page ranks because it is predicted to be the best fit for that situation.

Users judge the quality of the final answer, not the individual links behind it. Ranking is based on a blend of signals, including semantic relevance, user behaviour, multimodal consistency, and Large Language Model re-ranking. The outcome is a unified response designed to satisfy intent.
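That blend of signals can be caricatured in a few lines of Python. This is a deliberately simplified sketch: the signal names and weights are invented for illustration, and real systems learn such weights from data rather than hand-setting them.

```python
# Invented, hand-set weights over invented signal names, purely to
# illustrate "probability of satisfying an intent cluster" as a blend.
WEIGHTS = {
    "semantic_relevance": 0.45,
    "user_behaviour": 0.25,
    "multimodal_consistency": 0.10,
    "llm_rerank": 0.20,
}

def satisfaction_score(signals: dict) -> float:
    """Rough estimate in [0, 1] that a page satisfies the inferred intent."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

pages = {
    "focused-guide":   {"semantic_relevance": 0.90, "user_behaviour": 0.70,
                        "multimodal_consistency": 0.80, "llm_rerank": 0.85},
    "keyword-stuffed": {"semantic_relevance": 0.60, "user_behaviour": 0.30,
                        "multimodal_consistency": 0.20, "llm_rerank": 0.30},
}

ranked = sorted(pages, key=lambda p: satisfaction_score(pages[p]), reverse=True)
print(ranked)  # the focused guide outranks the keyword-stuffed page
```

Even in this toy model, a page that matches on exact wording but scores poorly on behaviour and consistency loses to one that satisfies the intent across signals.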

For brands, this means the universe of possible queries is too vast to track individually. Success comes from clearly covering real use cases, aligning content and visuals with intent, demonstrating trust, and structuring information so AI systems can interpret it easily.

AI and multimodal search have expanded intent beyond traditional limits. SEO is no longer about targeting a narrow set of phrases. The goal is to become the single best answer across countless micro-intents, gaining visibility for new queries and appearing across more stages of the user journey.

Talk to our team about preparing your search strategy for the Infinite Tail.

The post The Infinite Tail: How AI has rewritten the rules of search discovery appeared first on SALT.agency®.

History is repeating itself: The return of Google penalties in the age of AI https://salt.agency/blog/ai-seo-risk-google-manual-actions/ Mon, 23 Feb 2026 10:13:17 +0000
Over the past year or so, I’ve seen a significant increase in the number of businesses affected by Google algorithmic and manual actions.

In most cases, these actions are the result of activities carried out to try to increase performance – either directly in search results, or to indirectly influence AI and generative LLMs. In this article, I’ll attempt to explain why we’re seeing this surge, and to highlight that it is really just history repeating itself.

Google has waged a long-running war on spam. What we’re experiencing now follows a relatively calm period – much calmer than the old days of named core algorithm changes such as Panda, which focused on content quality, and Penguin, which targeted manipulation of link graphs. Those individually named algorithms are long gone, replaced by broader core updates that bundle many things together: improvements to the overall algorithm, but also protection of it – weeding out the things Google doesn’t want there.

Manual actions often follow these core updates, because that’s when Google has run its new data sets and identified cases severe enough to warrant a manual action alongside the algorithmic change.

Those actions can operate at a page-by-page level, or they can be site-wide for the most manipulative of activities. The key driver of the current surge is, predominantly, the growth of AI-generated content – and specifically, the production of that content at scale.

I’ve been working in Google algorithmic and manual action analysis for many years, going back to the Panda and Penguin era – a period that resulted in actions against hundreds of brands and domains of all sizes, from major international businesses down to smaller affiliates. So, while AI might be new, the reason behind the enforcement cycle isn’t.

Understanding why Google acts

Before we talk about manual actions and spam specifically, it’s important to zoom out and understand why Google acts this way in the first place and why it has quality guidelines at all.

At its core, Google is not a public utility. It is a commercial platform, and its entire business is built on user retention and market share. Users generate demand, and that demand allows Google to sell advertising, which is where the vast majority of its revenue comes from. So it’s critically important that the environment where that advertising is sold, whether in traditional search or increasingly within AI, is one that users can trust. If users stop trusting the results (if the content isn’t credible, reliable, or correctly authored), they leave and go elsewhere. And if they do, there are fewer people to show adverts to, which means less advertising revenue. It goes full circle.

We’ve seen this play out clearly across platforms. A really good example is what happened with X following Elon Musk’s takeover of Twitter. When trust drops, people walk. And ultimately that means fewer searches, weaker advertising inventory, and less revenue.

So spam penalties and manual actions are not moral judgements. The number of people I’ve spoken to over the years who feel like Google is personally attacking them (often because they can see a competitor doing outrageous things and apparently getting away with it year after year) speaks to how widely this is misunderstood. Businesses make a lot of money from organic search, especially in a world where cost-per-click advertising is getting more expensive and more opaque. But the reality is that Google does not act simply because it wants to cause pain. It acts because manipulation exists, and when that manipulation reaches a scale that visibly degrades user trust, enforcement becomes necessary. Enforcement, like almost everything in life, lags behind the market. But it comes.

The technology changes, the pattern doesn’t

There are good parallels here with other areas, such as e-bikes and novel drugs. In each case, there’s typically a significant gap between when a technology or tactic is invented and when enforcement catches up. It only becomes urgent when the tactic starts to scale and becomes more of a threat. That threat threshold is what triggers action.

If you’d told people ten years ago that AI would be writing pages of content specifically to rank in search, they wouldn’t have believed you. But producing content in ways designed to game the system isn’t new at all. In the old days, you didn’t even need to produce unique content, so people would just duplicate the same content again and again, and it would rank and generate revenue from traffic. Then came content spinning with tools like Best Spinner, which created just enough variation to evade detection. Today’s technology allows far more sophisticated things to happen, but the pattern is the same. What’s changed is the cost and the scale.

Looking back at Google’s spam cycles: pre-Panda, we had content farms and thin affiliate websites. With Panda, it was low-quality and duplicate content at scale. With Penguin, it was link manipulation and network tactics. Post-Penguin, there was a lull (and far fewer mass penalties). People learned the lesson, changed their tactics, and adapted to Google’s quality signals. And today, that same cycle has accelerated again, significantly, with AI.

Google doesn’t kill these tactics immediately. They aren’t always the highest priority, and the cost of addressing them at scale is significant. But when enforcement comes, it comes down like a tonne of bricks. That can mean anything from your review schema being removed from search, to individual pages being de-indexed, to weighting being applied against specific phrases and keywords or, at the most severe end, your entire site being removed from the index altogether.

Why AI content is under scrutiny

Google has been clear that it does not penalise content simply because it was generated by AI. The issue is manipulation – content that adds no value, produced purely to game the algorithm. If you’re just generating content at scale without adding anything of substance, it costs Google money to crawl and index all of those pages. And if everyone is producing low-quality content, the whole internet starts to sink. These penalties exist to stop the gaming of the algorithm.

The failure patterns we’re seeing now include topical maps generated without real expertise, content produced to satisfy algorithms rather than humans, and internal linking structures built for search engines rather than users – modern equivalents of PageRank sculpting. All of these things worked until they didn’t.

What’s particularly significant this time is that the cost of content production (which was always a meaningful barrier) has effectively collapsed. Quality content has always been expensive. Meaningful change without human intervention has always been expensive. AI changes that entirely, and where that gap opens up, the abuse grows. People are using AI to produce content right now without looking at the spikes in the data, and they’re exposing themselves to ever-greater risk. It’s also worth remembering that scale doesn’t just amplify the risk, it amplifies the identifying data points. The more you produce, the easier it becomes to detect the pattern.

There’s also a broader dimension here beyond traditional search. LLMs often take inspiration not just from historical training data but from current, live web content to retrieve up-to-date information. The quality and credibility of your web presence matters not only for how you rank in Google, but increasingly for how you appear (or don’t appear) within AI-generated responses.

Algorithmic actions vs. manual actions

Algorithmic actions have always been harder to diagnose. They’re often mislabelled as general core update volatility, when in reality something more specific is happening. Manual actions are more explicit. They follow algorithmic updates because that’s typically when Google has run its new data sets and identified cases where the pattern is severe enough to warrant human intervention and a formal action.

In the old days, you could go into Google Webmaster Tools (and particularly Bing’s Webmaster Tools, which were remarkably transparent), speak to people directly, and find out whether an action had been applied against you. That level of visibility is largely gone. But the actions themselves are very much alive, and we’re seeing a significant increase in them.

What businesses need to understand

In SEO, everyone wants the path of least resistance. Everyone wants to do things as cheaply and quickly as possible to see the maximum return. But doing those things always carries risk, and the important thing is to be fully conscious of that risk before taking it. Short-term gains can often mean long-term losses. What goes up can come down.

There is a lot of danger right now in businesses using AI as the solution rather than as a tool. You still need to understand the context, know why you’re using it, and be clear-eyed about the risks. Manipulation will always be the trigger, and scale always magnifies both the risk and the data points that make it identifiable.

My mindset has always been about doing things the right way. Not trying to manipulate anything, but actually working to be the best and winning on that basis for long-term growth. Chasing algorithms that work today but not tomorrow is not a strategy. It’s a gamble.

Google is not going away. Whatever the rise of AI and LLMs means for how people discover information, Google remains enormously important – and it has decades of experience in dealing with spam and curating quality at scale. Beyond search, it controls Android, Maps, and a vast ecosystem of tools through which users are constantly generating data. Whoever owns the data is responsible for keeping it clean. Low-quality data isn’t worth much to anyone.

Closing thoughts

As we move further into 2026 and the rise of AI continues, it’s really important to focus on the “why”, not on gaming the system. C-suite and board members need to be aware of these dynamics and bake them into their SEO strategy. Google’s quality guidelines exist for a reason, and enforcement cycles exist for a reason. The businesses that understand this and build accordingly are the ones that achieve durable, long-term visibility in search.

This is not new. It’s history repeating itself. The AI is new. The reason it’s happening isn’t.

If you’re unsure whether your current SEO strategy is sustainable, now is the time to review it. Get in touch to book a strategic SEO audit and understand your risk before Google does.

The post History is repeating itself: The return of Google penalties in the age of AI appeared first on SALT.agency®.
