How to Use the Double Diamond UX Research Framework to Solve Real Problems
When teams reach out to The Good, they’re usually facing a specific challenge: conversion rates have plateaued, a new feature isn’t getting adopted, or they’re not sure which direction to take with a redesign.

They don’t need research for research’s sake. They need a clear path from “we have a problem” to “here’s what to do about it.”

Research should systematically move you from understanding problems to implementing solutions. No guesswork. No dead ends. Just a repeatable process that gets you actionable answers.

That’s what a double diamond UX research framework does.

The double diamond UX research framework: design the right thing, then design the thing right

We wanted to share more about the double diamond UX research framework after seeing too many teams jump straight from problem to solution, without doing the work of understanding whether they’re solving the right problem in the first place.

The framework works because it forces teams to slow down in the right places and speed up in others. It's named for its two diamond shapes, each of which emphasizes diverging to explore, then converging to decide.

Model by The Good illustrating the phases of the double diamond UX research framework.

You do this twice. First to ensure you’re solving the right problem, and then again to ensure you’re solving it the right way.

The first diamond is to design the right thing:

Expand your understanding of the problem space, then narrow it to the most important opportunities.

  1. Define problems (research)
  2. Deduce opportunities (synthesis)

The second diamond is to design the thing right:

Explore multiple solution approaches, then validate which one actually works.

  1. Create choices (ideation)
  2. Evaluate options (implementation)

Each phase has a specific purpose, uses different research methods, and answers different questions.

When to use this framework

This UX research framework works best when:

  • You’re stuck. Something’s not working, and you’re not sure why. The framework helps you diagnose root causes instead of treating symptoms.
  • You have options. You’ve got a few directions you could pursue, but need validation before committing resources.
  • Stakes are high. You’re making changes to core experiences, entering new markets, or redesigning critical flows where getting it wrong is expensive.
  • You want speed without guesswork. You need answers quickly but can’t afford to be wrong.
  • You need team alignment. Different stakeholders have different opinions, and you need objective data to move forward.

How it works phase by phase

But of course, this framework is only useful if you know how to apply it! Here’s a breakdown of each phase: when you need it, which research methods to use, and real examples of how we’ve used it to solve specific problems.

Phases 1-2: Design the right thing (generative research)

The first diamond uses the same research toolkit for both defining problems and deducing opportunities. What changes is how you apply these methods:

Phase 1 (Define Problems): You’re gathering data, uncovering what’s broken, and understanding the full scope of user pain points.

Phase 2 (Deduce Opportunities): You’re synthesizing that data, identifying patterns, and prioritizing which problems matter most.

Research toolkit:

  • User interviews: Understand motivations, mental models, and pain points
  • User testing: Watch real users interact with your product to identify friction
  • Heatmap analysis: See where users actually click, scroll, and spend time
  • Data analysis: Identify patterns in behavior and correlations with outcomes
  • Journey analysis: Map complete user experiences to find friction points
  • Surveys: Gather quantitative data on behaviors and preferences
  • Ethnographic analysis: Observe users in their natural context to understand real-world behavior

Phase 1: Define problems through generative research

Before you can solve anything, you need to understand what’s actually broken. This is where generative research comes in. This research is designed to uncover opportunities and pain points you might not even know exist.

When you need this phase:

  • You know something’s not working, but can’t pinpoint what
  • You’re entering a new market or launching a new product
  • Conversion rates have plateaued, and you’re not sure why
  • You’re getting conflicting feedback from different user segments

Case study:

The problem: A SaaS productivity platform noticed that users who adopted their collaboration feature had 40% better retention, but only 12% of new users ever tried it. The product team had already attempted several fixes, including in-app tooltips, email campaigns, and adding the feature to the main navigation, but none moved the adoption needle. They knew they had a problem, but couldn’t pinpoint the root cause.

What we did: We ran user testing with 20 recent sign-ups, watching them complete typical workflows while thinking aloud. We analyzed heatmap data to see where users looked during key moments. We conducted interviews with 12 users (6 who adopted the feature, 6 who didn’t) to understand their mental models and pain points. Finally, we analyzed behavioral data across 50,000 user sessions to identify patterns in how people discovered features.

What we found: Most users saw the feature prompts, so the issue wasn’t awareness. The problem was timing and relevance. Users ignored prompts for the collaboration feature when they appeared during solo work. But when users received documents with stakeholder comments, they actively looked for ways to coordinate feedback. The feature solved a real problem, but only at specific moments. Showing it at the wrong time made it feel like noise.

Phase 2: Deduce opportunities through synthesis

Raw research data doesn’t solve problems, but synthesis does. This phase is about taking everything you learned and distilling it into clear, prioritized opportunities.

When you need this:

  • You have research findings, but need to prioritize
  • Stakeholders are divided on which direction to pursue
  • You need to build a business case for investment
  • You’re trying to align multiple teams around a shared strategy

Case study:

What we did: We took the same data from Phase 1 but shifted our analysis to look for patterns and opportunities. We mapped user journeys to identify the specific trigger moments when collaboration pain became acute. We segmented users by workflow type to understand who encountered collaboration needs most frequently. We analyzed the correlation between feature discovery timing and adoption rates.

The opportunity: Instead of promoting the feature broadly or adding more persistent navigation, we identified a single high-value trigger: show the collaboration feature contextually when users open documents containing comments or tracked changes. This represented the precise moment when users experienced the pain this feature solved.

The impact potential: Based on session data, 43% of new users encountered stakeholder-reviewed documents within their first week. If we could convert just 50% of those moments into feature trials, we’d nearly triple overall adoption.
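To make the "nearly triple" claim concrete, here's a rough back-of-the-envelope check in Python. It assumes the contextual trials would be largely incremental to the existing 12% baseline, which is a simplifying assumption for illustration only:

```python
# Rough check of the "nearly triple" projection, using the numbers above.
baseline_adoption = 0.12        # 12% of new users currently try the feature
exposed_first_week = 0.43       # share who open a stakeholder-reviewed document in week one
trial_rate_at_trigger = 0.50    # goal: convert half of those moments into feature trials

projected_adoption = baseline_adoption + exposed_first_week * trial_rate_at_trigger
print(round(projected_adoption, 3))   # ~0.335, close to 3x the 12% baseline
```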

What happened next: We had a clear, data-backed opportunity. Now we needed to figure out how to present this feature at that contextual moment. That’s where Phase 3 came in.

Phase 3: Create choices through competitive analysis

Once you know what problems to solve, you need to explore how to solve them. This phase is about generating multiple possible solutions and understanding how others have approached similar challenges.

Research toolkit:

  • Competitive analysis: Study how industry leaders solve similar problems
  • Landscape analysis: Identify patterns across different approaches

When you need this:

  • You need to narrow down design directions
  • Stakeholders want to see options before committing
  • You’re redesigning an existing feature and need alternatives
  • You’re unsure which approach will resonate with users

Case study:

The problem: We knew when to show the collaboration feature, but we didn’t know how. Should it be a modal that interrupts the workflow? A subtle banner? A tutorial overlay? Each approach had trade-offs, and stakeholders had different opinions about what would work.

What we did: We conducted a competitive analysis of 12 productivity and collaboration tools, studying how they introduced features at contextual moments. We analyzed landscape patterns across project management, document editing, and communication platforms to understand what users had learned to expect.

What we found: The highest-performing patterns shared three characteristics:

  1. They acknowledged the user’s current task (“We see you’re reviewing comments”)
  2. They offered immediate value related to that task (“Collect all feedback in one place”)
  3. They provided a low-friction way to try (“Add your first comment now”)

The choices we created: Based on these patterns, we designed three variations:

  1. Inline prompt: A compact banner within the document that appeared next to existing comments
  2. Modal introduction: A full-screen overlay explaining the feature with a demo
  3. Progressive disclosure: A subtle sidebar element that expanded when users hovered near comments

Where this led: We had three viable approaches grounded in what worked for others. In Phase 4, we tested which resonated most with actual users.

Phase 4: Evaluate options

Before you ship, you need to validate. This final phase uses evaluative research methods to test solutions quickly and make confident decisions.

Research toolkit:

  • User testing: Validate whether users can actually complete tasks
  • Prototype testing: Test concepts before full development
  • Rapid testing: Get fast feedback on specific design decisions (preference tests, first-click tests, tree tests, design surveys, card sorts)
  • Sentiment testing: Measure emotional response to different approaches
  • A/B testing: Compare variations with real users in production

When you need this:

  • You have a few strong directions and need to pick one
  • You want to validate before investing in full development
  • You need fast feedback to maintain momentum
  • You’re making changes to high-traffic, high-value experiences

Case study:

The problem: All three approaches looked good on paper, but we needed to know which one users would actually respond to. We didn’t have time to build and A/B test all three variations in production. We needed a faster way to validate the direction before investing in full development.

What we did: We created rapid prototypes of all three approaches and ran user testing with 24 people who matched our target audience. We showed each user a realistic scenario: opening a document with stakeholder comments. Then we measured task completion (could they figure out how to use the collaboration feature?), time to first action (how quickly did they engage?), and sentiment (how did they feel about the interruption?).

What we found: The inline prompt won on every metric. Users described it as “helpful” and “right when I needed it,” while the modal felt “intrusive” and the progressive disclosure was “too easy to miss.”

The decision: The team built and shipped the inline prompt, and feature adoption increased.

Why this worked: By following all four phases of the double diamond UX research framework, we didn’t just guess at a solution. We systematically moved from “we have a problem” (low adoption) to “here’s the root cause” (timing and relevance) to “here’s how others solve this” (contextual introduction patterns) to “here’s proof this will work” (validated direction through testing). Each phase built on the last, reducing risk at every step.

Why this UX research framework works

The double diamond framework prevents the most common research mistakes:

It prevents solution jumping

Teams often skip straight from “we have a problem” to “here’s what we’ll build.” Our framework forces you to first understand the problem deeply (Phase 1), then synthesize opportunities (Phase 2) before exploring solutions (Phase 3) and validating them (Phase 4).

It matches research methods to questions

Generative research (Phases 1-2) answers “what problems exist?” Evaluative research (Phases 3-4) answers “does this solution work?” Using the wrong method for your question wastes time and money.

It creates natural checkpoints

Each phase ends with a clear deliverable: defined problems, prioritized opportunities, multiple solutions, or validated direction. Stakeholders can weigh in at the right moments without derailing progress.

It scales to any timeline

Need an answer this week? Run a rapid test. Have a month? Go deeper with interviews and competitive analysis. Have a quarter? Complete the full cycle. The framework adapts to your constraints while maintaining rigor.

How you can adapt the framework to increase revenue from your digital experience

While it would be ideal to run the full framework on all product decisions, we don’t always have the luxury of time. Here’s how we typically adapt the framework depending on project needs.

Fast decision (1-2 weeks): Skip straight to Phase 4 with rapid testing when you have clear options and need directional data quickly.

Optimization project (2-4 weeks): Focus on Phases 1-2 (define problems, deduce opportunities) when you need to diagnose what’s broken and prioritize fixes.

Strategic initiative (4-8 weeks): Full cycle through all four phases when you’re entering new territory or making high-stakes changes.

Ongoing partnership: Rotate through phases as needs evolve. We may use competitive analysis this month, user testing next month, and rapid testing the month after.

The key is matching research depth to risk and constraints.

You’re not trying to run the most comprehensive study possible. Instead, you’re trying to give yourself the confidence to make the right decision with the time and budget you actually have.

Ready to solve your research challenge?

The double diamond framework gives you a proven way to move from problem to solution. Whether you need a quick rapid test to validate a design decision or a comprehensive research engagement to inform strategy, we adapt this framework to your specific needs.

The teams we work with don’t need more data; they need clarity. They need to know what’s broken, why it matters, and what to do about it. That’s exactly what this UX research framework delivers.

And when you work with The Good, here’s what research deliverables look like:

  • Problem definition: Clear documentation of user pain points, friction in the experience, and opportunities ranked by impact and effort.
  • Competitive analysis: Side-by-side comparison of how others solve similar problems, with specific recommendations for what to adopt, adapt, or avoid.
  • User testing results: Video clips of real users, annotated with insights, organized by finding severity, and accompanied by specific design recommendations.
  • Rapid test reports: Statistical analysis of user preferences with clear winners, qualitative feedback explaining why users chose what they did, and next-step recommendations.
  • Strategic recommendations: Prioritized roadmap of what to build, test, or optimize based on research findings and business goals.

Everything is actionable. We don’t just tell you what we found, we tell you what to do about it.

Want to see how this framework would apply to your specific challenge? Let’s talk about your research needs.

How to Test Your Pricing Strategy Without the Ethical Minefield
We’ve heard it many times before. “Can we A/B test pricing?”

It’s tempting. The allure of real-time, live data showing exactly which price point converts better feels like the holy grail of product optimization. Fire up your testing platform, split traffic between $29 and $39, and let the numbers tell you what to charge.

But price testing is an ethical and legal minefield that can damage customer trust and put your brand at risk.

After 16 years of optimizing digital experiences, we’ve seen this scenario play out dozens of times. A client comes to us excited about testing prices, we dig into what that actually entails, and we end up recommending something entirely different: pricing research.

The difference matters. A lot.

Why we don’t recommend traditional price testing

While A/B price testing isn’t explicitly illegal in most jurisdictions, it occupies a murky grey area that should make any brand leader pause.

In the United States, price testing is generally legal. The Robinson-Patman Act prohibits certain forms of price discrimination, but its scope is narrow, primarily applying to business-to-business sales of commodities where different pricing harms competition. For most consumer-facing businesses, the Act rarely applies, and violations are difficult to prove.

In the European Union, however, the situation is different. According to EU law, charging customers differently based solely on their nationality is illegal. Even in random A/B tests where nationality isn’t the determining factor, if a French customer pays more than a Belgian customer for the same product, you could face fines if complaints are filed with the European Consumer Centre.

Recent research published in the Journal of Revenue and Pricing Management highlights how pricing executives must now navigate the triangulation of legal constraints, ethical considerations, and algorithmic decision-making when setting prices.

The consumer perception problem

Beyond legality, there’s the court of public opinion.

A 2022 study from Phiture found that different generational groups react very differently to personalized pricing. While Gen X consumers sometimes try to “game the system” (clearing cookies or using incognito mode to search for better deals), many Millennials and most Boomers react negatively when they discover they’re being charged different prices than other customers.

The Instacart case proves this isn’t theoretical. A Consumer Reports survey conducted in September 2025 found that 72% of Instacart users did not want the company to charge different prices to different users for any reason.

When the investigation revealed the extent of the price testing, customers described feeling “manipulated,” “deceived,” and said they were “not as trusting of a company that practices that.” One volunteer specifically said: “All prices should be the same for everybody, whether you’re rich or poor… some people are going to have to fight back against that system.”

Within weeks of the investigation’s publication, Instacart discontinued the practice entirely, a clear signal that the reputational risk outweighed any revenue optimization gains.

Most consumers view price discrimination as fundamentally unfair, even when it’s legal. When customers discover they paid more than someone else for the exact same product at the exact same time, trust erodes quickly. And once lost, that trust is expensive to rebuild.

The technical limitations

Even if you’re comfortable with the ethical considerations, there are practical barriers. While platforms like Shopify support native price testing functionality, most testing tools don’t have the infrastructure to reliably modify your actual pricing across different customer segments.

The margin problem

As expert pricing research from Paddle notes, even legal pricing strategies become unethical when they ignore fundamental business health. Simply optimizing for conversion without understanding contribution margin can lead you to “win” tests that actually hurt your bottom line.

Sure, we might see that Product A sold more units than Product B at a given price point, but which product has better margins? That difference fundamentally impacts whether a price change is actually driving profitability or just revenue.

The smarter alternative: pricing research

After explaining these challenges, clients often ask: “So what should we do instead?”

Structured pricing research. Rather than testing prices live on your site where you’re charging real customers different amounts, conduct research that reveals willingness to pay, price sensitivity, and optimal price points before you go to market.

Pricing research gives you the insights of price testing without the ethical baggage, legal risk, or customer trust issues. Here are the primary methodologies we recommend:

Van Westendorp Price Sensitivity Meter

Developed by Dutch economist Peter Van Westendorp in 1976, the Price Sensitivity Meter (PSM) remains one of the most effective ways to identify acceptable price ranges for products.

The methodology is elegant in its simplicity. You survey your target customers with four key questions:

  1. At what price would you consider this product too expensive to purchase?
  2. At what price would you consider this product expensive, but still worth considering?
  3. At what price would you consider this product a bargain?
  4. At what price would you consider this product so inexpensive that you’d question its quality?

By plotting cumulative responses to these questions, you can identify several critical price points:

  • Point of Marginal Cheapness (PMC): The intersection of the “too cheap” and “expensive” lines, which indicates your lower bound
  • Point of Marginal Expensiveness (PME): The intersection of the “too expensive” and “cheap” lines, which indicates your upper bound
  • Optimal Price Point (OPP): The intersection of the “too cheap” and “too expensive” lines, where an equal number of respondents find the price unacceptably low and unacceptably high
  • Indifference Price Point (IPP): Where the same number of people think the price is “expensive” as think it’s a “bargain”
Van Westendorp Price Sensitivity Meter as a strategy for how to test your pricing strategy.

According to research from SurveyKing, Van Westendorp is particularly valuable for identifying pricing thresholds and overall market perceptions without putting actual customers in a position where they’re being charged inconsistently.
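As a minimal sketch of how these price points fall out of the survey data, here’s an illustrative Python example. The response values and the simple nearest-crossing logic are assumptions for demonstration, not data from a real study:

```python
import numpy as np

# Hypothetical answers to the four questions (one price per respondent).
too_cheap     = np.array([19, 29, 39, 49, 59, 69, 79])
bargain       = np.array([39, 49, 59, 69, 79, 89, 99])
expensive     = np.array([59, 69, 79, 89, 99, 119, 129])
too_expensive = np.array([79, 89, 99, 119, 129, 149, 169])

prices = np.arange(10, 180)

# Cumulative curves: "cheap" curves fall as price rises, "expensive" curves rise.
pct_too_cheap     = np.array([(too_cheap >= p).mean() for p in prices])
pct_cheap         = np.array([(bargain >= p).mean() for p in prices])
pct_expensive     = np.array([(expensive <= p).mean() for p in prices])
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])

def crossing(curve_a, curve_b):
    """Price where two cumulative curves come closest (their intersection)."""
    return prices[np.argmin(np.abs(curve_a - curve_b))]

print("PMC:", crossing(pct_too_cheap, pct_expensive))      # lower bound
print("PME:", crossing(pct_too_expensive, pct_cheap))      # upper bound
print("OPP:", crossing(pct_too_cheap, pct_too_expensive))  # optimal price point
print("IPP:", crossing(pct_cheap, pct_expensive))          # indifference price point
```

In practice you would plot all four curves and sanity-check the crossings visually rather than trusting a single nearest-point lookup.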

When to use it: Van Westendorp excels for new-to-world products where you’re establishing an initial price point, or when repositioning an established product in a new market segment. It’s also fast to implement because you can run a Van Westendorp study in days, not weeks.

Limitations to know: The method focuses solely on price perception without considering product features or competitive context. It also can’t predict actual purchase behavior, only price expectations. As noted in research from Conjointly, if your product has multiple configurations or you need to understand feature-specific value, other methods may be more appropriate.

Conjoint analysis

If Van Westendorp is the quick reconnaissance mission, conjoint analysis is the full strategic assessment.

Conjoint analysis reveals how customers value different product attributes, including price, by forcing them to make trade-offs between product profiles. Rather than asking “What would you pay for this?”, conjoint presents respondents with complete product profiles that vary across multiple dimensions (features, brand, price, etc.) and asks them to choose which they’d buy.

For example, a project management software might test profiles varying:

  • Number of team members included (5, 15, or 50)
  • Storage capacity (10GB, 50GB, or 250GB)
  • Integration options (3, 10, or unlimited)
  • Price ($19/month, $49/month, or $99/month)

Respondents see sets of 3-4 profiles at a time and select their preference. The pattern of choices reveals the relative value of each attribute, including price sensitivity.
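To show what “sets of profiles” means mechanically, here’s a minimal Python sketch that builds product profiles from the example attribute levels above and samples choice sets from them. The attribute names and the purely random sampling are illustrative assumptions; real conjoint studies use statistically efficient (e.g., fractional factorial) designs rather than random draws:

```python
import itertools
import random

# Attribute levels from the project management example above (illustrative).
attributes = {
    "team_members": [5, 15, 50],
    "storage_gb":   [10, 50, 250],
    "integrations": ["3", "10", "unlimited"],
    "price_usd":    [19, 49, 99],
}

# Full factorial design: every combination of levels (3^4 = 81 profiles).
names = list(attributes)
profiles = [dict(zip(names, combo)) for combo in itertools.product(*attributes.values())]

def build_choice_sets(profiles, n_sets=10, set_size=3, seed=42):
    """Sample the screens a respondent would see; they pick one profile per screen."""
    rng = random.Random(seed)
    return [rng.sample(profiles, set_size) for _ in range(n_sets)]

for screen in build_choice_sets(profiles, n_sets=2):
    print(screen)  # in a live survey these render as side-by-side product cards
```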

Choice-based conjoint (CBC) is particularly powerful for pricing research because it simulates realistic purchase scenarios. Respondents don’t know you’re primarily interested in pricing; they’re just choosing products they’d actually buy. This approach delivers more honest insights than directly asking about willingness to pay.

Why it works: Conjoint lets you measure price elasticity by brand, understand optimal feature-price combinations, and run market simulations to predict revenue and share. Research from GLG shows that with conjoint data, you can model hypothetical scenarios: “If we add this feature and increase the price by $10, how many customers will we gain or lose?”

When to use it: Conjoint shines when you need to understand how price interacts with product features, or when you’re pricing complex offerings with multiple tiers or bundles. It’s the gold standard for SaaS pricing strategy because it captures the reality that customers evaluate price in context, not isolation.

What to expect: Conjoint requires more upfront investment than Van Westendorp, both in study design and sample size. You’ll need larger respondent pools (typically 300+ for reliable results), and the analysis is more sophisticated. But the insights are proportionally richer.

Segmentation and historical data analysis

Sometimes the best pricing insights are hiding in your own data.

Before running any new research, we always recommend examining your existing customer base through a segmentation lens. Different customer segments often have dramatically different price sensitivity.

Research from TRC Insights shows that price elasticity, the measure of how demand changes with price, varies significantly across customer segments. Enterprise buyers might be relatively price-insensitive (inelastic demand) for mission-critical tools, while small businesses might be highly price-sensitive (elastic demand) for the same product.

By analyzing your historical data, you can identify:

  • Which segments have the highest lifetime value at different price points
  • How acquisition cost varies by price tier across segments
  • Retention patterns that indicate whether pricing is aligned with value delivery
  • Upgrade and downgrade patterns that reveal price ceiling and floor effects

One telecommunications company we know of analyzed years of customer data to understand price elasticity by segment. They discovered that their “small business” segment was actually three distinct sub-segments with wildly different price sensitivities:

  • one that behaved like enterprise (low elasticity)
  • one that behaved like consumers (high elasticity)
  • and one in between

This insight led them to redesign their entire pricing strategy with separate offers for each sub-segment, ultimately increasing revenue by 10%+.

When to use it: Always. Historical data analysis should be your starting point for any pricing decision. It’s low-cost (you already have the data), fast, and often reveals surprising patterns.

Gabor-Granger method

For a more direct approach to estimating demand curves, the Gabor-Granger method offers a middle ground between Van Westendorp and conjoint analysis.

The process is straightforward: show respondents a product at a specific price and ask if they’d buy it. If yes, show a higher price. If no, show a lower price. Continue until you map out their individual purchase threshold.

Example of the Gabor-Granger method for testing your pricing strategy.

Aggregate these responses across your sample, and you can build demand curves that predict:

  • The percentage of your market that will buy at each price point
  • The revenue-maximizing price
  • The volume-maximizing price
  • Price elasticity at different levels

This can be particularly useful when you need quick market assessments and want to focus specifically on price sensitivity without evaluating multiple product attributes simultaneously.
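Here’s a minimal sketch, with made-up numbers, of how the aggregated yes/no answers turn into a demand curve and the price points listed above:

```python
import numpy as np

# Hypothetical aggregated results: share of respondents willing to buy at each price.
prices       = np.array([19, 29, 39, 49, 59, 69])
share_buying = np.array([0.82, 0.71, 0.55, 0.41, 0.27, 0.15])  # the demand curve

expected_revenue = prices * share_buying              # revenue index per respondent
print("Revenue-maximizing price:", prices[np.argmax(expected_revenue)])
print("Volume-maximizing price: ", prices[np.argmax(share_buying)])

# Arc elasticity between adjacent price points: % change in quantity / % change in price.
elasticity = (np.diff(share_buying) / share_buying[:-1]) / (np.diff(prices) / prices[:-1])
print("Elasticity by price step:", np.round(elasticity, 2))
```

Note how, even in this toy data, demand is inelastic at the low end of the range and increasingly elastic at the high end, which is the pattern discussed in the elasticity section below.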

When to use it: Gabor-Granger works well for single products or when product attributes are already determined, and you need to optimize pricing specifically. It’s faster than full conjoint but more direct about pricing than Van Westendorp.

Understanding price elasticity for better decisions

All of these methodologies ultimately help you understand price elasticity, how changes in price affect demand for your product.

Price elasticity is typically expressed as: % change in quantity demanded ÷ % change in price

Products with elastic demand (elasticity > 1) see large changes in demand with small price changes. Think luxury goods, or products with many substitutes. Products with inelastic demand (elasticity < 1) see relatively stable demand despite price changes. Think of necessities or products without good alternatives.

Understanding your product’s elasticity is crucial because it determines your pricing strategy’s impact. For elastic products, lowering prices can increase total revenue. For inelastic products, you might be leaving money on the table by not charging more.
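As a made-up worked example of the formula above (the numbers are illustrative only):

```python
# Elastic product: a 10% price cut lifts unit sales 25%.
old_price, new_price = 100, 90
old_units, new_units = 1_000, 1_250

pct_change_quantity = (new_units - old_units) / old_units   # +0.25
pct_change_price    = (new_price - old_price) / old_price    # -0.10
elasticity = pct_change_quantity / pct_change_price          # -2.5 -> elastic (|e| > 1)

print(abs(elasticity))
print(old_price * old_units)   # 100,000 in revenue before the cut
print(new_price * new_units)   # 112,500 after: for this elastic product, the cut pays off
```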

Here’s what makes elasticity even more interesting: it’s not fixed. The same product can exhibit different elasticity depending on:

  • Customer segment: Enterprise buyers vs. SMBs vs. individual consumers
  • Time period: Demand becomes more elastic over time as customers adjust their behavior
  • Market conditions: Economic downturns increase price sensitivity even for traditionally inelastic goods
  • Price range: Products can be inelastic at low prices but highly elastic at high prices

Understanding these nuances helps you make smarter pricing decisions across your entire customer base, not just at a single price point.

A real example: how we approach pricing strategy

Here’s how these methodologies come together in practice.

A B2B SaaS company approached us, concerned that their pricing wasn’t optimized. They had three tiers ($49/month, $149/month, and $499/month) that had been set somewhat arbitrarily three years ago based on “what felt right” and competitive benchmarking.

Rather than jumping into A/B testing prices, here’s the path we recommended:

Phase 1: Data analysis

We started by analyzing their existing customer data:

  • Segmented customers by industry, company size, and usage patterns
  • Calculated lifetime value and retention by segment and tier
  • Mapped upgrade/downgrade patterns to understand price ceiling effects
  • Identified which features correlated with willingness to pay premium prices

This revealed that their “mid-market” segment was actually two distinct groups with different needs and willingness to pay.

Phase 2: Van Westendorp study

We ran a Van Westendorp survey with 400 prospects and recent customers across identified segments. This quickly established:

  • Their $49 tier was perceived as “too cheap” by 30% of respondents, potentially signaling quality concerns
  • There was an acceptable price range between $79 and $199 for their middle tier
  • Their top tier had room to increase to $599-699 based on value perception

Phase 3: Conjoint analysis

With price ranges identified, we ran a choice-based conjoint to understand:

  • Which features justified premium pricing
  • How different customer segments valued different feature bundles
  • Optimal price points for proposed new tiers

The conjoint revealed that their original three-tier structure was actually constraining revenue. There was demand for a fourth tier at $799/month for enterprise features, and their middle tier could be split into two offerings at $99 and $199.

Results

The company implemented a new four-tier pricing structure ($69, $119, $239, $799) based on the research. Modeling from the conjoint suggested average revenue per customer would increase 23%, customer acquisition would actually improve (a lower entry price brings in more customers who later upgrade), and retention would hold steady despite the price increases because the value alignment was better.

This approach would take 8 weeks, and the cost would be well worth it compared to the potential brand damage of customers discovering they’d been charged different prices in an A/B test, or the opportunity cost of not optimizing pricing at all.

Making the case for pricing research internally

If you’re reading this thinking, “this makes sense, but my team really wants to just A/B test prices,” here’s how to make the case:

Frame it as risk management

Price testing puts your brand reputation at risk. Pricing research gives you comparable insights without exposing you to customer backlash, legal concerns, or PR problems. Maintaining customer trust through transparent, ethical pricing practices is crucial for long-term profitability.

Emphasize the quality of insights

A/B tests tell you which price performed better in one specific context at one specific time. Pricing research tells you why that price works, how different segments perceive value, and how pricing interacts with features and positioning. Those insights compound over time.

Talk about margin, not just conversion

This one resonates with CFOs. Pure price tests optimize for conversion or revenue, but they don’t account for margin variation across products or customer acquisition costs across segments. Pricing research can be designed to optimize for profit, not just revenue.

Point to the technical limitations

Most A/B testing platforms can’t reliably execute pure price tests anyway. You’d need to implement complex technical workarounds that introduce their own risks. Pricing research is straightforward to implement with existing survey tools.

Pricing strategy

The urge to price test is understandable. You want data-driven pricing decisions. You want to optimize this critical lever for growth.

But the best data doesn’t come from exposing real customers to different prices in an A/B test. It comes from structured research that reveals customer psychology, value perception, and willingness to pay without the ethical complications.

What you can do is test discount codes or promotional messaging. For example, we ran a test for a client where some visitors saw “$50 free shipping minimum,” while others saw “$75 free shipping minimum” or “$100 free shipping minimum.” In reality, everyone had a $50 minimum on the backend, but the messaging encouraged different customer segments to add more to their carts. This isn’t pure price testing; it’s messaging optimization that influences average order value.

We’ve spent 16 years helping ecommerce and SaaS companies optimize their digital experiences. When it comes to pricing, though, the most successful companies skip the shortcuts and invest in research that protects customer trust while delivering the insights they need.

The next time someone proposes A/B testing prices, ask them if they’ve considered the alternatives. The answer might surprise them and save your brand from an expensive mistake.

Let’s talk about your pricing strategy.

How Do You Reduce Cancellations During SaaS Free Trials?
Leaders often assume users cancel because the product isn’t good enough. The reality is more nuanced. Users rarely cancel because your product lacks value. They cancel because they didn’t experience that value quickly enough, clearly enough, or in a way that made sense for their specific needs.

The stakes are high. According to recent industry data, the average SaaS free trial converts less than 25% of users to paying customers. That means more than three-quarters of your trial users are walking away without ever becoming customers.

But the good news is that trial cancellations aren’t random. They follow patterns. Users drop off at predictable moments in their journey and struggle with the same features or tasks. Once you identify these patterns, you can systematically address them through trial optimization.

Understanding why trial optimization matters for reducing cancellations

Before diving into how to reduce cancellations, let’s be clear about what we mean by trial optimization and why it deserves your attention.

Trial optimization is the systematic process of improving every touchpoint in your free trial or freemium experience to increase the likelihood that users will see value, engage consistently, and ultimately convert to paying customers. It’s not about manipulation or dark patterns. It’s about removing unnecessary friction, clarifying value, and helping users succeed with your product.

The impact of effective trial optimization extends beyond conversion rates. When you optimize the trial experience, you also reduce customer acquisition costs, improve customer lifetime value, and build a stronger foundation for retention.

Understanding your specific trial model is the first step toward optimization. Different trial structures create different challenges and opportunities.

What is a freemium model?

The freemium model offers perpetual access to a restricted version of your product, either by limiting features or placing caps on usage. Think Spotify’s free tier with ads, or Canva’s basic design tools. The challenge with freemium is that users can stay indefinitely without converting. Your optimization goal is building reliance while strategically gating features that create urgency to upgrade.

What is a reverse trial?

In a reverse trial, users start with full access to all features for a limited time, then get moved to a freemium plan with limited capabilities. This approach, coined by growth leader Elena Verna, prioritizes maximum value upfront. Users experience everything your product can do, making the subsequent feature restrictions feel more pronounced. Trial optimization here focuses on ensuring users activate on premium features during that full-access window.

What is trial with payment?

This model requires payment information up front for full product access during a limited period. Users are charged automatically after the trial unless they cancel. The friction of providing credit card details means fewer signups but typically higher conversion rates, with opt-out trials converting at 49-60% compared to opt-in trials at 18-25%. Optimization here balances making signup worthwhile despite the friction while ensuring the experience justifies the automatic charge.

Five steps to audit and optimize your trial experience

Trial optimization looks different in each of these trial models, but one thing is true across the board: reducing cancellations requires a systematic approach.

You can’t fix what you don’t measure, and you can’t optimize what you don’t understand.

Here is a summary of the five-step framework for auditing your trial experience. For a detailed walkthrough, including specific templates and decision trees, see our article on auditing free user experiences.

Step 1: Identify drop-off points with data analysis

Examine your product analytics to pinpoint exactly where users abandon their trial journey.

  • Track activation drop-offs in your onboarding flow
  • Monitor which features users engage with versus ignore
  • Calculate time-to-value and compare against churn timing
  • Segment data by acquisition channel, trial type, and user cohort
  • Layer in session recordings to see what users actually do before leaving
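As a minimal illustration of what tracking activation drop-offs can look like in practice, here’s a hedged Python sketch over a made-up event log. The step names, the pandas-based approach, and the data are assumptions for demonstration, not a prescribed schema:

```python
import pandas as pd

# Hypothetical event log: one row per onboarding step a trial user completed.
events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 3, 4],
    "step":    ["signup", "connect_tool", "first_project",
                "signup", "connect_tool",
                "signup", "connect_tool", "first_project", "invite_teammate",
                "signup"],
})

funnel_order = ["signup", "connect_tool", "first_project", "invite_teammate"]
users_per_step = events.groupby("step")["user_id"].nunique().reindex(funnel_order)

# Step-to-step conversion makes the biggest drop-off point obvious.
step_conversion = (users_per_step / users_per_step.shift(1)).round(2)
print(users_per_step)
print(step_conversion)
```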

Step 2: Conduct user interviews to understand the “why”

Numbers show where users leave. Conversations reveal why.

  • Interview 10-15 users, split between active trial users and those who churned
  • Ask what value they found, what confused them, and what would make them pay
  • Listen for the exact language they use to describe their experience
  • Note any competitors or alternatives they mention for market context

Step 3: Benchmark your experience against market standards

Your users compare you to every tool they’ve used. Conduct some competitive analysis to gauge where you fall in the market.

  • Document how competitors structure their trial experiences
  • Screenshot monetization touchpoints, upgrade prompts, and limit notifications
  • Study products your users mention in interviews, even if indirect competitors
  • Identify where your experience creates more or less friction than market norms

Step 4: Map user actions with verb scoring

Break down every meaningful action in your product and score the friction required by running a verb scoring exercise.

  • List discrete actions users can take (create, share, export, invite, etc.)
  • Assign each a verb score from Anonymous to Gated
  • Look for inconsistencies in how similar actions are gated
  • Identify if you’re giving away too much or asking too soon

Step 5: Connect insights to create an optimization roadmap

Synthesize your findings to prioritize what to fix first.

  • Friction without reason: unnecessary barriers compared to competitors
  • Value leaks: popular free features that don’t drive conversion
  • Invisible gates: paywalls users hit without understanding why
  • Poorly timed friction: asking users to pay before they’ve seen value

Prioritize optimizations by impact (users affected), confidence (data supports it), effort (time to implement), and market alignment (are you an outlier).
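One simple way to turn those four factors into a ranked backlog is a rough scoring model like the sketch below. The 1-5 scales, the example opportunities, and the scoring formula are all illustrative assumptions, not a prescribed system:

```python
# Score each opportunity: higher impact, confidence, and market alignment help;
# higher effort hurts.
opportunities = [
    {"name": "Simplify signup form",      "impact": 5, "confidence": 4, "effort": 2, "alignment": 4},
    {"name": "Contextual upgrade prompt", "impact": 4, "confidence": 3, "effort": 3, "alignment": 5},
    {"name": "Redesign settings page",    "impact": 2, "confidence": 2, "effort": 4, "alignment": 3},
]

def priority_score(opp):
    return (opp["impact"] * opp["confidence"] * opp["alignment"]) / opp["effort"]

for opp in sorted(opportunities, key=priority_score, reverse=True):
    print(f"{opp['name']}: {priority_score(opp):.1f}")
```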

Six strategies for reducing trial cancellations

Once you’ve audited your trial experience and identified optimization opportunities, you will have a clear roadmap for addressing issues.

Plenty of strategies might arise in your research. Here are a few themes we see often.

Accelerate time-to-first-value

The faster users experience value, the less likely they are to cancel. Industry benchmarks suggest that users should reach their first “aha moment” within 48 hours of signup.

Design your onboarding to guide users directly toward the action that delivers value. Use progress bars and checklists to create clear paths forward.

Remove any friction between signup and first value. If users need to integrate other tools, fill out profiles, or configure settings before experiencing core benefits, you’re creating opportunities for abandonment. Save non-essential setup for after users have seen value.

Provide personalized onboarding experiences

Companies using personalized experiences see conversion rates improve by up to 67%. Generic onboarding treats all users the same, but different user segments have different needs, different technical sophistication, and different use cases.

Segment users based on their role, company size, or stated goals during signup. A solo entrepreneur using your project management tool has different needs than a project manager at a 100-person company. Your onboarding should reflect these differences.

Use progressive disclosure to reveal features as they become relevant. Don’t overwhelm new users with every capability on day one. Instead, introduce advanced features once users have mastered the basics.

Implement strategic reminder systems

Trials between 7-14 days convert better than longer trials because they create urgency. But urgency only works if users remember they’re on a trial.

Send regular emails and in-app notifications informing users about remaining trial time. These reminders should do more than count down days. Each one should emphasize value, highlight features users haven’t explored, or address specific pain points.

Gate features strategically based on usage patterns

In our experience optimizing for SaaS, offering too many free features can actually hurt conversion rates. Users need to experience value from free features while simultaneously understanding what they’re missing from paid capabilities.

Place prompts for premium features adjacent to free ones. PDF Converter, for example, offers free file conversion but positions the premium, higher-quality option nearby. This ensures users understand the upgrade path without being pushy.

Use clear visual cues like lock icons, “Pro” badges, or color contrasts to differentiate free from paid features.

Provide proactive support during critical moments

Customer support engagement during trial periods can significantly boost conversion rates.

Don’t wait for users to ask for help. Implement triggered messages based on behavior patterns. If a user hasn’t logged in for three days, send a helpful email with tips. If someone tries to use a gated feature multiple times, offer a personalized demo or support call.

For high-value potential customers, consider human touchpoints. A quick call from customer success at day three of a 14-day trial can answer questions, provide personalized guidance, and significantly increase conversion likelihood.

Design thoughtful cancellation flows

Not every cancellation is preventable, but many are. When users attempt to cancel, use that moment as an opportunity to understand why and potentially offer alternatives.

Implement exit surveys that capture cancellation reasons. According to data on subscription churn, understanding why users leave is critical for preventing future cancellations. Are they leaving because of the price? Missing features? Poor onboarding? Bugs?

Based on cancellation reasons, offer segment-specific alternatives. If someone is canceling due to price, offer a discount or payment plan. If they barely used the product after the trial, extend the trial. If they’re leaving due to missing features, ask which features would keep them.

Common mistakes that increase trial cancellations

Even well-intentioned optimization efforts can backfire. Avoid these common mistakes that actually increase cancellation rates.

Making cancellation difficult

Some SaaS companies deliberately make cancellation difficult, requiring users to call or email rather than cancel with a simple click. This dark pattern might delay cancellations temporarily, but it can destroy trust and create negative word-of-mouth.

Make cancellation simple. The goal isn’t to trap users; it’s to create such a good experience that they don’t want to leave.

Gating core value too aggressively

If users can’t experience your product’s core value without upgrading, they’ll cancel before converting. The free version should deliver genuine utility while creating a desire for premium features.

Neglecting mobile trial experiences

With increasing mobile usage, trial experiences must work seamlessly across devices.

If your onboarding is desktop-optimized but breaks on mobile, you’re creating cancellations for a substantial user segment.

Sending generic email communications

Automated email sequences that ignore user behavior feel impersonal and often go unread. According to research on trial optimization, personalized communication based on user activity significantly outperforms generic campaigns.

If a user hasn’t logged in since signing up, an email about advanced features is irrelevant. If they’re actively using the product daily, countdown reminders may feel pushy. Segment communications based on engagement levels.

Trial optimization frequently asked questions

What’s the ideal trial length to minimize cancellations?

The optimal length depends on your product’s complexity and how quickly users can experience value. Simple products often perform better with 7-14 day trials that create urgency.

Complex B2B tools may need 30-60 days for users to properly evaluate capabilities. If you are completely lost, start with 14 days and adjust based on your activation data and time-to-value metrics.

Should I require a credit card for trial signup?

This decision significantly impacts both signup volume and conversion rates.

Opt-out trials (credit card required) convert higher but generate fewer signups. Opt-in trials (no credit card) convert lower but attract more users.

The right choice depends on whether you prioritize higher conversion rates per trial or a larger volume of trials and how much more utility the full tier offers versus a free trial.

Most product-led companies start with opt-in trials to maximize exposure, then consider opt-out trials once they’ve optimized the trial experience.

How can I tell if my trial cancellations are normal or problematic?

Track cohort-specific metrics. If certain user segments, acquisition channels, or trial lengths show notably different cancellation patterns, those differences reveal opportunities for targeted optimization.

What’s the most important metric to track for trial optimization?

While trial-to-paid conversion rate matters, activation rate is often more predictive.

Activation measures whether users complete key actions that indicate they’ve experienced value. Research shows users who reach activation are significantly more likely to convert.

Define your activation criteria based on behaviors that correlate with conversion, then optimize to increase the percentage of trial users who activate.

How often should I test and iterate on my trial experience?

Trial optimization is continuous, not a one-time project.

High-performing SaaS companies test constantly. Start with your highest-impact opportunities identified during your audit, then implement a regular testing cadence.

Track results for statistical significance before making changes permanent. Plan quarterly reviews of your trial metrics to identify new optimization opportunities as your product and market evolve.

Can I reduce trial cancellations without changing my product?

Yes. Many cancellations stem from poor onboarding, unclear value communication, or inadequate support rather than product deficiencies.

You can significantly reduce cancellations by improving onboarding sequences, providing better in-app guidance, personalizing the trial experience, implementing proactive support, and strategically positioning upgrade prompts.

That said, if users consistently cancel, citing missing features or bugs, product improvements may be necessary alongside trial optimization.

Build a systematic approach to trial optimization

Reducing SaaS trial cancellations isn’t about quick fixes or growth hacks. It requires systematic analysis of your trial experience, a deep understanding of user behavior and needs, and continuous optimization based on data.

The five-step audit framework provides a structured approach: analyze data to find drop-off points, interview users to understand why they leave, benchmark against market expectations, map actions with verb scoring, and synthesize insights into a prioritized roadmap. Each step builds on the previous one to create a picture of optimization opportunities.

Implementation matters as much as analysis. Accelerate time-to-value, personalize onboarding, implement strategic reminders, gate features based on usage patterns, provide proactive support, and design thoughtful cancellation flows. These six strategies address the most common causes of trial cancellations, but keep in mind that your analysis will likely surface other unique issues.

Most importantly, treat trial optimization as an ongoing discipline rather than a one-time project. User expectations evolve, competitors improve their experiences, and your product adds features. Regular review and iteration ensure your trial experience continues performing as your business grows.

At The Good, we’ve helped SaaS companies reduce trial cancellations and improve conversion rates through our Digital Experience Optimization Program™. We conduct comprehensive audits using heatmaps, session recordings, and user research to identify exactly where trial users encounter friction. Then we build custom optimization roadmaps and validate improvements through experimentation.

Ready to reduce your trial cancellations and accelerate growth? Schedule an introductory call to discuss how we can optimize your trial experience for better conversion and retention.


MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust
When a SaaS company approached us after noticing friction in their trial-to-paid conversion funnel, they had a specific challenge: their website was generating demo requests, but prospects weren’t converting to customers. User research revealed a trust problem. Potential buyers were saying things like, “I need more proof this will actually work for a company like ours,” and “How do I know this won’t be another failed implementation?”

The company had assembled a list of proof points they could showcase on their homepage: years in business, number of integrations, customer counts, implementation guarantees, security certifications, industry awards, analyst recognition, and more. But they only had space to highlight four of these benefits prominently below their hero section. They faced the classic messaging dilemma: which trust signals would actually move the needle with prospects evaluating B2B software?

This is where MaxDiff analysis becomes valuable. Instead of relying on stakeholder opinions or generic best practices, we could let their target buyers vote with data on what mattered most.

What makes MaxDiff analysis different from other survey methods

MaxDiff analysis (short for Maximum Difference Scaling) is a research methodology that forces trade-offs. Rather than asking people to rate items individually on a scale, MaxDiff presents sets of options and asks participants to identify the most and least important items in each set. This forced-choice format reveals true preferences because people can’t rate everything as “very important.”

Here’s why this matters: traditional rating scales often produce compressed results where everything scores high. When you ask customers, “How important is X on a scale of 1-10?” most people will hover around 7 or 8 for anything remotely relevant. You end up with a spreadsheet full of similar numbers and no clear direction.

MaxDiff cuts through that noise. By repeatedly asking “which of these five options matters most to you, and which matters least?” across different combinations, you build a statistical picture of relative importance. The math behind MaxDiff generates a best-worst score for each item, showing not just which options are preferred, but by how much.

For digital experience optimization, this methodology is particularly useful when you need to prioritize limited real estate on a website, determine which features to build first, or figure out which messaging will actually differentiate your brand.

How we structured the MaxDiff study for maximum insight

For this client project, we started by defining the target audience precisely. The company was a B2B SaaS platform serving mid-market operations teams, so we recruited 60 participants who matched their customer profile: director-level or above at companies with 50-500 employees, working in operations or supply chain roles, currently using at least two SaaS tools in their workflow, and actively evaluating solutions within the past six months.

From the initial audit and stakeholder interviews, we identified 11 potential trust signals the company could emphasize on its homepage. These included things like:

  • Concrete numbers (customer counts, uptime percentages, integrations available)
  • Credentials (security certifications, enterprise clients)
  • Promises (implementation timelines, support response times, money-back guarantees)
  • And more

Each represented something the company could truthfully claim, but we needed to know which ones would build the most trust with prospects evaluating the platform.

The survey design was straightforward. Each participant saw these 11 benefits randomized into multiple sets of five items. For each set, they selected the most important factor and the least important factor when considering whether to adopt this type of software. Participants completed several rounds of these comparisons, seeing different combinations each time.

This approach gave us enough data points to calculate a robust best-worst score for each benefit: the number of times it was selected as “most important” minus the number of times it was selected as “least important.” Positive scores indicate strong preference, negative scores indicate low importance, and the magnitude shows how strongly respondents felt either way.
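If you want to see the mechanics, here’s a minimal sketch of that count-based scoring in Python. It assumes each completed choice task is recorded as the set of items shown plus the “best” and “worst” picks; the item names and toy responses are hypothetical, and a full MaxDiff study often layers a statistical model (such as hierarchical Bayes estimation) on top of these raw counts.

```python
from collections import Counter

def best_worst_scores(responses):
    """Compute simple count-based MaxDiff scores.

    `responses` is a list of dicts, one per completed choice task:
    {"shown": [items], "best": item, "worst": item}.
    Score = times picked as best minus times picked as worst.
    """
    best = Counter(r["best"] for r in responses)
    worst = Counter(r["worst"] for r in responses)
    items = {i for r in responses for i in r["shown"]}
    return {i: best[i] - worst[i] for i in items}

# Hypothetical toy data: two tasks from one respondent
responses = [
    {"shown": ["G2 ratings", "Years in business", "Customer count",
               "SOC 2", "Gartner mention"],
     "best": "G2 ratings", "worst": "Gartner mention"},
    {"shown": ["Implementation guarantee", "Employee headcount",
               "Integrations", "Customer count", "AI features"],
     "best": "Customer count", "worst": "Employee headcount"},
]

scores = best_worst_scores(responses)
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score:+d}")
```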

The results revealed a clear hierarchy of trust signals

When we analyzed the MaxDiff results, the pattern was striking. The top-scoring benefits shared a common theme: they provided concrete evidence of proven reliability and satisfied users. The bottom-scoring benefits? They emphasized company scale and marketing visibility.

A chart showing the ranking of MaxDiff analysis SaaS trust signals.

The four highest-scoring trust signals were clear winners. G2 or Capterra ratings scored 38 points (the highest possible), indicating this was nearly universal in its importance. The number of active customers scored 30 points. An implementation guarantee (“live in 30 days or your money back”) scored 25 points. And SOC 2 Type II certification scored 16 points.

These weren’t arbitrary marketing metrics. They were the specific signals that would make someone think, “this platform delivers real value and other companies trust them.”

The middle tier included operational details that registered as minor positives but weren’t decisive: the number of successful implementations (7 points) and the availability of 24/7 support (6 points). These signals suggested competence but didn’t particularly move the needle on trust.

Then came the surprises. Years in business scored -5 points, indicating it was slightly more often selected as “least important” than “most important.” The number of integrations available scored -11 points. AI-powered features claimed scored -15 points. Employee headcount scored -36 points. And recognition as a Gartner Cool Vendor scored -55 points, the lowest possible score.

Think about what prospects were telling us: “I don’t care that you have 200 employees or that Gartner mentioned you. Show me that real companies like mine trust you and that you’ll actually deliver on your promises.”

Why customers rejected company-focused metrics

The findings revealed an insight into trust-building that extends beyond this single company. B2B buyers weigh social proof and reliability guarantees far more heavily than they weigh indicators of company scale or industry recognition.

When a business talks about its employee headcount or analyst mentions, prospects interpret this as the company talking about itself. These metrics answer the question “How big is your business?” but not “Will this solve my problem?” From the buyer’s perspective, a larger team or Gartner mention doesn’t necessarily correlate with better software or smoother implementation.

By contrast, user reviews and customer counts answer the implicit question every prospect has: “Did this work for companies like mine?” A guarantee directly addresses risk: “What happens if implementation fails?” Security certifications address legitimacy: “Is this platform secure enough for our data?”

The AI-powered features claim scored poorly, likely because it felt trendy rather than practical. Prospects for this specific business weren’t primarily concerned about cutting-edge technology; they wanted a platform that would reliably solve their workflow problems. Leading with an AI angle, while possibly true, didn’t address the core decision-making criteria.

Years in business scored negatively for similar reasons. While longevity can signal stability, in this context, it didn’t address the prospect’s immediate concerns about implementation speed and user adoption. A company could be around for years while providing clunky software with poor support.

From insight to implementation: turning research into revenue

The MaxDiff analysis gave the company a clear action plan. We recommended implementing a four-part trust signal section directly below their homepage hero, featuring the top four scoring benefits in order of importance.

This meant reworking their existing homepage structure. Previously, they had emphasized their implementation guarantee in the hero area while burying customer counts and ratings further down the page. The research showed this approach had it backward. Prospects needed to see evidence of customer satisfaction first, then the implementation guarantee as additional reassurance.

We also recommended removing or de-emphasizing several elements they had been proud of. The employee headcount mention, the Gartner recognition, and several other low-scoring items were either removed entirely or moved to less prominent positions on the site. The goal was to prevent low-value signals from crowding out high-value ones.

The broader lesson here extends beyond this single homepage optimization. The MaxDiff results provided a messaging hierarchy that the company could apply across its entire go-to-market strategy. Email campaigns, landing pages, sales conversations, demo decks, and even their LinkedIn company page could now emphasize the trust signals that actually mattered to prospects.

When MaxDiff analysis makes sense for your business

MaxDiff is particularly valuable when you’re facing a prioritization problem with limited data. It works best in these scenarios:

  • You have more options than you can implement. Whether that’s features to build, benefits to highlight, or messages to test, MaxDiff helps you choose wisely when you can’t do everything at once.
  • Stakeholder opinions are conflicting. When internal debates about priorities can’t be resolved through argument, customer data settles the question. MaxDiff provides quantitative evidence for decision-making.
  • You need to differentiate in a crowded market. If competitors are all saying similar things, MaxDiff reveals which specific claims will break through. Often, the winning messages are ones companies overlook because they seem “obvious” or “not unique enough.”
  • You’re optimizing for a specific audience segment. Generic research about “customers in general” often produces generic insights. MaxDiff works best when you recruit participants who precisely match your target customer profile.

The methodology has limitations worth noting. It requires careful setup, and you need to know which options to test before you start: if you don’t include the right benefits in your initial list, you won’t discover them through MaxDiff. It also works best with a reasonably sized set of options (typically 5-15 items). And the results tell you about relative importance, not absolute importance; everything could theoretically matter, but MaxDiff reveals the hierarchy.

How to use MaxDiff findings in your optimization strategy

Once you have MaxDiff results, the application extends beyond simply reordering homepage elements. The insights should inform your entire digital experience.

Your messaging architecture should reflect the importance hierarchy. High-scoring benefits deserve prominent placement, repetition across pages, and detailed explanation. Low-scoring benefits can either be removed or repositioned as supporting rather than leading messages.

Your testing roadmap should prioritize changes based on MaxDiff findings. If customer reviews scored highest in your study, test different ways of showcasing reviews before you test other elements. Let the data guide your experimentation priorities.

Your content strategy should emphasize what customers care about. If service guarantees scored highly, create content that explains the guarantee in detail, shares stories of when it was honored, and addresses common concerns. Build your editorial calendar around the topics MaxDiff revealed as important.

Your sales enablement should align with customer priorities. If the research showed that prospects value licensing credentials, make sure your sales team knows to emphasize this early in conversations. Create collateral that highlights the trust signals that matter most.

The most effective companies use MaxDiff as one tool in a broader research program. They combine it with qualitative research to understand why certain benefits matter, behavioral analytics to see how users interact with different messages, and continuous testing to validate that the predicted preferences translate into actual conversion improvements.

Turning guesswork into growth

The SaaS company we worked with started with 11 possible trust signals and no clear sense of which would build trust most effectively with B2B buyers. After the MaxDiff analysis, they had a data-backed hierarchy that let them confidently restructure their homepage and broader messaging strategy.

This is the power of asking prospects the right questions in the right way. Not “do you like this?”, which produces inflated scores for everything. Not “rank these 11 items,” which overwhelms participants and produces unreliable data. Instead, repeated forced choices reveal the true importance of each element.

If you’re struggling with similar prioritization challenges (too many options, limited space, stakeholder disagreement about what matters), MaxDiff analysis might be the tool that breaks through the noise. It transforms subjective opinion into statistical evidence, letting your prospects vote on what will actually convince them to choose your platform.

Ready to discover which messages actually resonate with your customers? The Good’s Digital Experience Optimization Program™ includes research methodologies like MaxDiff analysis to help you prioritize changes based on real customer preferences, not guesswork.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

The post MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust appeared first on The Good.

]]>
The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company https://thegood.com/insights/intent-based-segmentation/ Fri, 21 Nov 2025 18:45:09 +0000 https://thegood.com/?post_type=insights&p=111181 Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do. The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different […]

The post The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company appeared first on The Good.

]]>
Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do.

The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different goals, workflows, and definitions of success.

We recently worked with a leading enterprise SaaS company facing exactly this challenge. Their product served everyone from solo creators to enterprise teams, but the experience treated everyone the same.

Users logging in to track a single client project got bombarded with team collaboration features. Power users managing complex workflows couldn’t find advanced capabilities buried in generic menus.

Users felt overwhelmed by irrelevant features, engagement plateaued, and retention suffered.

The solution wasn’t another traditional segmentation model. It was intent-based segmentation: a framework for grouping users by what they’re actually trying to accomplish, then personalizing the experience to match those specific goals.

Understanding behavioral segmentation and where intent fits

Before diving into how to build intent-based clusters, it helps to understand where this approach fits within the broader landscape of user segmentation.

Most SaaS companies are familiar with demographic segmentation (company size, industry, role) and firmographic segmentation (ARR, team size, geographic location). These approaches tell you who your users are, but they don’t tell you what they’re trying to do, which is why they often fail to predict engagement or retention.

Behavioral segmentation takes a different approach. Instead of looking at static attributes, it focuses on how users actually interact with your product.

Behavioral segmentation divides users based on engagement. This includes signals like feature usage patterns, how often users open the product, purchase behavior, and time-to-value metrics.

While not the only valid approach, behavioral segmentation is widely regarded as more effective than demographic segmentation alone, with plenty of research backing that up.

Within behavioral segmentation, there are several approaches:

  • Usage-based segmentation looks at frequency and intensity of use.
  • Lifecycle segmentation tracks where users are in their journey.
  • Benefit-sought segmentation groups users by the outcomes they want to achieve.

Intent-based segmentation sits at the intersection of all three.

visual portraying intent based segmentation at the center of different types of user segmentation

It identifies clusters of users who share similar goals and workflows, then maps those patterns to create a more personalized experience.

Intent-based clusters answer the question: “What is this user trying to accomplish right now?”

In a recent client engagement that inspired this article, this distinction mattered. They had mountains of usage data showing which features people clicked, but no framework for understanding why certain feature combinations existed or what job users were trying to complete.

They knew “Business Professionals” used their tool, but that category was so broad it offered no actionable insights. A marketing manager building campaign timelines has completely different needs than a legal team tracking contract approvals, even though both might be classified as “business professionals.”

Intent-based clustering gave them that missing layer of insight.

Case study: How to spot the need for intent-based segmentation

Let’s talk more about the client engagement I mentioned. This is a great case study for when to use intent-based segmentation.

We work with this client on a quarterly retainer through our on-demand growth research services. So, when they mentioned struggling to personalize experiences and improve retention, we opened a research project that same day.

The team could see that certain users logged in daily and used five or more features. Great, right? Not really. When we dug deeper, we discovered something critical. Heavy feature usage didn’t predict retention. Some power users churned while casual users stuck around for years.

The issue wasn’t the quantity of features used; it was whether those features aligned with what users were trying to accomplish. A user coming in weekly to update a single client dashboard showed higher retention than someone exploring ten features that didn’t match their core workflow.

The symptoms: What teams told us was broken

During our stakeholder interviews, we heard the same frustrations across departments:

From product: “We know the top five features everyone uses, but that doesn’t help us understand why they’re using them or what to build next. Two users might both use our template feature, but one is building client proposals while the other is standardizing internal processes. They need completely different things from that feature.”

From marketing: “Our segments are too broad to be useful. ‘Business Professional’ could mean anyone from a solo consultant to an enterprise VP. When we send educational content, we can’t make it relevant because we don’t know what problem they’re trying to solve.”

From customer success: “We can see when someone is at risk of churning because their usage drops off, but we can’t predict it before it happens. By the time we notice, they’ve already decided the product isn’t right for them. We need to understand intent earlier so we can intervene proactively.”

From UX research: “Users think in terms of tasks, not features. They don’t say ‘I want to use the dependency mapping tool.’ They say, ‘I need to make sure the design team finishes before development starts.’ But our product talks about features, not outcomes.”

The underlying problem: Missing the ‘why’

What became clear was that the organization had plenty of data about behavior but no framework for understanding intent. They could answer questions like:

  • How many people use feature X?
  • What’s the average session duration?
  • Which users log in most frequently?

But they couldn’t answer the questions that actually mattered:

  • What are users trying to accomplish when they use feature X?
  • Why do some users stick around while others churn?
  • What combination of goals and workflows predicts long-term retention?

This may sound familiar: plenty of data about behavior, but no context about intent. Without understanding users’ different definitions of success, teams fall back on generic personalization that recommends “similar features” and misses the mark entirely.

The four-phase framework for creating intent-based user clusters

Based on our work with the enterprise SaaS client, we developed a systematic framework for building intent-based clusters from scratch.

The process has four distinct phases, each building on the previous one.

Think of this as a directional guide rather than a rigid formula. You can adapt the scope based on your resources and organizational complexity.

Phase 1: Capture institutional knowledge and identify gaps

Most organizations have more customer insight than they realize; it’s just scattered across teams, buried in old reports, and locked in individual team members’ heads. The first phase consolidates that knowledge and identifies what’s missing.

Graphic of phase 1 in the intent based user segmentation process

1. Conduct cross-functional interviews

Start by interviewing stakeholders who interact with users regularly: product managers, customer success, sales, support, and marketing. For our client, we conducted interviews with seven team members from UX research, growth, engagement, marketing, and analytics.

The goal isn’t consensus. You want to uncover patterns and contradictions instead. Focus your conversations on these questions:

  • How do you currently describe different user types?
  • What patterns have you noticed in how different users engage with the product?
  • What data exists about user behavior that isn’t being used to inform decisions?
  • Where do personalization efforts break down today?
  • What questions about users keep you up at night?

These conversations surface institutional knowledge that never makes it into documentation.

Capture everything. The contradictions are especially valuable—they reveal where teams operate with different mental models of the same users.

2. Audit existing research and reports

Next, gather relevant user research, data analysis, and customer insight. For our client, we analyzed eight existing reports, including retention data, cancellation surveys, user studies, engagement patterns, and conversion analysis.

Look for:

  • Marketing segmentation models (usually demographic-heavy)
  • User research studies (often small sample, rich insights)
  • Behavioral analytics reports (feature usage patterns, cohort analysis)
  • Customer journey maps (theoretical vs. actual paths)
  • Support ticket analysis (pain points and use cases)
  • Cancellation surveys (why users leave)

Pay attention to gaps between how different teams think about users. Marketing might segment by company size, while product segments by feature usage. Neither is wrong, but the lack of a unified framework means teams optimize for different definitions of success.

3. Define hypotheses about intent-based variables

Based on interviews and research, develop hypotheses about what variables might define user intent. This is where you move from “who uses our product” to “what are they trying to accomplish.”

For our client, we identified several dimensions that seemed to correlate with intent:

  • Primary workflow type: Are users managing team projects, client deliverables, or personal tasks?
  • Collaboration patterns: Solo work, small team coordination, or cross-functional orchestration?
  • Usage frequency: Daily operational tool or periodic project management?
  • Success metrics: Speed (quick task completion) vs. thoroughness (detailed planning and tracking)?
  • Document complexity: Simple task lists or multi-layered project hierarchies?

The goal is to create testable hypotheses that can be validated in Phase 2.

Phase 2: Validate clusters through user research and behavioral analysis

Once you have initial hypotheses, Phase 2 tests them against real user behavior and feedback. This is where hunches become data-backed insights.

visual of phase 2 in the intent based user segmentation process

1. Develop provisional cluster groups

Transform your hypotheses into provisional clusters. For our client, we identified six distinct intent-based clusters. Let me illustrate with a fictional example of how this might work for a project management SaaS tool:

Sprint Executors: Users focused on rapid task completion and daily standup workflows. They need speed, simple task updates, and quick team coordination. Think startup teams moving fast with lightweight processes.

Client Project Coordinators: Users managing multiple client engagements simultaneously with strict deliverable timelines. They need client visibility controls, progress tracking, and professional reporting. Think agencies and consultancies.

Cross-Functional Orchestrators: Users coordinating complex projects across departments with dependencies and approval workflows. They need Gantt views, resource allocation, and stakeholder communication tools. Think enterprise program managers.

Personal Productivity Optimizers: Users who treat the tool as their second brain for personal task management and goal tracking. They need customization, recurring tasks, and minimal collaboration features. Think solopreneurs and executives.

Seasonal Campaign Managers: Users with predictable high-intensity periods followed by dormancy. They need templates, bulk operations, and the ability to archive/reactivate projects easily. Think retail operations teams or event planners.

Mobile-First Coordinators: Users who primarily access the tool from mobile devices for field work or on-the-go updates. They need streamlined mobile experiences and offline sync. Think field service teams or traveling consultants.

Each cluster gets a descriptive name that captures the user’s primary intent, not just their behavior. “Sprint Executor” tells you more about what someone is trying to do than “high-frequency user.”

2. Conduct targeted user research

With provisional clusters defined, recruit users who fit each profile and conduct interviews to understand:

  • Their primary use cases and goals when they first adopted the tool
  • How they discovered and currently use the product
  • Their typical workflows from start to finish
  • What defines success in their role
  • Pain points and unmet needs
  • How they decide which features to explore
  • What would make them cancel vs. what keeps them subscribed

For our client, we conducted three to four interviews per cluster, totaling around 24 user conversations. This gave us enough coverage to validate patterns without drowning in data.

The insights were eye-opening. We discovered that one cluster had the fastest time-to-value but the lowest feature adoption. They found what they needed immediately and never explored further. Another cluster showed the highest retention but needed the longest onboarding. They invested time up front because the tool was critical to their workflow.

3. Analyze behavioral data to confirm patterns

User interviews reveal what people say they do. Behavioral data shows what they actually do. Cross-reference your clusters against:

  • Feature usage sequences (which tools appear together in sessions)
  • Time-to-value metrics by cluster (how quickly do they get their first win)
  • Retention and churn patterns (which clusters stick around)
  • Upgrade and expansion behavior (which clusters grow their usage)
  • Support ticket themes (which clusters need help with what)
  • Feature adoption curves (how exploration patterns differ)

For our client, the data revealed critical differences. The “Sprint Executor” equivalent had fast initial adoption but plateaued quickly. They found their core workflow and stopped exploring.

The “Cross-Functional Orchestrator” cluster showed slow initial adoption but deep engagement over time. They needed to learn the tool thoroughly to unlock value.

These patterns weren’t visible in aggregate data. Only by segmenting users by intent could we see that different clusters had fundamentally different paths to retention.
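If you’re running this cross-reference yourself, a minimal sketch of the cluster-level comparison might look like the following. It assumes you’ve already tagged each user with a provisional cluster and exported a few outcome columns; the file and column names are placeholders for your own schema.

```python
import pandas as pd

# Assumed export with columns: user_id, cluster, retained_90d (0/1),
# days_to_first_value. All names are placeholders.
users = pd.read_csv("users_with_clusters.csv")

by_cluster = users.groupby("cluster").agg(
    users=("user_id", "count"),
    retention_90d=("retained_90d", "mean"),
    median_days_to_value=("days_to_first_value", "median"),
)

# Clusters with very different retention or time-to-value suggest
# genuinely different paths to value, not just different labels.
print(by_cluster.sort_values("retention_90d", ascending=False))
```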

4. Build detailed cluster profiles

For each validated cluster, create a comprehensive profile that becomes the foundation for personalization. For example:

Cluster name: Sprint Executors

Primary intent: Complete daily tasks quickly with minimal friction and maximum team visibility

Most-used features:

  • Quick-add task creation
  • Board view for visual sprint planning
  • Mobile app for on-the-go updates
  • Real-time team activity feed

Typical workflow patterns:

  • Morning standup with task assignments
  • Throughout the day: quick updates and status changes
  • End of day: marking tasks complete and planning tomorrow

Behavioral flags that identify this cluster:

  • Creates 5+ tasks in first week
  • Returns daily within first 14 days
  • Uses mobile app within first 7 days
  • Rarely uses advanced features like Gantt charts or dependencies

Retention drivers:

  • Speed of task completion
  • Team visibility and accountability
  • Mobile accessibility

Churn risks:

  • Tool feels too complex for simple needs
  • Feature bloat is making core actions harder to find
  • Forced upgrades to access speed-focused features

Personalization opportunities:

  • Streamlined onboarding focused on quick task creation
  • Mobile-first feature discovery
  • Templates for common sprint workflows
  • Integrations with communication tools

These profiles become the single source of truth that product, marketing, and customer success can all reference.

Phase 3: Develop indicators and personalization strategies

The final phase connects clusters to action. This is where the framework moves from insight to implementation.

1. Create behavioral flags for cluster identification

Most users won’t self-identify their intent at signup. You need to infer cluster membership from behavioral signals early in their journey. The key is identifying flags that appear within the first 7-14 days: early enough to personalize the experience before users decide whether the tool is right for them.

visual of phase 3 in the intent based user segmentation process

For reference, the “Sprint Executor” cluster in our fictional example:

  • Created 5+ tasks in first week
  • Logged in on 4+ separate days in first 14 days
  • Used mobile app within first 7 days
  • Board or list view used more than timeline/Gantt view (80%+ of sessions)
  • Invited at least one team member within first 10 days
  • Never explored advanced dependency features
  • Average session length under 10 minutes

Versus the “Client Project Coordinator” cluster:

  • Created 3+ separate projects within first week (indicating multiple clients)
  • Used folder or workspace organization features within first 5 days
  • Set up client-specific permissions or external sharing settings
  • Created custom views or reports within first 14 days
  • Longer average session times (20+ minutes per session)
  • Uses professional or client-specific terminology in project names
  • High usage of export or presentation features

The goal is to find the minimum viable signal set that reliably predicts cluster membership. Start with more flags and refine over time based on which actually correlate with long-term behavior.
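To make the idea concrete, here’s a minimal rule-based sketch of how cluster assignment from early flags might look. The field names and thresholds mirror the fictional clusters above and are purely illustrative; in practice you’d refine them against the retention correlations you observe.

```python
def assign_provisional_cluster(user):
    """Rule-of-thumb cluster assignment from first-14-day behavior.

    `user` is a dict of early behavioral signals; every field name and
    threshold below is illustrative, not a production scoring model.
    """
    if (user["projects_created_week1"] >= 3
            and user["used_external_sharing"]
            and user["avg_session_minutes"] >= 20):
        return "Client Project Coordinator"
    if (user["tasks_created_week1"] >= 5
            and user["active_days_first_14"] >= 4
            and user["avg_session_minutes"] < 10
            and not user["used_gantt_or_dependencies"]):
        return "Sprint Executor"
    return "Unclassified"  # fall back to the default experience

example = {
    "projects_created_week1": 1,
    "used_external_sharing": False,
    "tasks_created_week1": 9,
    "active_days_first_14": 6,
    "avg_session_minutes": 7,
    "used_gantt_or_dependencies": False,
}
print(assign_provisional_cluster(example))  # -> "Sprint Executor"
```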

One critical finding from our client work: early behavioral flags predicted retention better than demographic data.

A user who exhibited “Client Project Coordinator” behaviors in week one showed 40% higher 90-day retention than the average user, regardless of their company size or job title.

2. Map personalization opportunities to each cluster

With clusters and flags defined, identify specific ways to personalize the experience across the user journey:

Onboarding sequences: Tailor the first-run experience to highlight features that match user intent. Show Sprint Executors how to set up their first sprint board, not the full feature catalog with Gantt charts and resource allocation tools they don’t need.

In-app messaging: Trigger contextual tips based on usage patterns. When a Client Project Coordinator creates their third project with similar structure, suggest project templates to save time.

Feature discovery: Recommend next-step features that align with cluster workflows. For Sprint Executors who’ve mastered basic task management, introduce the mobile app and integrations with their communication tools—not complex dependency mapping.

Content and education: Send targeted educational content that addresses cluster-specific goals. Client Project Coordinators get tips on professional reporting and client permissions. Sprint Executors get productivity hacks and team coordination strategies.

Upgrade paths: Present pricing tiers and feature upgrades that match cluster needs. Don’t push team collaboration features to Personal Productivity Optimizers who work solo and won’t use them.

Support prioritization: Route support tickets differently based on cluster. Client Project Coordinators might get priority support since they’re often managing paying clients. Seasonal Campaign Managers might get proactive check-ins before predicted busy periods.

For our client, this mapping revealed opportunities they’d completely missed. One cluster had been receiving generic “explore more features” emails when what they actually needed was advanced security capabilities for compliance requirements. Another cluster kept churning at the end of their trial because onboarding emphasized features they’d never use instead of the speed-focused tools that matched their workflow.

Phase 4: Develop test concepts to validate impact

Turn personalization opportunities into testable hypotheses. Don’t roll everything out at once. Start with high-impact, low-effort tests that prove the value of intent-based segmentation.

For our client, we proposed several test concepts structured to validate clusters quickly and build organizational confidence in the framework. Here are a few examples.

visual of phase 4 in the intent based user segmentation process

Example Test 1: Intent-Based Onboarding Survey

Background: The organization lacked a way to identify user intent at the critical moment when users were most open to guidance: right after signup, but before they’d formed opinions about product fit.

Hypothesis: By asking users to self-identify their primary goal during their first meaningful session, we can segment them into actionable clusters that enable personalized feature discovery and messaging, improving 3-month retention rates by 5-10%.

Test design: During the first session (after initial signup but before deep engagement), present a brief survey asking: “What brings you here today?” with options that map directly to identified clusters:

☐ Coordinate my team’s daily work (Sprint Executors)

☐ Manage multiple client projects (Client Project Coordinators)

☐ Organize complex cross-functional initiatives (Cross-Functional Orchestrators)

☐ Track my personal tasks and goals (Personal Productivity Optimizers)

☐ Plan seasonal campaigns or events (Seasonal Campaign Managers)

☐ Update projects while on the go (Mobile-First Coordinators)

☐ Something else (with optional text field)

Then immediately personalize their first experience based on their response: Sprint Executors see a streamlined task creation tutorial, Client Project Coordinators get guidance on setting up client workspaces, etc.

Success metrics:

  • Primary: 3-month retention rate by selected cluster (looking for 5-10% lift)
  • Secondary: Survey completion rate (target: >80%), feature adoption aligned with cluster (target: 20% lift), time to first value-generating action
  • Guardrails: No negative impact on day 2 or day 7 retention

Acceptance criteria for “winning test”:

  • Survey completion rate >80%
  • At least 60% of users select a pre-set option (vs. “something else”)
  • Statistically significant retention lift in at least one cluster
  • No degradation in key engagement metrics

Acceptance criteria for “learning test”:

  • More than 40% of users select “something else” (suggests clusters don’t match user mental models)
  • No statistically significant difference in retention (suggests clusters exist, but personalization approach needs refinement)

Audience: New paid subscribers on first day, trial users converting to paid, reactivated users returning after 30+ days dormant. Start with 25% of eligible users to minimize risk.

Timeline: 90 days to measure primary retention metric, but early signals (completion rate, feature adoption) available within 14 days.
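One of the acceptance criteria above is a statistically significant retention lift. As a rough illustration of how that check might be run, here’s a simple two-proportion z-test sketch; the retention counts are hypothetical, and a real analysis would also need to account for testing multiple clusters and for any sequential peeking at results.

```python
from math import sqrt
from scipy.stats import norm

def retention_lift_z_test(retained_a, n_a, retained_b, n_b):
    """Two-proportion z-test comparing control (a) against the
    personalized onboarding variant (b).
    Returns (lift, z statistic, one-sided p-value for lift > 0)."""
    p_a, p_b = retained_a / n_a, retained_b / n_b
    pooled = (retained_a + retained_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, z, norm.sf(z)

# Hypothetical 90-day retention counts for one cluster
lift, z, p = retention_lift_z_test(retained_a=220, n_a=1000,
                                   retained_b=265, n_b=1000)
print(f"lift = {lift:.1%}, z = {z:.2f}, p = {p:.4f}")
```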

Example Test 2: Cluster-Specific Feature Recommendations

Background: Generic in-app messaging had low click-through rates (<5%) and wasn’t driving feature adoption. Users felt bombarded by irrelevant suggestions.

Hypothesis: For users who match behavioral flags within the first 14 days, triggering personalized feature recommendations will increase feature adoption by 20% and engagement depth by 15%.

Test design: Identify users by behavioral flags, then trigger targeted in-app messages at contextually relevant moments:

  • Sprint Executors see mobile app download prompt after completing 5 tasks on desktop: “Update tasks on the go: get the mobile app”
  • Client Project Coordinators see reporting features after creating third project: “Impress clients with professional progress reports”
  • Cross-Functional Orchestrators see dependency mapping after creating a complex project: “Map dependencies to keep cross-functional teams aligned”

Success metrics:

  • Primary: Feature adoption rate for recommended features (target: 20% lift vs. control)
  • Secondary: Overall engagement depth (features used per session), message click-through rate
  • Guardrails: No increase in feature abandonment (starting but not completing flows)

Audience: Users who match cluster behavioral flags within first 14 days. Test one cluster at a time to isolate impact.

Timeline: 30 days to measure feature adoption impact.

Example Test 3: Retention Email Campaigns by Cluster

Background: Generic “tips and tricks” email campaigns had 8% open rates and weren’t moving retention metrics. Content felt irrelevant to most recipients.

Hypothesis: Segmenting email campaigns by identified cluster will improve email engagement by 50% and show a measurable correlation with retention.

Test design: Replace generic weekly tips emails with cluster-specific content:

  • Sprint Executors: “5 ways to speed up your daily standup” / “Mobile shortcuts that save 2 hours per week”
  • Client Project Coordinators: “How to impress clients with professional project reports” / “3 ways to give clients visibility without overwhelming them”
  • Personal Productivity Optimizers: “Build your second brain: advanced filtering techniques” / “Automate your recurring tasks in 5 minutes”

Send to users identified through either the onboarding survey or behavioral flags. Track engagement and retention by cluster.

Success metrics:

  • Primary: Email open rates (target: 50% lift), click-through rates (target: 100% lift)
  • Secondary: Correlation between email engagement and 90-day retention
  • Guardrails: Unsubscribe rates remain stable or decrease

Audience: Users identified as belonging to specific clusters either through survey responses or behavioral flags, minimum 14 days after signup.

Timeline: 6 weeks for initial engagement metrics, 90 days for retention correlation.

Post-test analysis framework

For each test, we established a clear decision framework:

If “winning test”:

  • Roll out to 100% of eligible users
  • Begin development on next phase of personalization for that cluster
  • Use learnings to inform tests for other clusters
  • Document what worked to build organizational playbook

If “learning test”:

  • Analyze all “something else” responses for missing clusters or unclear framing
  • Review behavioral data to see if clusters exist, but personalization was wrong
  • Iterate on messaging, timing, or format
  • Decide whether to retest with refinements or try different approach

If negative impact:

  • Immediately roll back to the control experience
  • Conduct user interviews to understand what went wrong
  • Reassess cluster definitions or personalization approach
  • Consider whether the cluster exists but needs a different treatment

The key to successful testing is starting small, measuring rigorously, and being willing to learn from failures. Not every cluster will respond to every type of personalization, and that’s valuable information. The goal isn’t perfect personalization immediately; it’s continuous improvement based on what actually moves metrics.

Intent-based segmentation mistakes and how to avoid them

Based on our experience implementing this framework, here are the mistakes that will derail your efforts:

1. Starting with too many clusters

More isn’t better. Six well-defined clusters are more useful than fifteen overlapping ones. You need enough clusters to capture meaningfully different intents, but few enough that teams can actually remember and act on them. Start with 4-6 clusters and refine over time. If you find yourself creating clusters that differ only slightly, you’ve gone too granular.

2. Confusing demographics with intent

Job title, company size, or industry might correlate with intent, but they don’t define it. We’ve seen solo consultants behave like “Cross-Functional Orchestrators” and enterprise teams behave like “Sprint Executors.” Focus on what users are trying to accomplish, not who they are on paper.

3. Creating overlapping clusters

Each cluster should be distinct in its primary intent and workflow patterns. If you’re struggling to articulate how two clusters differ behaviorally, they’re probably the same cluster with different labels. Test this by asking: “If I saw someone’s usage data, could I confidently assign them to one cluster?”

4. Ignoring edge cases entirely

Some users will span multiple clusters or switch between them based on context. That’s fine. The framework should accommodate primary intent while recognizing that users are complex. A user might primarily be a “Client Project Coordinator” but occasionally use “Personal Productivity Optimizer” features for their own task management. Don’t force rigid categorization.

5. Skipping the validation step

Your initial hypotheses will be wrong in places. User research and behavioral data keep you honest and prevent confirmation bias. We’ve seen teams fall in love with theoretically elegant clusters that don’t actually exist in their user base, or miss entire segments because they didn’t fit the initial hypothesis.

6. Treating clusters as static

User intent evolves. Someone might start as a “Personal Productivity Optimizer” and grow into a “Client Project Coordinator” as their business scales. Review and refine your clusters quarterly based on new data, product changes, and market shifts.

7. Personalizing too aggressively too soon

Start with high-confidence, low-risk personalization (like targeted email content) before you completely diverge user experiences. You want to validate that clusters behave differently before you build entirely separate onboarding flows.

8. Forgetting to measure impact

Intent-based segmentation is valuable only if it improves outcomes. Define success metrics upfront (e.g., retention lifts, engagement depth, upgrade rates, support ticket reduction) and track them by cluster. If personalization isn’t moving these metrics, refine your approach.

Making intent-based segmentation work for your organization

The framework we’ve outlined works across product categories and company sizes, but implementation varies based on your resources and organizational maturity.

If you have limited data: Start with Phase 1 and Phase 2, using qualitative research to define clusters before investing in behavioral infrastructure. You can manually tag users based on interview responses and onboarding surveys, then personalize through targeted emails and customer success outreach. As you grow, build the data systems to automate cluster identification.

If you have rich behavioral data but limited research capabilities: Reverse the order. Start with data patterns and validate through targeted interviews. Look for natural groupings in your analytics that suggest different workflow types, then talk to representative users from each group to understand their intent.

If you’re a small team: Don’t let perfect be the enemy of good. Start with 3-4 obvious clusters based on your highest-level workflow differences. The founder of a 10-person startup probably has a better intuitive understanding of user intent than a 500-person company with siloed data. Write down what you know, test it with a few users, and start personalizing.

If you’re a large enterprise: The challenge is getting organizational alignment, not defining clusters. Use Phase 1 to surface where teams already operate with different mental models, then use data to arbitrate. Create executive sponsorship for the new framework so it becomes the shared language across product, marketing, and CS.

The key is starting somewhere. Most companies know their one-size-fits-all approach isn’t working, but they keep personalizing around variables that don’t actually predict what users are trying to accomplish.

Intent-based segmentation reorients everything around the question that actually matters: What is this user trying to accomplish, and how can we help them succeed at that specific goal?

Turn insights into retention that drives revenue

Understanding user intent is just the first step. The real value comes from translating those insights into personalized experiences that keep users engaged and drive measurable revenue growth.

At The Good, we’ve spent 16 years helping SaaS companies identify their most valuable user segments and optimize experiences around what actually drives retention. Our systematic approach to user segmentation goes beyond frameworks. We help you implement experimentation strategies that prove which personalization efforts move the needle on the metrics your board cares about.

Plenty of companies struggle to implement segmentation that’s actually actionable. They end up with beautiful personas gathering dust or broad categories that don’t inform product decisions.

Intent-based segmentation is different because it connects directly to behavior you can observe and experiences you can personalize.

If you’re struggling with generic experiences that fail to resonate with different user types, or if you know your segmentation could be better but aren’t sure where to start, let’s talk about how intent-based segmentation could transform your retention strategy and drive revenue growth.

Now It’s Your Turn

We harness user insights and unlock digital improvements beyond your conversion rate.

Let’s talk about putting digital experience optimization to work for you.

The post The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company appeared first on The Good.

]]>
Why Are Free Users Churning? A Growth Leader’s 5-Step Guide To Auditing The Free User Experience https://thegood.com/insights/why-are-free-users-churning/ Thu, 16 Oct 2025 20:56:17 +0000 https://thegood.com/?post_type=insights&p=110962 “My free users aren’t converting, where do I start?” If you’re asking this question, you’re already ahead of most product leaders. You recognize the problem. But here’s what many miss: conversion is a symptom, not the root cause of the problem. SaaS churn often happens before users ever consider paying. It’s common for users to […]

The post Why Are Free Users Churning? A Growth Leader’s 5-Step Guide To Auditing The Free User Experience appeared first on The Good.

]]>
“My free users aren’t converting, where do I start?”

If you’re asking this question, you’re already ahead of most product leaders. You recognize the problem. But here’s what many miss: conversion is a symptom, not the root cause of the problem.

SaaS churn often happens before users ever consider paying.

It’s common for users to hit friction points you didn’t know existed. They encounter gates that make no sense in context. They drop off at moments when just a bit more clarity could have kept them engaged.

The good news? You can fix this. But not by guessing. Not by copying what Dropbox or Notion does. And usually not by adding more features.

What you need is a systematic audit of your free or anonymous user experience. One that reveals exactly where users hit walls, why they bounce, and what you can do to keep them engaged long enough to see value.

This article walks through a five-step framework that SaaS product and growth leaders can use to audit their free experience and reduce churn. It’s the same approach we use with clients, adapted so you can run it internally. Fair warning: this takes work. But if you’re serious about improving SaaS user retention, it’s worth every hour.

Why your free experience impacts your retention rate

Before we get into the framework, let’s be clear about what we mean by “free experience.”

This includes any interaction where users engage with your product without paying. That could be a free trial, a freemium tier, anonymous tool usage, or limited feature access. It’s the first impression, the test drive, the “try before you buy” phase.

And it matters more than you think.

Most SaaS companies obsess over free-to-paid conversion rates. But conversion is a lagging indicator. By the time a user decides not to convert, the damage is already done. They disengaged days or weeks ago. They just didn’t tell you.

The real opportunity sits upstream. If you can identify and remove friction in the free experience, you don’t just improve conversion rates. You improve activation rates, engagement, time-to-value, and long-term retention. You build a user base that actually wants to pay because they’ve already seen the value.

Here’s how to find those friction points.

Step 1: Review your data for drop-off points

Start with what’s already happening in your product. Before you talk to anyone or look at competitors, you need to know exactly where users are getting stuck.

Dig into your product analytics. You’re looking for three things:

Activation drop-offs: Where do users abandon the onboarding flow? Which steps have the highest exit rates? If 60% of users drop off when asked to invite teammates, that’s a signal.

Feature engagement patterns: Which features do free users actually use? Which ones do they try once and never touch again? Are there features you’ve gated that users don’t even attempt to access?

Time-to-value analysis: How long does it take users to complete their first valuable action? And what percentage of users never get there? If your median time-to-value is three days, but 70% of users churn within 48 hours, you have a problem.

Set up a dashboard that tracks these metrics by cohort. New signups this week versus last month. Users from different acquisition channels. Free trial versus freemium. The patterns that emerge will guide your optimization priorities.
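If you’re assembling that dashboard by hand, a minimal sketch with pandas might look like this. The CSV and column names are assumptions; swap in your own analytics export.

```python
import pandas as pd

# Assumed columns: user_id, signup_at, first_value_action_at (blank if never)
users = pd.read_csv("users.csv", parse_dates=["signup_at", "first_value_action_at"])

users["cohort_week"] = users["signup_at"].dt.to_period("W")
users["activated"] = users["first_value_action_at"].notna()
users["hours_to_value"] = (
    (users["first_value_action_at"] - users["signup_at"]).dt.total_seconds() / 3600
)

# Activation rate and median time-to-value by weekly signup cohort
cohort_summary = users.groupby("cohort_week").agg(
    signups=("user_id", "count"),
    activation_rate=("activated", "mean"),
    median_hours_to_value=("hours_to_value", "median"),
)
print(cohort_summary)
```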

Layer on session recordings and heatmaps to see exactly what’s happening at key drop-off points. Numbers tell you where the problem is. Qualitative data tells you why.

Watch 20-30 sessions of users who churned in their first week. What did they try to do? Where did they get stuck? What confusion or frustration is evident in their behavior?

This isn’t just a data review. It’s detective work. You’re building a picture of where your free experience breaks down.

Step 2: Talk to users (both active and churned)

Now that you’ve identified drop-off points in your analytics, it’s time to understand the human story behind those numbers. Conduct 10-15 interviews, split between two groups:

Active free users (people still using your product but haven’t upgraded): Why are they still here? What value are they getting? What would make them pay? What’s holding them back?

Churned users (people who tried your product and left): What were they trying to accomplish? Where did they get stuck? What made them give up? What would have kept them engaged?

Keep these conversations short (15-20 minutes) and focused. You’re not selling. You’re learning.

Sample questions for active free users:

  • What problem were you trying to solve when you first signed up?
  • Walk me through how you use [product] today.
  • What features do you wish you had access to?
  • What would need to change for you to consider upgrading?
  • If we removed [specific free feature], would you still use the product?

Sample questions for churned users:

  • What were you hoping to accomplish with [product]?
  • Where did you get stuck?
  • Was there a specific moment when you decided it wasn’t for you?
  • Did you consider other tools? What made you choose them instead?

Record these conversations (with permission) and transcribe them. The exact language users employ to describe their experience reveals friction points you’d never spot in analytics alone.

Pay special attention when users mention alternatives they considered or are currently using. This context becomes critical in the next step.

Step 3: Map what your users are being offered in the market

You now understand what’s happening in your product and why users make the decisions they do. The next question is: what are they comparing you against?

Your users don’t evaluate your free experience in a vacuum. They’re weighing it against every other tool they’ve tried, every competitor they’re considering, and every product they wish yours worked more like.

This step isn’t about copying competitors. It’s about understanding the full landscape of options your users are navigating.

Create a comprehensive inventory of how other products in your space (and adjacent to it) handle their free experiences. Document what your users are seeing elsewhere.

Here’s what to capture in a Figma or Notion file.

An example from The Good showing what to capture in Figma when auditing SaaS tools and answering why are free users churning?

Set up a page with one row per product. For each one, document:

  • What features are available without registration
  • What requires an email address but remains free
  • Where the hard paywalls sit
  • How they communicate limits (countdown timers, credit displays, etc.)
  • Placement and messaging of upgrade prompts
  • Onboarding flows and activation sequences

Don’t limit yourself to direct competitors. Look at the tools your users mentioned in interviews. If they’re comparing your productivity tool to Notion, your design tool to Figma, or your automation platform to Zapier, study how those products handle free users.

Pro tip: Screenshot everything. Your database should include visual documentation of every monetization touchpoint, limit notification, and upgrade CTA. These screenshots become invaluable references when you’re making decisions about your own experience.

This exercise typically takes 8-12 hours for a thorough analysis of five to seven products. You’ll surface approaches you hadn’t considered and identify industry patterns that users have come to expect.

The goal here is context. When a user hits a limit in your product, they’re mentally comparing that experience to how Dropbox handles storage limits, how Canva displays upgrade options, or how Grammarly shows premium features. Understanding those reference points helps you design a free experience that meets or exceeds market expectations.

Step 4: Run a verb scoring exercise

With data, user insights, and market context in hand, it’s time to systematically evaluate your own product’s free experience. This is where verb scoring comes in.

Verb scoring evaluates the discrete actions users can take in your product and assigns each one a “score” based on the level of friction required. The six verb scores are:

  • Anonymous – Users can take this action without providing any information
  • Limited Anonymous Use – Users can take this action without registration, but only a limited number of times
  • Free with Registration – Users must register (email + basic info), but can take this action unlimited times for free
  • Limited Registered Use – Registered users can take this action, but with caps or restrictions
  • Trial with Payment – Users must provide payment information to access this action (even if they’re not charged immediately)
  • Gated – Only paying customers can take this action

A chart from The Good outlining verb score, definition and purpose.

List every meaningful action users can take in your product. Not features, but actions. “Create a document” is a verb. “Edit collaboratively” is a verb. “Export to PDF” is a verb. “Share via link” is a verb.

Then score each one. Where does it fall on the spectrum from Anonymous to Gated?

This exercise reveals your actual monetization strategy, not the one you think you have. You’ll often find that verbs are gated inconsistently, or that you’re giving away too much (or too little) at critical moments.

For a detailed walkthrough of verb scoring, including decision trees and examples, see our guide on verb scoring for product strategy.

Create a verb scoring matrix that maps all your verbs against these six scores. This becomes your baseline. It shows exactly where friction exists in your free experience, allowing you to compare it directly to what you documented in Step 3.
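A verb scoring matrix doesn’t need special tooling; a spreadsheet works fine. But if you want to sort and filter programmatically, here’s a minimal sketch in Python. The verbs and their scores are hypothetical examples for a document tool, not recommendations.

```python
import pandas as pd

# The six friction levels, ordered from least to most friction
VERB_SCORES = [
    "Anonymous", "Limited Anonymous Use", "Free with Registration",
    "Limited Registered Use", "Trial with Payment", "Gated",
]

# Hypothetical verb inventory; replace with your own product's actions
verbs = {
    "View a shared document": "Anonymous",
    "Create a document": "Free with Registration",
    "Share via link": "Free with Registration",
    "Edit collaboratively": "Limited Registered Use",
    "Export to PDF": "Gated",
}

matrix = pd.DataFrame(
    [(verb, score, VERB_SCORES.index(score)) for verb, score in verbs.items()],
    columns=["verb", "score", "friction_level"],
).sort_values("friction_level")

print(matrix.to_string(index=False))
```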

Step 5: Connect the dots between data, users, market context, and verb scoring

This is where the audit comes together. You now have four layers of insight:

  1. Quantitative and qualitative data: Where users drop off and what they’re doing (or not doing)
  2. User feedback: Why they drop off and what they’re thinking
  3. Market context: What alternatives they’re comparing you against
  4. Verb scoring matrix: Where friction exists in your own product

Lay them side by side. Look for patterns.

Here’s what you’re hunting for:

Friction without reason

Look out for verb scores that create unnecessary barriers relative to market norms. For example, if your data shows 40% of users bounce before registering, user interviews reveal confusion about what your product does, and your market analysis shows that competitors allow anonymous exploration, you’re likely losing users before they experience value. Your verb scoring can reveal that you’re gating too early.

Value leaks

Check for free features that users love but that don’t move them toward conversion. If your most-used free features have no connection to paid capabilities, and users in interviews can’t articulate why they’d upgrade, you’re building a user base that will never pay. Your verb scoring might show you’re giving away too many “Free with Registration” verbs without strategic “Limited Registered Use” prompts.

Invisible gates

Watch for paywalls that users hit without understanding why. Your data shows sudden drop-offs at specific upgrade prompts. User interviews reveal confusion about value or poor timing. Market analysis shows competitors explain premium benefits more clearly. Your verb scoring identifies which verbs are gated, but not whether those gates make sense to users.

Poorly timed friction

Be alert to limits or gates that appear before users have experienced enough value. Data shows high bounce rates at the first upgrade prompt. User interviews reveal frustration: “I hadn’t even figured out the basics yet.” Market analysis shows that similar tools delay friction until after activation. Your verb scoring might reveal that you’re using “Limited Anonymous Use” or “Trial with Payment” too early in the journey.

Market misalignment

Look for patterns where your verb scoring differs significantly from market norms and where your churn data confirms that the gap matters. For instance, if every competitor allows free PDF exports but you gate this behind payment, your churned user interviews will likely mention this as a dealbreaker.

Create a prioritized list of friction points based on:

  • Impact (how many users are affected, based on your data?)
  • Confidence (do your user interviews confirm this is a problem?)
  • Effort (how hard is this to fix?)
  • Market expectation (is this friction standard, or are you an outlier?)

This becomes your retention optimization roadmap.
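
To make the prioritization repeatable, here is a small, hypothetical scoring sketch in TypeScript based on the four criteria above. The 1–5 scales and the equal weighting are assumptions used for illustration; weight the factors however your team agrees makes sense.

```typescript
interface FrictionPoint {
  name: string;
  impact: number;      // 1-5: how many users are affected, from your data
  confidence: number;  // 1-5: how strongly interviews confirm the problem
  effort: number;      // 1-5: how hard the fix is (higher = harder)
  outlier: boolean;    // true if you deviate from the market norm
}

// Higher score = fix sooner. Equal weighting is an assumption; adjust to taste.
const priority = (f: FrictionPoint): number =>
  f.impact + f.confidence + (f.outlier ? 2 : 0) - f.effort;

// Hypothetical friction points; substitute the ones your audit surfaced.
const roadmap: FrictionPoint[] = [
  { name: "Registration required before the first value moment", impact: 5, confidence: 4, effort: 3, outlier: true },
  { name: "PDF export gated behind payment", impact: 3, confidence: 5, effort: 1, outlier: true },
  { name: "Unclear copy on the upgrade modal", impact: 2, confidence: 3, effort: 1, outlier: false },
];

roadmap
  .sort((a, b) => priority(b) - priority(a))
  .forEach((f) => console.log(priority(f), f.name));
```

The exact formula matters less than applying the same criteria to every friction point, so the roadmap is easy to defend when priorities are questioned.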

Why this framework works

This five-step audit framework delivers three specific outcomes that improve SaaS user retention:

Get a clear path to higher retention rates: No more guessing. You’ll have a prioritized list of friction points ranked by impact and effort. Fix the top three and you’ll see measurable improvement in activation, engagement, and conversion.

Make data-driven decisions: Create a culture of user-centered decisions rather than those based on the highest-paid person’s opinion, historical choices, or a gut feeling. When you combine quantitative data, qualitative research, market context, and systematic verb scoring, arguments become easy to settle.

Prevent feature flops: Validate changes before implementation. You’ll know which gates to remove, which features to add to your free tier, and which upgrade prompts to reposition, all before you waste valuable development resources.

Teams that run this audit consistently report two things: first, they’re surprised by what they find. Assumptions they’d held for months or years turn out to be wrong. Second, the fixes are often simpler than expected. Sometimes all it takes is moving an upgrade prompt, clarifying messaging, or ungating a single feature.

Running this audit takes time (and that’s the point)

Let’s be honest: this framework requires a meaningful investment. Between data analysis, user interviews, market research, and verb scoring, you’re looking at 40-60 hours of work.

That’s assuming you have the right tools, know how to set up proper analytics, can recruit and interview users effectively, and have experience interpreting qualitative data.

For many SaaS teams, that’s exactly the problem. You know you need to audit your free experience. You know churn is killing growth. But your product team is building features, your growth team is running acquisition campaigns, and nobody has the bandwidth or expertise to run a proper retention audit.

That’s where The Good’s Digital Experience Optimization Program™ comes in.

We’ve run this exact process dozens of times for SaaS companies between product-market fit and scale. Companies like yours with $1M-$30M ARR and pressure to accelerate growth while battling churn.

Our team conducts the full audit, including data review, user research, market analysis, and verb scoring, and delivers a prioritized roadmap of friction points with specific recommendations. Then we help you implement, test, and optimize the changes.

The result? Clients typically see measurable improvements in activation and retention within 60-90 days. More importantly, they build an optimization discipline that compounds over time.

Want to see where your free experience is bleeding users? Schedule an introductory call to discuss how we can help you reduce churn and improve SaaS user retention.

Fritz O’Connor Stays User-Centered and Leads with Data During Uncertain Times https://thegood.com/insights/fritz-oconnor/ Thu, 04 Sep 2025 20:09:59 +0000

Building operational excellence in marketing isn’t just about implementing the latest tools or following industry best practices. It requires a deep understanding of customers, systematic thinking, and the ability to lead teams through uncertainty with data as your guide.

Fritz O’Connor, former VP of Marketing at Ironman 4×4 America, exemplifies this approach. With over two decades of experience spanning manufacturing, sales, and marketing leadership, Fritz has developed a methodology for building high-performing organizations that deliver results consistently, even in challenging circumstances.

A marketing leader built for manufacturing

Fritz’s career journey reads like a masterclass in understanding customers across different industries. Starting in the printing and paper industry, he cut his teeth in structured sales training programs that taught him the fundamentals of professional sales and business operations.

“I’ve spent my entire career in sales and marketing roles. Almost exclusively in the manufacturing sector for companies that make stuff,” Fritz explains. This foundation in manufacturing would prove invaluable throughout his career, giving him deep insight into the complexity of bringing physical products to market.

His two-decade tenure at GE further refined his skills across diverse business environments. “We always used to say we can work in any industry, anywhere in the world, and still get paid by the same company,” he recalls. This experience working across plastics, appliances, and GE Corporate gave him a unique perspective on how great companies operate at scale.

But it was during his time at GE Corporate that Fritz discovered what would become his career-defining framework: differential value proposition (DVP). Working in a marketing consulting role with virtually every business in GE’s global portfolio, he helped launch this customer-centric approach to messaging and positioning throughout the organization.

This systematic approach to understanding and serving customers became foundational to Fritz’s ongoing success.

Implementing systems and frameworks that take teams from features to solutions

Originally coined by the founder of Valkre Solutions, Jerry Alderman, the DVP framework transforms how companies think about customer messaging and competitive positioning. Fritz became a master at implementing this methodology across diverse organizations.

“What are you offering? Be it a product or service that is better than the customer’s next best alternative,” Fritz explains. This might seem simple, but the implications are profound. Rather than competing on features or price, DVP focuses on solving customer problems in ways that competitors simply cannot match.

The challenge, as Fritz learned during his GE implementation, is that DVP represents a fundamental shift in thinking. "Every business, product, or service has a value proposition, but not every value proposition is differential. So many companies have the same value proposition. The white space is that differential part."

"It's about switching thinking from a feature to a benefit. For example, a blue appliance is not a differential value proposition. It's a feature."

Fritz teaches teams to make this shift by leading with problems and solutions.

"It's how it makes the consumer or customer's life better, how it solves that problem. You have to identify what the problem is. You have to articulate how you can fix that problem in a different way, better than anybody else."

This shift from features to solutions requires teams to understand their customers' actual problems, not just their stated needs.

For leaders, this translates directly into more effective product messaging, clearer value propositions, and ultimately, higher conversion rates.


Overcoming the "this is how we've always done it" challenge

One of Fritz's biggest career wins (and ongoing challenges) centers around implementing the Differential Value Proposition (DVP) methodology across organizations. The implementation at GE became both a success story and a learning experience in change management.

"As you can imagine, anytime you try and launch a new process in a company the size of GE, you can be met with resistance. Especially when you're coming out of corporate."

This resistance taught Fritz a crucial lesson about implementing change: "I don't view that as a challenge or a stumbling block, but as a fantastic and wonderful opportunity because when you flip those people, they become your biggest proponents."

His approach centers on listening first, then demonstrating value in the stakeholder's own language. "It's a listening journey. You've gotta understand what the challenges are of the people with whom you're working, whether it's an external customer or an internal customer."

"Proactively listen and walk in the shoes of the people I'm working with. When I'm trying to introduce something as significant as DVP or other business tools."

This listening approach helps identify the real challenges and resistance points, making it possible to address them effectively.

The foundation: accountability, responsibility, and challenge

But having the right frameworks isn't enough. Fritz learned that execution depends on creating the right team culture. He is quick to credit his teams as the backbone of his successful projects, and one of the ways he supports them is with clear organizational principles.

"I have a few underlying business principles that I've gained along the way that are the foundational threads for me," Fritz explains. "One is, any team I work with or works for me, my job is to make them as successful as possible."

This people-first approach manifests through three guiding principles:

  • Accountability: Holding yourself and your team responsible for deliverables and outcomes
  • Responsibility: Taking ownership of significant business challenges
  • Challenge: Embracing difficult problems that create meaningful business impact

"The way I do that is through three guiding principles, which are accountability, responsibility, and challenge," Fritz notes. "I want to be entrusted with significant responsibility that is helping to solve a significant business challenge."

These principles translate into a simple but powerful operational mantra: deliver on time, complete with excellence.

"I know those all sound like buzzwords, but they're not meant to be. And we don't treat them as such. We treat them as very simple guiding principles to keep us focused."

Putting it all together at Ironman 4x4

When Fritz joined Ironman 4x4 America, he found the perfect opportunity to apply all of these frameworks.

Ironman 4x4 is a global company that sells off-road parts and accessories for 4x4 vehicles (lift kits, suspension parts, bumpers, etc.). They have been around since the 1950s, but were new to the United States, so Fritz had the opportunity to find new ways to market their complex "fitment" products, or parts that must work with specific vehicle makes and models. This complexity creates both technical and marketing challenges that Fritz's team had to solve systematically.

His sales background gave him an invaluable perspective on marketing effectiveness. "If you spend any time in sales, that means you're around customers, whether those are B2B or B2C customers. And you learn what's important to them."

This customer proximity taught him the critical principle of "show me, don't tell me." Rather than relying on feature lists or industry awards, effective marketing demonstrates value through customer experiences and outcomes.

"We always, in both sales and marketing, it's easy to get into the trap of just talking, talking, talking, describing stuff, talking about features and benefits. Talking about the industry's best. Nobody cares about your industry. They care about how your product or service is going to impact them."

The key to marketing complex products, Fritz knew, is understanding how customers think about their problems. Rather than leading with technical specifications, the focus should be on the customer's end goal and the emotional drivers behind their purchase decisions.

Fritz emphasizes the importance of demonstrating value rather than just describing it: "Really, visual storytelling, video storytelling, placing the customer in the scene so they understand your value. That ability comes from firsthand experience of seeing that happen in the sales arena."

A data-driven website replatforming

His POV shaped everything he was involved in at Ironman 4x4 America, from new product introduction processes to website optimization. Fritz implemented structured new product integration toll gates with clear deliverables and cross-functional accountability, ensuring every product launch was executed with precision across creative, digital, and channel marketing.

His customer-centered thinking and frameworks proved essential when his team tackled a complex website migration from an outdated platform to Shopify. The project was based on their understanding that a website change was necessary to better serve their audience and increase ecommerce sales.

Working with The Good on a DXO Program™, the Ironman 4x4 team executed the redesign and replatforming with data-driven methodology. Rather than relying on opinions about what the site should look like, they embraced rapid prototyping and continuous testing.

"Any decision made without data is just an opinion, right?" Fritz notes, referencing CEO Luke Schnacke's philosophy.

"We try to be very data-driven, which is why it was so important for us to work with The Good, to get that data and share it with the team managing the website replatforming so that they were making data-driven decisions on design and functionality."

They didn’t wait for a “perfect website” to figure out what customers wanted. They tested and got feedback throughout the entire process to make sure they were developing the right ideas.

"I realized we were never going to do it perfectly," Fritz recalls. The team was getting bogged down in opinions about checkout processes, product customizers, and overall site design. "We could end up using half our development budget on building something that doesn't perform."

"Ultimately, we agreed to launch and then test the heck out of it. We didn't want to overburden the development pipeline with projects that don't have a financial impact."

This represents a fundamental shift in thinking. They went from trying to build the perfect site to building a testable foundation for continuous improvement.

The beauty of working with The Good in this situation, Fritz explains, was "the rapid prototyping, the test and learn. We could very quickly get feedback and iterate and then test and learn again."

Multiplying results through partnership

Leveraging an external partnership accelerated progress beyond what internal resources could achieve alone and held the team accountable to the frameworks and goals of staying user-centered and data-driven.

"If you're not an expert, I would recommend doing a website project with a company like The Good. It wasn't a cost, it was an investment," Fritz emphasizes. "And I think that Ironman 4x4 is the beneficiary of the investment that they made with The Good as they migrated over to Shopify and learned about what customers would like."

The partnership enabled intentional, studied testing with proper dependencies and measurable results tracking.

"That whole test and learn methodology is done in a very structured, deliberate way. Making changes in a waterfall, with the proper dependencies articulated, and then tracking the measurable benefits of changes, and then tweaking accordingly from there."

This approach breeds confidence because it's entirely data-driven, removing guesswork from critical business decisions.

Lessons for marketing and sales leaders

For marketing and sales leaders looking to build similar operational excellence, Fritz's approach provides a roadmap: start with principles, understand your customers deeply, make decisions based on data, and never underestimate the power of strategic partnerships to unlock potential.

Start with principles, not tactics

Before implementing any marketing or optimization program, establish clear guiding principles. Fritz's framework of accountability, responsibility, and challenge provided a foundation that influenced every decision and created lasting organizational change.

Understand your customer's next best alternative

Move beyond feature-benefit messaging to understand what your customers would do if your solution didn't exist. This "next best alternative" thinking is the foundation of truly differential value propositions.

Convert resistance through understanding

When facing organizational resistance to change, focus on understanding stakeholder concerns rather than pushing solutions. Meet people where they are and demonstrate value in their language.

Embrace data-driven decision making

Resist the temptation to rely on opinions or best practices. Instead, create structured testing methodologies that let customer behavior guide optimization decisions.

Invest in external partnerships strategically

Recognize when external expertise can accelerate progress. The right partnerships provide capabilities and perspectives that internal teams may not possess, ultimately delivering better results faster.

Starting an optimization journey

Fritz's approach to building and scaling teams, including Ironman 4x4's US marketing operations, demonstrates how principled leadership, customer-centric thinking, and strategic partnerships can create sustainable competitive advantages.

"There's no obstacle too big that can't be overcome with data and optimization, right?" Fritz states emphatically. "The whole point of being data-driven and optimizing is to get time back and to become more efficient."

His advice for other leaders facing similar challenges?

"Get to yes. Figure out how to do it. Don't say, this is why I can't do it. Say this is how I'm going to do it. Here are things I need to do in order to do it. Then hold yourself accountable. Make it happen. Do it."

The secret, according to Fritz, lies in celebrating small wins that compound over time: "Little steps, I always like to say, celebrate the little wins. Go after the little wins because they compound on one another and then all of a sudden you're gonna look back and go, holy mackerel, I can't believe I am where I am."

What ties it together is consistency: "And it starts with data as your foundation and optimization as the accelerator."

For ecommerce leaders looking to build similar operational excellence, Fritz's framework provides a proven template: establish clear principles, understand customer problems deeply, make data-driven decisions, and never underestimate the power of strategic partnerships to accelerate growth.

Ready to optimize your ecommerce experience with data-driven methodology? Learn more about The Good's Digital Experience Optimization Program™ and discover how strategic partnerships can unlock your growth potential.


The Good helps ecommerce brands like Ironman 4x4 optimize their digital experiences through research-backed testing and strategic partnerships. Our team combines deep technical expertise with proven methodologies to deliver measurable results for growing brands.

How to Validate Website Design Changes: A Decision Framework https://thegood.com/insights/website-design-changes/ Thu, 28 Aug 2025 21:23:05 +0000

How do you know if that new homepage design, updated pricing page, or streamlined onboarding flow will actually improve conversions before you build it?

The default answer has been A/B testing. But while A/B testing remains the gold standard for high-stakes decisions, it’s not always the right tool for every design change. Many teams have fallen into the trap of either testing everything (creating bottlenecks and slowing innovation) or testing nothing (making changes based purely on intuition).

There’s a better way. By understanding when different validation* methods are most appropriate, SaaS teams can make faster, more confident design decisions while maintaining the rigor needed for their most critical changes.

*Note: We know validation is a bad word in the research community because it implies “proving you’re right,” but we feel it’s easier to read and more quickly comprehensible for those not in research disciplines. We’re using “validation” in this article, but “evaluation” or “confirm or disconfirm” would be more accurate in other settings.

The real cost of a bad experimentation strategy

When teams lack a clear strategy for validating decisions, they create what researcher Jared Spool calls “Experience Rot” – the gradual deterioration of user experience quality from moving too slowly or focusing solely on economic outcomes rather than user needs.

The costs manifest in several ways:

  • Opportunity cost: Market opportunities disappear while waiting for test results that may not even be necessary
  • Resource waste: Development time gets tied up in prolonged testing initiatives for low-risk changes
  • Analysis paralysis: Teams debate endlessly about what to test next instead of making decisions
  • Competitive disadvantage: Competitors gain ground while you’re stuck in lengthy validation cycles

The key is matching your experimentation method to the decision you’re making, rather than forcing every design change through the same validation process.


A framework for design validation decisions

The path to better validation starts with two fundamental questions about any proposed design change:

  1. Is this strategically important? Does this change significantly impact key business metrics or user experience?
  2. What’s the potential risk? What happens if this change performs worse than expected?

Using these dimensions, you can map any design change into one of four validation approaches:

A decision making framework for validating decisions regarding website design changes.

High Strategic Importance + Low Risk = Just ship it

If you can’t explain meaningful downsides to a design change but know it’s strategically important, you probably don’t need to validate it at all. These are your quick wins.

Examples for SaaS teams:

  • Adding customer testimonials to your pricing page
  • Improving mobile responsiveness
  • Fixing broken links or outdated screenshots
  • Adding clearer error messages in your product

Why this works: The upside is clear, the downside is minimal, and the time spent testing could be better invested elsewhere.

Low Strategic Importance = Deprioritize

Not every design change needs validation because not every change is worth making. Some modifications simply aren’t worth the time and resources, regardless of the validation method you might use.

Examples of low-impact changes:

  • Minor color adjustments to non-critical elements
  • Changing footer layouts
  • Tweaking secondary page designs that get little traffic
  • Adjusting spacing that doesn’t affect usability

When to reconsider: If data later shows these areas are creating friction, they can move up in priority.

High Strategic Importance + High Risk = Validation territory

This is where both A/B testing and rapid testing methods become valuable. The critical next decision becomes: can you reach statistical significance within an acceptable timeframe, and are you technically capable of running the experiment?
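
As a rough illustration only, here is a TypeScript sketch that encodes the two questions and the four resulting approaches. The boolean inputs and the traffic threshold are simplifications of the guidance in this article, not a substitute for team judgment.

```typescript
type ValidationApproach = "Just ship it" | "Deprioritize" | "A/B test" | "Rapid test";

interface ChangeAssessment {
  strategicallyImportant: boolean; // does it move key metrics or a core experience?
  highRisk: boolean;               // would underperformance be expensive or hard to undo?
  weeklyTraffic: number;           // visitors per week to the affected experience
  canRunExperiment: boolean;       // team can implement and track a proper test
}

function chooseValidation(c: ChangeAssessment): ValidationApproach {
  if (!c.strategicallyImportant) return "Deprioritize";
  if (!c.highRisk) return "Just ship it";
  // High importance + high risk: decide between A/B testing and rapid testing.
  const enoughTraffic = c.weeklyTraffic >= 1000; // rough heuristic, see the criteria below
  return enoughTraffic && c.canRunExperiment ? "A/B test" : "Rapid test";
}

// Example: a strategically important redesign of a low-traffic enterprise sales page.
console.log(
  chooseValidation({
    strategicallyImportant: true,
    highRisk: true,
    weeklyTraffic: 300,
    canRunExperiment: true,
  })
); // -> "Rapid test"
```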

When to use A/B testing vs rapid testing

This decision tree helps determine if your website design changes should be tested or if another approach should be used.

When to use A/B testing for design changes

A/B testing remains your best option for design changes when:

  • You have sufficient traffic on the experience: Generally, you need 1,000+ visitors per week to the page being tested
  • The change is reversible: You can easily switch back if the results are negative
  • You need statistical confidence: Stakes are high enough to justify the time investment
  • Technical capability exists: Your team can implement and track the test properly

Examples of SaaS use cases for A/B testing:

  • Complete homepage redesigns
  • Pricing page layouts and messaging
  • Sign-up flow modifications
  • Core product onboarding changes
  • High-traffic landing page variations
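
To put the “1,000+ visitors per week” guideline above in perspective, here is a back-of-the-envelope duration estimate using a standard sample-size approximation (roughly 95% confidence and 80% power for a two-variant test). The baseline rate and lift in the example are hypothetical, and most teams will use a dedicated test-duration calculator rather than this sketch.

```typescript
// Approximate visitors needed per variant to detect a conversion lift, using the
// common n ≈ 15.7 × p(1 − p) / delta² rule of thumb (two-sided, ~95% confidence, ~80% power).
function visitorsPerVariant(baselineRate: number, relativeLift: number): number {
  const p = baselineRate;
  const delta = baselineRate * relativeLift; // absolute difference we want to detect
  return Math.ceil((15.7 * p * (1 - p)) / (delta * delta));
}

function weeksToRun(weeklyVisitors: number, baselineRate: number, relativeLift: number): number {
  const perVariant = visitorsPerVariant(baselineRate, relativeLift);
  return Math.ceil((perVariant * 2) / weeklyVisitors); // two variants share the traffic
}

// Hypothetical example: 3% baseline conversion, hoping to detect a 20% relative lift,
// with 1,500 visitors per week reaching the page.
console.log(visitorsPerVariant(0.03, 0.2)); // ≈ 12,700 visitors per variant
console.log(weeksToRun(1500, 0.03, 0.2));   // ≈ 17 weeks: a hint that rapid testing may fit better
```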

When to use rapid testing for design changes

When A/B testing isn’t right due to traffic constraints, technical limitations, or time pressures, rapid testing provides a faster path to validation.

Rapid testing methods work particularly well for SaaS design validation because they can:

  • Validate concepts before development: Test wireframes and mockups before building
  • Narrow down options: Compare multiple design variations quickly
  • Identify usability issues: Spot problems before they reach real users
  • Provide qualitative insights: Understand the “why” behind user preferences

Examples of SaaS use cases for rapid testing:

  • New feature naming and messaging
  • Dashboard navigation restructuring
  • Enterprise sales page designs (low traffic)
  • Value proposition clarity testing
  • Multi-option comparisons (6-8 variations)

The natural next question might be “which rapid testing method should I use?” Here is another decision tree framework to help answer that.

This framework is a guide to determining which rapid testing method is best suited for your website design changes.

Incorporate your experimentation strategy into your design process

With a decision-making strategy for how and what to test, you’ll need to incorporate the strategy into your design process. The most successful SaaS teams don’t treat validation as an afterthought. They build it into their process from the beginning:

  • During ideation: Use rapid testing to validate concepts and narrow options before detailed design work
  • During design: Test wireframes and mockups to identify issues before development
  • Before launch: Use A/B testing for high-stakes changes, rapid testing for others
  • After launch: Continue testing iterations based on user feedback and performance data

The compounding benefits of a sound experimentation strategy

The goal isn’t to replace A/B testing with rapid methods or vice versa. Both have their place in a mature experimentation strategy. The key is understanding when each approach provides the most value for your specific situation and constraints.

Teams that master this balanced approach to validation see remarkable improvement, including:

  • 50% better A/B test win rates (because rapid testing helps identify winning concepts)
  • Faster time-to-market for design improvements
  • More confident decision-making across the organization
  • Better team morale from seeing results from their work more quickly

Perhaps most importantly, they avoid the extremes of either testing nothing (high risk) or testing everything (slow progress).

For SaaS teams serious about optimization, the question isn’t whether to validate design changes; it’s whether you’re using the right validation method for each decision.

Start by auditing your current design change process. Are you testing changes that should be implemented immediately? Are you implementing changes that should be tested? By aligning your validation approach with the strategic importance and risk level of each change, you can move faster without sacrificing confidence in your decisions.

And if you aren’t sure how to get started, our team can help.

How Does Experimentation Support Product-Led Growth? https://thegood.com/insights/experimentation-product-led-growth/ Mon, 25 Aug 2025 19:00:23 +0000

The product-led growth (PLG) playbook is no longer a secret. Free trials, frictionless onboarding, viral mechanics. Many SaaS companies are following the same script. Yet despite implementing all the product-led growth best practices, most companies leveraging these strategies hit a growth plateau, watching competitors with seemingly similar products pull ahead.

Here’s what they’re missing: the most successful product-led companies don’t just follow the playbook. They rewrite it based on what their actual users reveal through experimentation.

While everyone else copies best practices, companies that layer experimentation into their PLG strategy are discovering the specific insights that accelerate their growth. In a world where everyone has access to the same tactics, the ability to learn about your own users (and do it faster) becomes a moat.

Companies like Booking.com, Netflix, and Amazon didn’t achieve their dominance by following conventional wisdom. They made experimentation central to their success, running thousands of experiments annually to optimize their user experience. And you don’t need their resources to adopt their approach.

What is product-led growth?

Product-led growth is a strategy that emphasizes the product itself as the primary driver of customer acquisition, conversion, and retention.

Traditionally, companies have relied on sales and marketing tactics to create leads and drive customer adoption. Ads and websites had to do most of the selling, and the onus was on the potential user to read ads, navigate websites, choose between feature matrices, and, at times, go through a complicated sales process (on or off-site).

In a product-led growth model, companies remove as many obstacles as possible to acquiring free registered users. This approach often involves offering a free or freemium version of the product, allowing users to experience its value before committing to a paid subscription.

An infographic comparing how product-led growth differs from traditional sales-led models.

If the experience is good enough to keep them using it, and the paid features are valuable enough, then the hope is that users will ultimately convert into paying customers. In this way, the product serves as the main vehicle for customer acquisition and expansion.

Just like test driving a car, they let you test drive their product and discover the value on your own, before making a purchase decision.

Companies that successfully implement a product-led growth strategy often benefit from increased customer loyalty, higher conversion rates, lower customer acquisition costs, and sustainable long-term growth.

The shift from “launch and learn” to “test and learn”

Plenty of companies, between product-market fit and scale, run their growth strategies on a “launch and learn” philosophy. They build features based on hunches, ship them to users, then analyze the results afterward. This approach can work, but when operating on a product-led growth model, product decisions carry outsized impact. The product experience influences pretty much every KPI from acquisition to retention.

When you launch first and learn later, you’re essentially gambling with your users’ experience. Every poorly conceived feature, every friction point, every missed opportunity represents lost revenue and potentially churned customers. More importantly, it represents wasted development resources that could have been deployed more strategically.

This is where experimentation comes in. Instead of “launch and learn,” companies can shift to “test and learn.” This means experimentation and analysis of results happen pre-launch, not after. Changes are validated with real users before full implementation, minimizing risk and maximizing ROI.

Experimentation before implementation gives you an understanding of real customer behavior and clearly indicates how you can repeat results by uncovering the why behind those behaviors.


How experimentation amplifies PLG success

Experimentation is only helpful to a product-led growth strategy when it is done right. So what are some of the ways to implement that will amplify PLG success?

1. Systematic optimization across the customer journey

The most effective approach to PLG experimentation uses frameworks like ROPES (Registration, Onboarding, Product, Evangelize, Save) to systematically optimize each stage of the customer experience. Rather than randomly testing features, successful companies identify specific levers within each stage and experiment systematically.

For example:

  • Registration phase: Testing form length, social proof elements, and value propositions
  • Onboarding phase: Experimenting with tutorial formats, progress indicators, and time-to-value optimization
  • Product phase: Testing feature discoverability, UI changes, and user flow improvements
  • Evangelize phase: Optimizing sharing mechanisms, referral programs, and viral loops
  • Save phase: Testing retention tactics, upgrade prompts, and churn prevention strategies

This systematic approach ensures that experimentation efforts are strategic rather than scattered, creating compounding improvements across the entire user journey.
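
One lightweight way to keep that systematic view is to tag every experiment with its ROPES stage. The TypeScript sketch below is purely illustrative; the hypotheses, metrics, and status values are hypothetical examples rather than a recommended backlog.

```typescript
type RopesStage = "Registration" | "Onboarding" | "Product" | "Evangelize" | "Save";

interface Experiment {
  stage: RopesStage;
  hypothesis: string;    // what you expect to change, and why
  primaryMetric: string; // the single metric the test will be judged on
  status: "backlog" | "running" | "analyzed";
}

// Hypothetical backlog entries; swap in your own hypotheses and metrics.
const backlog: Experiment[] = [
  { stage: "Registration", hypothesis: "A shorter sign-up form lifts completion", primaryMetric: "sign-up rate", status: "running" },
  { stage: "Onboarding", hypothesis: "A progress indicator shortens time-to-value", primaryMetric: "activation rate", status: "backlog" },
  { stage: "Save", hypothesis: "A pause-subscription option reduces voluntary churn", primaryMetric: "cancellation rate", status: "backlog" },
];

// Quick check that effort is spread across the journey rather than piled on one stage.
const experimentsByStage = backlog.reduce<Record<string, number>>((acc, e) => {
  acc[e.stage] = (acc[e.stage] ?? 0) + 1;
  return acc;
}, {});

console.log(experimentsByStage);
```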

2. Accelerated learning through parallel testing

Traditional A/B testing approaches test one hypothesis at a time, which can drastically slow your learning velocity. Advanced PLG companies run multiple experiments simultaneously across different parts of their product experience, dramatically increasing the rate at which they gather insights.

The key to successful parallel testing is ensuring experiments don’t interfere with each other. As Natalie Thomas, our Director of UX and Strategy, explains: “It’s important to look at behavior goals to assess why your metrics improved after a series of tests. So if you’re running too many similar tests at once, it will be difficult to pinpoint and assess exactly which test led to the positive result.”

Successful parallel testing requires:

  • Creating testing roadmaps that cover independent product areas
  • Building small, cross-functional teams assigned to each area
  • Establishing clear metrics and success criteria for each test
  • Implementing proper statistical controls to avoid interference

3. Rapid experimentation for faster innovation

Speed matters in PLG. Market opportunities disappear quickly, and user expectations evolve constantly.

So, one of the main objections to implementing an experimentation strategy is that testing cycles often take weeks or months to complete. But high-performing PLG companies have found ways to cut this time in half without losing statistical rigor. Key strategies include:

Supplementing A/B Tests with Rapid Testing: Not every hypothesis requires a full A/B test. Qualitative research, user interviews, and rapid prototyping can validate concepts quickly before investing in development.

Modular Testing Approaches: Instead of starting from scratch each time, successful teams create reusable components like design templates, testing frameworks, and analysis processes to reduce setup time.

AI-Powered Research: Using artificial intelligence as a research assistant to speed up data collection, user recruitment, and insight generation.

Prioritization Frameworks: Implementing systematic prioritization (like the ADVIS’R framework) to ensure high-impact experiments get fast-tracked through the process.

4. Data-driven feature development

Experimentation helps PLG companies avoid the biggest roadmap mistake: prioritizing low-impact features. Instead of building what seems logical, experimentation reveals what actually drives user behavior and business metrics.

This is particularly important as you scale beyond basic PLG practices. When you’re competing with other product-led companies, the quality of your feature decisions becomes a key differentiator. Companies that systematically test and validate features before full development consistently outperform those that rely on intuition.

The most successful approach combines quantitative testing with qualitative insights. This means not just measuring what users do, but understanding why they do it. This deeper understanding enables teams to build features that truly resonate with users rather than features that just check boxes.

5. Building an experimentation-first culture

An outcome of adding experimentation to a product-led growth strategy is that it will help build the practice into your company culture. To do that, you can follow a few key steps.

Start with infrastructure

Before you can effectively use experimentation to support PLG, you need the right infrastructure. This includes:

  • Testing platforms that can handle both simple A/B tests and complex multivariate experiments
  • Analytics systems that provide real-time insights into user behavior
  • Data pipelines that connect user actions to business outcomes
  • Collaboration tools that enable cross-functional teams to work together effectively

Establish clear processes

Successful experimentation requires discipline. Teams need clear processes for:

  • Hypothesis formation and validation
  • Test design and statistical planning
  • Resource allocation and project management
  • Results analysis and decision-making
  • Knowledge sharing and organizational learning

Foster cross-functional collaboration

The most impactful experiments often come from unexpected sources. Engineers closest to the code understand technical constraints and opportunities. Designers see user experience friction points. Customer success teams hear directly from users about pain points.

Creating space for these diverse perspectives to contribute to experimentation efforts often leads to breakthrough insights that no single team would discover independently.

The compound effect of systematic experimentation

What makes experimentation so powerful for PLG companies is its compound effect. Each successful experiment doesn’t just improve one metric. It teaches you something about your users that informs future experiments.

Over time, this creates an accelerating cycle of improvement. Companies that have been systematically experimenting for years possess a deep, nuanced understanding of their users that newcomers can’t easily replicate. This understanding becomes a sustainable competitive advantage.

Moreover, experimentation capabilities themselves improve with practice. Teams get faster at designing tests, more sophisticated in their analysis, and better at translating insights into action. The infrastructure and culture that support experimentation become organizational assets that compound over time.

Experimentation as your PLG multiplier

Product-led growth without experimentation is like driving with your eyes closed. You might reach your destination, but probably not efficiently, and certainly not safely. Experimentation transforms PLG from a collection of best practices into a systematic approach to user-centered product development.

The companies that win in today’s competitive SaaS landscape aren’t just those with the best products; they’re those that can consistently improve their products based on real user insights. They’ve made experimentation not just a tactic, but a core organizational capability.

Ready to transform your PLG strategy with systematic experimentation? The Good specializes in helping product-led companies build experimentation capabilities that drive sustainable growth.

Our Digital Experience Optimization Program™ combines strategic frameworks like ROPES with hands-on experimentation support to help you uncover the specific insights your business needs to scale. Let’s explore how experimentation can accelerate your growth →

5 SaaS Growth Strategies That Work (Based On Analysis Of 15 Top AI Tools) https://thegood.com/insights/saas-growth-strategies/ Wed, 13 Aug 2025 20:42:36 +0000

The AI boom isn’t just about better technology; it’s about smarter growth strategies. While everyone’s talking about features and capabilities, there is another, equally compelling story in how these tools convert free users into paying customers at unprecedented rates.

We dove deep into the user experiences of 15 top AI tools, documenting over 100 monetization touchpoints, upgrade pathways, and conversion tactics. What we found were five distinct patterns that drive revenue for these leaders.

These strategies aren’t just for AI. They’re blueprints that any SaaS tool can adapt to accelerate its own growth. Here’s what we learned.

The data behind the patterns

Our analysis covered tools spanning text generation (ChatGPT, Claude), search (Perplexity), design (Ideogram, Leonardo.AI), video creation (Runway), and productivity (Grammarly, QuillBot). Each tool was examined across four critical areas:

  • Monetization elements: Upgrade CTAs, limit notifications, premium feature gates, and more
  • Monetization pathways: The specific user journeys from free to paid
  • Pricing and payment screens: Where users actually convert when they decide to upgrade
  • Missed opportunities: Places where tools could be driving more conversions

What emerged were five clear patterns that high-converting tools use consistently.

Pattern #1: The progressive squeeze

The strategy: Start with subtle hints, then gradually increase conversion prompts as users become more invested.

Who’s doing it: Claude, ChatGPT, and Perplexity have mastered this approach.

How it works: These tools begin with gentle upgrade suggestions embedded in the interface. A small CTA in the sidebar, a mention of plan limits in account settings. As users engage more, the messaging becomes increasingly direct.

Claude exemplifies this perfectly. New users see a subtle “Free plan” indicator and a small upgrade CTA. After several conversations, users get friendly notifications about approaching limits. Only when limits are actually hit does Claude present the strong upgrade push with clear urgency messaging.

A screenshot from Claude as an example of effective SaaS growth strategies.

ChatGPT follows a similar pattern but with more touchpoints. Multiple upgrade opportunities appear once logged in, but the real conversion push happens when users try to upload files or access advanced features.

A screenshot from ChatGPT as an example of effective SaaS growth strategies.

Why it converts: Users invest time and mental energy before hitting any hard walls. By the time they reach limits, they’re already committed to the tool and see clear value in upgrading rather than switching to alternatives.

The missed opportunity: Many tools go straight to hard limits without the progressive buildup, losing users who might have converted with a gentler approach.
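
To make the mechanics of a progressive squeeze concrete, here is a hedged TypeScript sketch of how prompt escalation could be encoded. The thresholds and tiers are invented for illustration; they are not how Claude, ChatGPT, or Perplexity actually implement their messaging.

```typescript
type PromptTier = "none" | "subtle" | "reminder" | "hard-limit";

interface UsageState {
  actionsUsed: number; // e.g. messages or generations used this period
  actionLimit: number; // the free-tier cap for the period
}

// Escalate messaging as the user approaches the cap. Thresholds are illustrative only.
function upgradePromptTier({ actionsUsed, actionLimit }: UsageState): PromptTier {
  const ratio = actionsUsed / actionLimit;
  if (ratio >= 1) return "hard-limit"; // blocking prompt with a clear upgrade path
  if (ratio >= 0.8) return "reminder"; // "you're approaching your limit" notification
  if (ratio >= 0.3) return "subtle";   // small sidebar CTA, no interruption
  return "none";                       // let new users experience value first
}

console.log(upgradePromptTier({ actionsUsed: 2, actionLimit: 20 }));  // "none"
console.log(upgradePromptTier({ actionsUsed: 17, actionLimit: 20 })); // "reminder"
console.log(upgradePromptTier({ actionsUsed: 20, actionLimit: 20 })); // "hard-limit"
```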


Pattern #2: The feature tease

The strategy: Show users exactly what they’re missing by displaying premium features prominently, then gating access.

Who’s doing it: Ideogram, Grammarly, and Leonardo.AI excel at this approach.

How it works: These tools don’t hide their premium features. Instead, they showcase them prominently with visual cues like lock icons, blurred previews, or “Pro” badges. Users can see the feature, understand its value, and often interact with locked elements that trigger upgrade modals.

Ideogram shows locked features upfront on the dashboard, displays private galleries as gated sections, and lets users click through to see upgrade benefits. When users generate images, editing options appear with clear visual indicators of which features require upgrading.

A screenshot from Ideogram as an example of effective SaaS growth strategies.

Grammarly shows blurred premium suggestions alongside free ones, lets users see statistics with tone analysis grayed out, and provides partial feature previews that create curiosity about the full experience.

A screenshot from Grammarly as an example of effective SaaS growth strategies.

Why it converts: Curiosity combined with FOMO creates powerful motivation. When users can see exactly what they’re missing and how it would solve their problems, the upgrade decision becomes much easier.

Implementation tip: The key is showing enough value to create desire while maintaining a clear visual hierarchy between free and premium features.

Pattern #3: The moment of need

The strategy: Present upgrade options precisely when users are most invested and would benefit most from premium features.

Who’s doing it: Runway, QuillBot, and Character.AI time their conversion prompts perfectly.

How it works: Instead of generic upgrade CTAs, these tools interrupt workflows at strategic moments when users are actively trying to accomplish something and would most benefit from premium features.

Runway waits until users want to export in 4K resolution or remove watermarks, both of which are moments when they’re already committed to using the generated content.

A screenshot from Runway as an example of effective SaaS growth strategies.

QuillBot triggers upgrade prompts when users hit word limits mid-task, not during idle browsing.

A screenshot from QuillBot showing an example of effective SaaS growth strategies.

Why it converts: Perfect timing equals the highest conversion rates. When users are already invested in a task and premium features would immediately solve their problem, the upgrade becomes a logical next step rather than an interruption.

The psychology: This taps into the completion bias. Once users start a task, they’re motivated to finish it, making them more likely to pay to remove obstacles.

Pattern #4: The transparent countdown

The strategy: Create urgency and build trust by clearly showing usage limits, remaining credits, and reset timers.

Who’s doing it: Perplexity, Grammarly, and Copy.AI have perfected transparent limit communication.

How it works: Instead of surprising users with sudden limits, these tools constantly communicate remaining usage through progress bars, countdown timers, and clear messaging about when limits reset.

Perplexity shows “2 queries remaining today” with each search, giving users clear visibility into their usage without anxiety.

A screenshot from Perplexity as an example of effective SaaS growth strategies.

Grammarly displays credit counts and refill timers for AI features, so users can plan their usage accordingly.

A screenshot from Grammarly as an example of effective SaaS growth strategies.

Copy.AI uses a prominent word count progress bar that updates in real-time, showing exactly how much of their monthly limit has been used.

A screenshot from Copy.AI as an example of effective SaaS growth strategies.

Why it converts: Transparency builds trust while creating healthy urgency. Users appreciate knowing where they stand and can make informed decisions about when to upgrade rather than feeling tricked by hidden limits.

The trust factor: When users trust that limits are fair and clearly communicated, they’re more likely to see upgrading as a reasonable business transaction rather than being forced into paying.
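
As a simple illustration of transparent limit communication, the TypeScript sketch below turns raw usage data into the kind of status line these tools display. The copy, credit terminology, and reset logic are assumptions for the example, not any vendor's actual implementation.

```typescript
interface Quota {
  used: number;
  limit: number;
  resetsAt: Date; // when the free allowance refreshes
}

// Produce a plain-language status line instead of surprising users at the wall.
function quotaMessage({ used, limit, resetsAt }: Quota, now: Date = new Date()): string {
  const remaining = Math.max(limit - used, 0);
  const hoursToReset = Math.max(Math.ceil((resetsAt.getTime() - now.getTime()) / 3_600_000), 0);
  if (remaining === 0) {
    return `You've used all ${limit} free credits. They refresh in ${hoursToReset}h, or upgrade for unlimited use.`;
  }
  return `${remaining} of ${limit} free credits remaining. Resets in ${hoursToReset}h.`;
}

// Hypothetical example: 2 of 5 daily credits left, resetting in about six hours.
console.log(
  quotaMessage({ used: 3, limit: 5, resetsAt: new Date(Date.now() + 6 * 3_600_000) })
); // -> "2 of 5 free credits remaining. Resets in 6h."
```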

Pattern #5: The omnipresent nudge

The strategy: Place multiple upgrade touchpoints throughout the interface without being intrusive.

Who’s doing it: ChatGPT, QuillBot, and Ideogram have mastered multi-touchpoint conversion.

How it works: These tools strategically place upgrade opportunities at different points in the user journey, including header CTAs, sidebar reminders, settings page options, and feature-specific prompts. The key is making each touchpoint feel contextual rather than repetitive.

ChatGPT places upgrade CTAs in the dropdown menu, file upload tooltips, model selection interfaces, and account settings. Each serves a different user intent and provides value beyond just asking for payment.

A screenshot from ChatGPT as an example of effective SaaS growth strategies.

QuillBot integrates upgrade opportunities into the workflow, for example, in premium mode selectors, feature benefit explanations, and contextual prompts that feel helpful rather than pushy.

QuillBot upgrade integrations are a good example of effective SaaS growth strategies.

Why it converts: Repetition without annoyance increases recall and provides multiple chances to convert users at different readiness levels. Some users need to see upgrade options multiple times before they’re ready to act.

The balance: The key is ensuring each touchpoint provides value or information, rather than simply asking for money repeatedly.

The standout performers

While all 15 tools showed growth-focused design, three stood out for their sophisticated monetization strategies:

Claude excels at the Progressive Squeeze, building user investment before presenting upgrade opportunities. Their limit messaging feels helpful rather than restrictive, and the upgrade pathway is seamless.

Ideogram masters the Feature Tease, showcasing premium capabilities so effectively that users understand the upgrade value before reaching any limits. Their visual hierarchy makes premium features aspirational rather than frustrating.

Perplexity nails the Transparent Countdown, creating urgency without anxiety through clear limit communication and value-focused messaging.

Common missed opportunities

Our analysis revealed several patterns where even successful tools leave money on the table:

  • Timing failures: Many tools show upgrade prompts during onboarding when users haven’t yet experienced value, rather than waiting for engagement.
  • Value communication gaps: Some tools gate features without clearly explaining the benefits, leading to confusion rather than desire.
  • Conversion pathway friction: Several tools send users to generic pricing pages rather than contextual upgrade flows that maintain momentum.
  • Limit surprises: Tools that suddenly cut off functionality without warning create frustration rather than conversion motivation.

Applying these patterns to your SaaS growth strategies

These AI growth strategies aren’t limited to AI tools. The underlying principles work for any SaaS looking to improve free-to-paid conversion:

Start with your user journey mapping

Identify key moments where users experience value and where they encounter limitations. These are your conversion opportunity points.

Audit your current upgrade messaging

Are you using the Progressive Squeeze, or do you jump straight to hard limits? Are you showing users what they’re missing with Feature Teasing?

Review your limit communication

Do users understand their usage limits and when they reset? A Transparent Countdown reduces churn and builds trust.

Optimize your touchpoint strategy

Map where upgrade CTAs appear in your interface and ensure each serves a specific user need rather than just asking for payment.

Test your conversion timing

Are you presenting upgrade options when users are most invested (Moment of Need) or just when it’s convenient for your UI?

What does this mean for your growth strategy?

AI tools are teaching us that successful monetization isn’t always about restricting features; it can be about showcasing value, building trust, and timing conversion opportunities perfectly. The tools growing fastest aren’t necessarily those with the best AI models, but those with the smartest user experience design.

These patterns work because they align business needs with user psychology. Instead of seeing limits as barriers, users experience them as natural progression points toward greater value.

The AI boom provides a unique laboratory for studying growth tactics at scale. These tools process millions of users and can iterate rapidly, revealing what actually drives conversions versus what we think should work.

As AI capabilities become more commoditized, user experience (including monetization design) becomes the key differentiator. The tools implementing these patterns now are building sustainable competitive advantages that will persist even as the underlying technology evolves.

Taking action on these insights

The most successful SaaS companies will adapt these AI growth strategies to their own products before their competitors catch on. Start by analyzing your current monetization approach against these five patterns:

  1. Map your user journey to identify Progressive Squeeze opportunities
  2. Audit your feature visibility to implement Feature Teasing where appropriate
  3. Review your limit communication to adopt Transparent Countdown principles
  4. Time your conversion prompts to leverage the Moment of Need psychology
  5. Optimize your touchpoint strategy using Omnipresent Nudge best practices

The data from these 15 AI tools provides a roadmap, but implementation requires careful testing and optimization for your specific user base and value proposition.

Ready to apply these AI growth strategies to accelerate your SaaS growth? The Good specializes in analyzing user experiences and implementing conversion optimization strategies that turn insights into revenue. Our team has helped dozens of SaaS companies optimize their monetization flows using data-driven approaches just like this analysis.

Get your personalized monetization strategy audit. We’ll analyze your current user experience against these proven patterns and create a prioritized optimization roadmap tailored to your product and audience. Schedule a consultation with our team to discover how these AI growth strategies can accelerate your revenue growth.
