
When To Iterate On A Website Experiment, And When To Move On

Posted by Stewart Hillhouse | Published on July 6, 2023

Conversion lift comes from experimenting to find what works

Personalization can lead to huge lifts in conversion rate, but sometimes you may need an iteration or two before you get that big win. That's why following a regular cadence of launching new experiments is critical in the conversion rate optimization process.

Every test, even one that results in negative lift, will move you directionally toward compounding value and a better experience for your customers.

But how do you know when an experiment should be iterated on, or when it should be abandoned completely? In this post we'll share key learnings that have come from creating over 500M personalized experiences for inbound website visitors.

Here's a handy dandy diagram to get us started. As we go through the different cases, I'll highlight the part of the diagram worth paying attention to in purple.

Comparison guide: when to iterate vs. when to move on

Below are the 3 primary signs to look out for when deciding whether you should iterate on your experience.

Case 1: Negative lift + medium or high traffic volume

TLDR: Iterate on your experience if your conversion rate lift is consistently negative, and your experience has at least 100 visitors in each variation.

Experimentation matrix: case 1

When to iterate

You do not need to wait for your experience to reach a statistically significant result on a negative lift before deciding to iterate. If you see a consistent negative trend with enough visitor traffic, you should consider iterating on your experience.

Make sure you have enough volume in the experience before deciding to iterate. When visitor traffic is low, conversion rates are unstable and you can easily see negative trends reverse as visitor traffic increases.

For example, if you have 2 conversions from 10 visitors in the control and 1 conversion from 10 visitors in your personalization, you will see a -50% lift. As a benchmark, try to wait for at least 100 visitors in each experience before drawing conclusions.
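To make that math concrete, here's a minimal sketch of the lift calculation in Python (the function name is ours, purely for illustration):

    def conversion_lift(control_conversions, control_visitors,
                        personalized_conversions, personalized_visitors):
        """Relative lift of the personalized experience over the control."""
        control_rate = control_conversions / control_visitors
        personalized_rate = personalized_conversions / personalized_visitors
        return (personalized_rate - control_rate) / control_rate

    # The example above: 2/10 conversions in the control vs. 1/10 in the personalization.
    print(f"{conversion_lift(2, 10, 1, 10):.0%}")  # prints -50%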

Example: negative lift, time to iterate (consider this an underperforming experience)

Consider the difference in total conversions when deciding if your experience is really underperforming. Only take this approach if you have a similar number of visitors in each variation. Look for a difference of at least 5 conversions between the personalized experience and the control before drawing any conclusions.

Another gut check on a negative lift is consistency of performance. If you saw a positive lift yesterday and a negative lift today, there's no need to iterate just yet. Give your experience some time to normalize before drawing conclusions.
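Putting those gut checks together, here's a rough sketch of the case 1 rule of thumb. The thresholds are the benchmarks from this post; the function and parameter names are just illustrative:

    MIN_VISITORS = 100       # benchmark: at least 100 visitors per variation
    MIN_CONVERSION_GAP = 5   # benchmark: at least 5 fewer conversions than the control

    def should_iterate_on_negative_lift(control_conversions, control_visitors,
                                        personalized_conversions, personalized_visitors,
                                        recent_lifts):
        """Iterate when the lift has been consistently negative, both variations
        have enough traffic, and the conversion gap is meaningful."""
        enough_traffic = (control_visitors >= MIN_VISITORS
                          and personalized_visitors >= MIN_VISITORS)
        meaningful_gap = (control_conversions - personalized_conversions) >= MIN_CONVERSION_GAP
        # "Consistent" here means every recent check (e.g. daily) showed a negative lift.
        consistently_negative = len(recent_lifts) > 0 and all(lift < 0 for lift in recent_lifts)
        return enough_traffic and meaningful_gap and consistently_negative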

How to iterate

Hypothesize why the experience underperformed.

  • What did you change that is performing worse?

  • Did your message lose context?

  • Is your content clear and concise?

  • Are you using less effective company logos or social proof?

  • Did you remove a CTA or change the text?

Revert the hypothesized issue or update to a more strategic version.

Relaunch the experience and reset results. When you make an update to a live experience in Mutiny and hit “Launch”, Mutiny will ask if you want to keep results or reset results. Click ‘reset results’ to create a new revision and track your changes separately. Mutiny will store previous revisions for review and historical context.

Example: track revisions from each iteration

Optional but recommended: write down insights by segment. Keep track of what works and what doesn’t to grow your program even more.

Case 2: Flat result + high traffic volume

TLDR: Iterate on your experience if your conversion rate lift is flat and you have at least 300 visitors in each variation.

Experimentation matrix: case 2

When to iterate

Unlike A/B testing, personalization should generate a large lift (normally at least 20%). If your lift is smaller (within -20% to 20%) and the experience has not reached statistical significance, you should iterate on the experience to create a larger impact.

Make sure your experience has enough visitors to determine that the result is flat. Look for at least 300 visitors in each experience before drawing this conclusion.
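In the same spirit, here's a hedged sketch of the case 2 check (again, the thresholds are the benchmarks quoted above and the names are illustrative):

    FLAT_BAND = 0.20          # lift between -20% and +20% counts as flat
    MIN_VISITORS_FLAT = 300   # benchmark: at least 300 visitors per variation

    def is_flat_result(lift, control_visitors, personalized_visitors, reached_significance):
        """A small lift with plenty of traffic and no statistically significant
        winner is a flat result worth iterating on with a bolder change."""
        enough_traffic = (control_visitors >= MIN_VISITORS_FLAT
                          and personalized_visitors >= MIN_VISITORS_FLAT)
        return abs(lift) < FLAT_BAND and enough_traffic and not reached_significance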

Example: flat result

How to iterate

Hypothesize why the experience is flat and make a few changes. If you fall into the flat performance category, most likely your personalizations were too subtle.

  • How can you go bolder?

  • Is your personalized content below the fold?

  • Is it only slightly different than your control?

  • If your personalizations are not subtle, are you perhaps counterbalancing a really positive change with a really negative change?

Revert the hypothesized issue or update to a more strategic version. Relaunch the experience and reset results. As we mentioned above, be sure to click 'reset results' so that you can track changes separately.

Optional but recommended: write down insights by segment. Keep track of strategies that didn’t drive a big impact for the segment so you can continue to grow your program.

Case 3: Low visitor traffic

TLDR: Expand your segment size or promote the experience if you get fewer than 100 visitors per variation per month.

Experimentation matrix: case 3

When to iterate

If your experience has been running for more than 1 month and has fewer than 100 visitors in each experience, it will be difficult for you to measure performance differences. In these cases you should try to expand your segment size — we recommend doing this whenever possible anyway.

If your segment size is still small, you should promote the experience and watch the conversion rate. In this case, it’s better to just show the personalized experience to all traffic to avoid drawing false conclusions.
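As a rough sketch of that traffic threshold (the one-month window is the benchmark from above, and the names are illustrative):

    MIN_MONTHLY_VISITORS = 100   # per variation, per month

    def traffic_too_low(visitors_per_variation, months_running):
        """After at least a month, fewer than ~100 visitors per variation per month
        means you should expand the segment or promote the experience instead."""
        if months_running < 1:
            return False  # too early to call
        return (visitors_per_variation / months_running) < MIN_MONTHLY_VISITORS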

Example: low traffic

How to iterate

Try to expand segment size.

Use “or” conditions in the segment creator to build a larger version of your segment. For instance, if your segment targets a certain industry, add various definitions from IP data (like “Industry”, “Industry Group”, “Sub-industry” and “Company Tags”) + UTM attributes + Behavioral Audiences (“Vertical”) + any relevant first party attributes.

As an example, if your experience targets Financials, your expanded segment definition might look like this:

Example: expanded segment definition for Financials
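If the screenshot isn't handy, here's a rough sketch of what an OR-based definition like that could look like, written out as plain data. The sources, attribute names, and values below are assumptions for illustration, not Mutiny's exact segment-builder fields:

    # Illustrative only: sources, attributes, and values are assumptions,
    # not Mutiny's exact segment-builder fields.
    expanded_financials_segment = {
        "match": "any",  # "or" conditions: a visitor qualifies if any rule matches
        "rules": [
            {"source": "IP data",             "attribute": "Industry",       "equals": "Financials"},
            {"source": "IP data",             "attribute": "Industry Group", "equals": "Banks"},
            {"source": "IP data",             "attribute": "Sub-industry",   "equals": "Consumer Finance"},
            {"source": "IP data",             "attribute": "Company Tags",   "contains": "fintech"},
            {"source": "UTM",                 "attribute": "utm_campaign",   "contains": "finance"},
            {"source": "Behavioral Audience", "attribute": "Vertical",       "equals": "Financial Services"},
            {"source": "First party",         "attribute": "industry",       "equals": "financial services"},
        ],
    }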

If the estimated segment size still looks small (like in the example above), you should opt to promote your experience rather than testing it in experiment mode.

If you are promoting your experience instead of experimenting, keep an eye on your conversion rate to get a sense of the impact. It’s a little more difficult to measure, but you should know what a “good” conversion rate looks like for your website.

If you think you can beat it, try something else! Compare before / after with Mutiny’s revision tracking.

Summary

Here’s a handy comparison guide you can use when determining when to iterate.

Comparison guide: when to iterate vs. when to move on
  1. Negative lift + medium or high traffic volume: Iterate by changing your messaging.

  2. Flat result + high traffic volume: Iterate by being bolder with your experiment.

  3. Low visitor traffic: Expand your segmentation to reach a larger audience.
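If it helps to keep these rules of thumb in one place, here's a hedged sketch that strings the three cases together. The thresholds are the benchmarks from this post, not hard statistical rules, and the function is illustrative rather than a Mutiny feature:

    def next_step(lift, visitors_per_variation, monthly_visitors_per_variation,
                  reached_significance):
        """Map an experiment's current state to one of the three cases above."""
        if monthly_visitors_per_variation < 100:
            return "Case 3: expand the segment or promote the experience"
        if lift <= -0.20 and visitors_per_variation >= 100:
            return "Case 1: iterate on your messaging"
        if abs(lift) < 0.20 and visitors_per_variation >= 300 and not reached_significance:
            return "Case 2: iterate with a bolder change"
        return "Keep the experiment running and check back later"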
