Visitor segmentation for better CRO

Matt Scaysbrook
Director of Optimisation

Part one of our Lessons in CRO Mini-Series (1 of 10)

Whether an experiment wins or loses, there's always something to learn. Implementing the changes from a win can make your business a lot of money, and a losing test can save you the money you would have lost if you hadn't tested first.

We ran 422 CRO experiments in 2020. That’s a lot of lessons learned! We’ve picked out some of our favourites and created a 10-part mini-series to share them with you. We hope they inspire your next experiments.

Here's lesson one:

Lesson #1: There is no right experience without understanding the right visitor for that experience

Results

  • 49-133% increase in revenue per visitor across segments
  • 2-15% increase in sales conversion across segments
  • £4.08m in annualised additional revenue

The problem

This client had a very large and very diverse visitor base, which meant their visitors' needs varied substantially. Meeting those needs was central to their commercial success, and failing to do so was leaving a lot of money on the table.

What we did

We planned a series of sequential tests to identify and refine their segmentation, with the results of each test informing the next. Every test was split by device, with increasingly specific segmentation layered on top as we progressed.

Phase I

We began by removing order-process upsells for Paid Search visitors and those not currently logged in. The results were mixed: sales conversion improved, but some segments saw a reduction in revenue per visitor (RPV), indicating that we had negatively affected average order value (AOV). That told us there was hidden value in deeper segmentation.

Phase II

Phase II tested the same upsell removal, this time split by existing and new customers. Again, the results were mixed: sales conversion was up in every segment, but AOV fell in some. This suggested there was still greater value to be found in more specific experiences.

Phase III

The third and final phase introduced basket-size segmentation based on AOV. Each segment was defined by three factors: device type, customer type and basket size. The basket-size threshold was set by the AOV of that device and customer-type segment, not by the overall site average. The removal of upsells was then tailored to whether the visitor's basket was above or below their segment's AOV, enabling us to create the most commercially beneficial experience for each segment.
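As a rough illustration, here's a minimal TypeScript sketch of that kind of targeting logic. The segment keys, AOV figures and function names are our own assumptions for the example, not the client's actual implementation or values.

```typescript
// Hypothetical sketch of Phase III-style targeting.
// Segment keys, threshold values and function names are illustrative only.

type DeviceType = "desktop" | "mobile";
type CustomerType = "new" | "existing";

interface Visitor {
  device: DeviceType;
  customerType: CustomerType;
  basketValue: number; // current basket value in GBP
}

// AOV per device + customer-type segment (made-up figures),
// rather than a single site-wide average.
const SEGMENT_AOV: Record<string, number> = {
  "desktop:new": 85,
  "desktop:existing": 120,
  "mobile:new": 60,
  "mobile:existing": 95,
};

// Show upsells only when the basket is below the segment's AOV;
// visitors already above it sail through the order process untouched.
function shouldShowUpsells(visitor: Visitor): boolean {
  const key = `${visitor.device}:${visitor.customerType}`;
  const segmentAov = SEGMENT_AOV[key];
  return visitor.basketValue < segmentAov;
}

// Example: a new mobile visitor with a £45 basket sees upsells (45 < 60),
// while an existing desktop visitor at £140 does not (140 >= 120).
console.log(shouldShowUpsells({ device: "mobile", customerType: "new", basketValue: 45 }));
console.log(shouldShowUpsells({ device: "desktop", customerType: "existing", basketValue: 140 }));
```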

The reasons it worked

Revenue is a bell-curve

Perfect conversion means giving things away for free. Perfect AOV is infinite and therefore impossible. The best revenue result comes from balancing these two opposing measures.
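To see why, note that revenue per visitor is the product of conversion rate and average order value, so a gain in one can be wiped out by a loss in the other. A quick illustration with made-up numbers:

```typescript
// Illustrative arithmetic only: RPV = conversion rate x AOV,
// so improving one at the expense of the other can still lower revenue.
const rpv = (conversionRate: number, aov: number) => conversionRate * aov;

console.log(rpv(0.03, 100)); // baseline: 3% conversion x £100 AOV ≈ £3.00 RPV
console.log(rpv(0.035, 80)); // conversion up, AOV down: 3.5% x £80 ≈ £2.80 RPV (worse)
console.log(rpv(0.033, 98)); // a balanced gain: 3.3% x £98 ≈ £3.23 RPV (better)
```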

Hidden negatives & hidden positives = hidden value

Within every winning test, there are segments of visitors who react negatively. And within every losing test, there are segments of visitors who react positively. Identifying those segments can be hugely valuable.

Be specific

Showing upsells or not based on site AOV would likely be effective. Showing upsells or not based on segment-specific AOV will likely be more effective. Be specific about who your test is targeting for the strongest results.

Your next experiment?

Make your order process dynamic to a visitor’s situation.

Build tests that seek to identify high & low-performing segments.

Maximising revenue is always a balance. Let above-AOV visitors sail through the process, and focus your upsell efforts on those with below-AOV baskets.
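If you want to try this yourself, a simple starting point is to derive an AOV threshold per segment from historical orders and feed it into the kind of upsell decision sketched earlier. The order shape and segment keys below are assumptions for illustration:

```typescript
// A minimal sketch of deriving per-segment AOV thresholds from historical
// orders, so upsell decisions can use segment-specific averages rather
// than the site-wide figure. Field names and keys are illustrative.

interface Order {
  device: "desktop" | "mobile";
  customerType: "new" | "existing";
  value: number; // order value in GBP
}

function segmentAovs(orders: Order[]): Record<string, number> {
  const totals: Record<string, { sum: number; count: number }> = {};
  for (const order of orders) {
    const key = `${order.device}:${order.customerType}`;
    const entry = (totals[key] ??= { sum: 0, count: 0 });
    entry.sum += order.value;
    entry.count += 1;
  }
  // Average order value per segment
  return Object.fromEntries(
    Object.entries(totals).map(([key, { sum, count }]) => [key, sum / count])
  );
}

// Example with a handful of illustrative orders
const thresholds = segmentAovs([
  { device: "mobile", customerType: "new", value: 55 },
  { device: "mobile", customerType: "new", value: 65 },
  { device: "desktop", customerType: "existing", value: 110 },
  { device: "desktop", customerType: "existing", value: 130 },
]);
console.log(thresholds); // { "mobile:new": 60, "desktop:existing": 120 }
```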

Thanks for reading, and don't forget to follow us for the rest of the blog mini-series (9 parts to go!).

If you have any questions about experimentation, Matt, our Director of Optimisation, offers a free, no-pressure 30-minute CRO consultation. He's always happy to answer any weird and wonderful CRO questions you might have!
