
TL;DR

- Test big. But expect small successes.
- Verify all gut instincts with data.
- If you are asked to do a “dumb” test, do it. You might be wrong (or right). Either way, the company wins.

Meet Guido

- Guido is a cognitive psychologist (read the wiki article; you might want to become one yourself, it’s pretty neat!).
- He has spent years learning how people work with environments (ecommerce is an environment) and how environments affect people. In ecommerce, we need to ensure that our buyers know what they are getting and don’t get lost in the purchase process, so this is a valuable set of skills.
- He has spent years applying this research to many merchant experiences.
- And he comes here today to share a few of his findings with you.

“My gut says that we should…”

I’m sure we’ve all worked with “that CEO” who runs his business according to his gut. Gut instinct isn’t all bad. But it’s demoralizing when this is the only means of decision making.

(Dilbert comic strip, March 30, 2014)

We all know that gut instinct is just that: it’s rooted somewhere deep in our being but often has little basis in reality (sometimes it does, though!). When making a big decision, like a job change, it might be our only source of guidance. But, in online selling, we regularly have more sources of data.

Gut instinct can provide the hypothesis for A/B tests. But when we make decisions on this “source of truth from who-knows-where” alone, we can fall badly short. Then, when the data proves our instinct wrong, we are tempted to massage it until it supports our beliefs.

Guido makes the case that the idea you have should be the foundation for better understanding your customers’ behavior. As you accommodate that behavior, the goal is that you become more profitable.

He also suggests that A/B testing should not be limited to simple color or text changes. Why? Nothing is learned or gained that isn’t already known. A red “add to cart” button draws more attention, but a customer who is already looking to purchase will find the button anyway. It goes without saying that the add-to-cart button must be readily visible (if it’s hidden or hard to find, then please fix that first).

Side note:

The smallest, hardest-to-find checkout button.

A cart button that is smaller than the Customer Account login (in the upper right corner) should be fixed immediately. No A/B test is needed here. I am working to preserve the privacy of this company; otherwise, I’d show the entire header so you could see just how small this is.

A/B test research

Whether or not you have a gut-instinct directive, your next step is research (Guido mentions that this is quite boring).

- Google Analytics: hopefully you have enhanced ecommerce enabled. Are there products that have a higher bounce rate? What is the drop-off in your cart? In the checkout? What are people searching for? Where is the audience dropping off? (A quick sketch of this drop-off math follows below.)
- HotJar/FullStory: what do people click on? How do they browse the website? Are they getting stuck in a particular part of the website? Does it seem like they can’t find a product?
- User interviews: what was your moment of inspiration? What are your pain points on the website?

Better yet, ask to talk with people in person, and watch them use your website. You’ll quickly find where they get stuck.

There are also agencies that will perform user studies.
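To make the Google Analytics questions above concrete, here is a minimal sketch of the funnel drop-off math; the step names and visitor counts are made-up assumptions standing in for your real Enhanced Ecommerce numbers.

```python
# A minimal sketch of funnel drop-off math. Step names and visitor
# counts are illustrative assumptions; pull real numbers from your
# Enhanced Ecommerce reports.
funnel = [
    ("product page", 50_000),
    ("add to cart", 6_000),
    ("checkout", 2_400),
    ("purchase", 1_500),
]

for (step, visitors), (next_step, next_visitors) in zip(funnel, funnel[1:]):
    drop_off = 1 - next_visitors / visitors
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")
```

The step with the steepest, most surprising drop-off is usually where your research effort should start.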

With this research, you should now have a list of improvements. Prioritize them. Go for the biggest fish first. But, don’t just run to an A/B test!

“Try to figure out several different ways to improve it.”

Guido Jansen

Don’t just take your first idea and run with it. Push yourself to come up with multiple solutions.

Expand beyond your customers.

Be creative in locating those who have never purchased from you. For example, you could compare your email list against your list of purchasers and find the addresses that are missing. Your customers figured out how to make a purchase. The others didn’t, and they can represent a massive source of revenue if their hangup was a problem on your website.
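If you keep plain exports of both lists, the comparison itself is a simple set difference. A minimal sketch, with placeholder file names (substitute exports from your email platform and order system):

```python
# A trivial sketch of the list comparison described above: subscribers
# who have never purchased. File names are placeholders.
with open("email_list.txt") as f:
    subscribers = {line.strip().lower() for line in f if line.strip()}
with open("purchasers.txt") as f:
    purchasers = {line.strip().lower() for line in f if line.strip()}

never_purchased = subscribers - purchasers
print(f"{len(never_purchased)} subscribers have never made a purchase")
```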

A note on user interviews.

When doing in-person (or video) interviews, consider taking the anonymous route: ask participants to start with a Google search and direct them to purchase from your competitors first. Partway through, direct them to your website. To make this method most effective, you will want to keep them from knowing that you work for XYZ brand; otherwise, they will become biased.

Case study

Guido did this recently. He was working with an online gifting store, where customers send products to others as gifts.

In the checkout, there was a required field: the phone number of the person to whom you are sending the gift. Isn’t that normal to require? After all, UPS wants a phone number!

Oftentimes, it is the perceived requirements that pose the biggest risk of losing customers.

Guido continues that in the first couple of user interviews, they found this to be a problem:

- We obviously know the address of the person we are sending the gift to. But their phone number is either unknown or stored somewhere else entirely (our own phone).
- This gift is a surprise. Are you going to call the recipient to tell them that their chocolates have been delivered? Worse yet, text them?
- I don’t want to sign the recipient up for a “gift that never stops” in the form of spam text messages for years to come.

Guido shares this as a problem that is not found in Google Analytics. Yet other problems might only be discovered in Google Analytics. That is why a comprehensive research phase is important.

If you think you have a good idea who your customer is, the reality might be quite different.

Guido Jansen

Traps to avoid in A/B testing.

Don’t end a test too early.

You need a lot of people in your A/B tests to determine which variant is the winner. If you stop early and choose a variant that does not show a substantial improvement, you could be picking the worse alternative, and your wonderful testing mechanism has just failed you.

Use an online calculator (Optimizely has one) to determine your sample size. Instead of letting your test run for a set amount of time, focus on the number of samples collected.
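If you would rather see the math than trust a black box, here is a minimal sketch of the standard two-proportion sample-size formula such calculators implement; the baseline rate, hoped-for lift, and error levels below are illustrative assumptions:

```python
# A minimal sketch of the sample-size math behind A/B-test calculators.
# Baseline rate, expected lift, and error levels are illustrative.
import math
from statistics import NormalDist

def samples_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)            # rate the variant must hit
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: 2% baseline conversion, hoping to detect a 10% relative lift.
print(samples_per_variant(0.02, 0.10))  # roughly 80,000 visitors per variant
```

Note how quickly the required sample grows as the baseline rate or the detectable lift shrinks; this is exactly why low-traffic shops struggle to run meaningful A/B tests (more on that below).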

Run for a maximum of (roughly) four weeks. After four weeks, cookies begin to be reset and returning users will start being counted as new ones, polluting your results.

If you have enough traffic, you could consider splitting that traffic into segments and running each segment for 4 weeks.

Test things that matter.

While color tweaks, or whether or not to round a button’s corners, may be interesting, will they actually make a difference in your conversion rate?

If your call to action button’s contrast is poor against the rest of the website, then fixing it will help. If the rest of the website has styling that is smooth and rounded, then making your button’s corners rounded might help.

But, changing things just to change them will likely yield no perceivable difference. Or, per the earlier note about ending tests early, you are free to stop the test at whatever moment provides the most positive outlook for furthering your career, since nothing substantial will be gained either way.

Have proper expectations.

Booking.com constantly runs A/B tests. You might say they are experts. They have 75 product teams working on tests. Every 10-12 months the teams are completely rebuilt so that everyone stays fresh with new ideas.

Their success rate for A/B tests is 10%. Let that sink in.

Granted, they have a mature A/B testing system and have already dealt with the low-hanging fruit.

Is a 10% success rate too small for you? If the alternative is a 0% success rate (by not trying), you have to agree that 10% is better than nothing. Remember that these successes compound: two 10% wins yield a 21% improvement (better than 2 × 10%), because 1.10 × 1.10 = 1.21.
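For the skeptical, a few lines of arithmetic show where the 21% comes from; the lift values here are illustrative:

```python
# A quick sketch of how successive wins compound multiplicatively:
# each success lifts the conversion rate you already have.
rate = 1.0
for lift in (0.10, 0.10):        # two 10% wins (illustrative values)
    rate *= 1 + lift
print(f"{rate - 1:.0%} total improvement")  # 21%, not 20%
```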

What if I don’t have enough traffic to run A/B tests?

User interviews are your answer. You should be doing this whether or not A/B tests make sense. Talk to customers. Hear what they have to say. Watch them navigate your website.

Getting upper management on board.

A/B testing isn’t the be-all and end-all. It is a means of improvement. It usually won’t generate new ideas.

If I had asked people what they wanted, they would have told me faster horses, not cars.

Henry Ford

If upper management has bad ideas but is confident they will work, you should run those tests. Make sure to report the success rate back to them. Once they have data showing that customers don’t like the idea, they will often rethink it. However, we must have the humility to recognize that the idea might work, and if it does, we must be flexible enough to implement it the way this person wants.

Of course, you want to make sure that your sample size (the number of visits to each version of the test) is sufficient and that you are confident in your results.

But what if this test actually reduces sales while it is running?

Yes, this is a big deal. Many people in the optimization industry recommend never looking at a test while it is running.

Guido recommends letting it run for two weeks, then reviewing to see how it is performing. If there is a major under-performer, you want to deal with that ASAP.
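The episode doesn’t prescribe a specific statistic for this two-week check, but a one-sided two-proportion z-test is one common way to formalize “major under-performer”. A minimal sketch, with made-up counts:

```python
# A minimal sketch of an interim "guardrail" check: flag the variant
# only if it is performing significantly *worse* than the control.
# All counts are illustrative assumptions.
from math import sqrt
from statistics import NormalDist

def is_underperforming(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """One-sided two-proportion z-test: is variant B worse than control A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return NormalDist().cdf(z) < alpha    # significantly below control

# Example: after two weeks, control converts 300/10,000; the variant 220/10,000.
print(is_underperforming(300, 10_000, 220, 10_000))  # True -> intervene early
```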

And if it fails?

You are one step closer to better understanding what your customers want or don’t want. That’s really good.

