The A/B tester stops the test as soon as the results are comfortable and fit their opinion

Andre Morys, Web Arts
“Understand that most experimental results are wrong. There are still too many experiments being done to prove someone right or someone else wrong.”

“In most cases, the A/B tester stops the test as soon as the results are comfortable and fit their opinion. Psychologists call this ‘confirmation bias’ and most optimizers suffer from it.”

“You can avoid that by strictly separating the person who delivers the idea for testing from the person who decides when an experiment will stop.”

For example, if you suspect that a change to your product pages will increase clicks on the “Add to Cart” button, that expectation is likely to influence how you evaluate the test results: you will look for evidence that confirms your assumption. That could mean stopping the test early, running it for too long, or interpreting the results inaccurately.

That’s why you should run your tests for at least two business cycles and until your pre-calculated sample size is reached. It also helps to avoid “peeking” at your tests while they’re in progress.
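To make “pre-calculated sample size” concrete, here is a minimal Python sketch (standard library only) of the usual two-proportion approximation. The 3% baseline rate, the lift to 3.6%, the significance level, and the power are illustrative assumptions, not recommendations:

    # Minimal sketch: visitors needed per variation for a two-proportion A/B test.
    # Baseline rate, lift, alpha and power below are assumptions for illustration.
    import math
    from statistics import NormalDist

    def sample_size_per_variation(p_baseline, p_variant, alpha=0.05, power=0.80):
        """Approximate sample size per variation for a two-sided z-test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
        z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
        variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
        effect = abs(p_variant - p_baseline)
        return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

    # Example: detecting a lift from a 3.0% to a 3.6% "Add to Cart" rate
    print(sample_size_per_variation(0.030, 0.036))  # about 13,900 visitors per variation

Keep the test running until both that number and your full business cycles are reached, whichever takes longer.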

Be careful with testing tools! Many will tell you that the test is complete because statistical significance has been reached, which may lead you to stop the test prematurely. Statistical significance alone does not guarantee that the result is valid, so don't be fooled.
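A quick, hypothetical simulation (standard library only, all parameters invented for illustration) shows why. In an A/A test where both variations convert at exactly the same rate, checking a two-proportion z-test after every batch of visitors and stopping at the first p < 0.05 declares a “winner” far more often than the nominal 5% false positive rate:

    # Minimal sketch of the "peeking" problem: an A/A test (no real difference)
    # stopped as soon as a significance readout appears. All numbers are illustrative.
    import random
    from statistics import NormalDist

    def p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided p-value for a two-proportion z-test."""
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
        if se == 0:
            return 1.0
        z = (conv_a / n_a - conv_b / n_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    def peeking_false_positive_rate(rate=0.05, batch=200, peeks=20, runs=1000):
        stopped_early = 0
        for _ in range(runs):
            conv_a = conv_b = n = 0
            for _ in range(peeks):
                # Both variations share the same true conversion rate.
                conv_a += sum(random.random() < rate for _ in range(batch))
                conv_b += sum(random.random() < rate for _ in range(batch))
                n += batch
                if p_value(conv_a, n, conv_b, n) < 0.05:
                    stopped_early += 1  # the tool would call this "significant"
                    break
        return stopped_early / runs

    print(peeking_false_positive_rate())  # typically well above the nominal 5%

Evaluating once, at a fixed, pre-calculated sample size, keeps the error rate where you expect it to be.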

Finally, HubSpot’s Alex Birkett reminds us to be aware of the narrative fallacy, another cognitive bias, and how it impacts our test analysis:

Alex Birkett, HubSpot
“You can't really explain 'why' something worked or failed.”

“Yes, you can theorize and view the results of your experiment through a framework, lens, or heuristic that seems to clearly explain a win or loss, but it’s really just a story you tell yourself to simplify things after the fact (this is called the narrative fallacy).”

“For example, you might tell yourself that a certain testimonial banner experience worked because ‘our audience needs reassurance to make decisions, and this social endorsement helps fill that gap.’ And that might be true, but it could also be explained as ‘our audience’s attention was drifting away from key product elements on our page, and this testimonial banner helps direct their attention to the right place.’ Same test, same result, but a different story.”

What's the point of this? Why can't we just tell each other stories? What's the problem?

“The problem lies in the accumulation of evidence that reinforces confirmation bias. If you build up too much confidence in your predictive capabilities, you tend to avoid certain experiments because they don’t fit the narrative you’ve built about your audience and your CRO program. ‘This variation won’t work because the color blue is associated with sadness and our audience needs to be motivated with energetic colors’ isn’t really a valid reason to rule out a variation in the experiment.”

“Andrew Anderson said this in a CXL post and I love it: 'The moment something is “obviously” wrong or something is going to work “because…” is the moment your own brain shuts off. It’s the moment our own good intentions shift from doing what’s right to doing what feels best.'”

“My examples are super simplistic here, but they’re meant to illustrate this: stay humble, stop worrying about the explanation or the story behind your test. Instead, worry about the efficiency and ROI of your program, and how you can improve those aspects.”

Why does the story you tell yourself about what works and what doesn’t matter so much? Because it’s not enough to be aware of the cognitive biases that influence your ecommerce visitors and customers. You, too, are subject to those same biases, and they will creep into your testing results if you’re not careful.