Show Directional Results

I was once asked to help marketing set up an A/B test to evaluate the effectiveness of their email series aimed at converting users into trial starts and paying members. I was pulled away to work on other projects, and the A/B test couldn't be evaluated until a marketing data analyst was hired. Only then did we discover the test hadn't been set up correctly, after it had already been running for six months: the control and test group proportions weren't the 50/50 split we had originally intended.

If this had been a class on A/B testing, you would've received perfect test data, with the proper 50/50 split and a sufficiently large sample size, and proceeded to evaluate statistical significance. None of those conditions were met in the actual email test. We couldn't tell marketing we had to rerun the test and wait another six months. How did we salvage this test with imperfect data?

Statistical significance was off the table because the data didn't meet the criteria for a proper A/B test. Cohort analysis was the best way we could come up with to salvage the results. We segmented users into the control and test groups, then broke each group down by whether they clicked or opened the email versus those who didn't, and compared product engagement, trial start rates, and conversion to paying members.
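Here's a minimal sketch of what that cohort breakdown might look like in pandas. The column names (group, opened_email, started_trial, converted) are hypothetical stand-ins for whatever the actual email and billing data contained, not the real schema from the test:

```python
import pandas as pd

# Hypothetical user-level data joined from email and billing systems.
# Column names are illustrative placeholders, not the actual schema.
users = pd.DataFrame({
    "user_id":       [1, 2, 3, 4, 5, 6],
    "group":         ["control", "control", "control", "test", "test", "test"],
    "opened_email":  [True, False, True, True, True, False],
    "started_trial": [False, False, True, True, False, False],
    "converted":     [False, False, True, True, False, False],
})

# Segment by test group and email engagement, then compare rates
# across cohorts rather than testing for significance.
cohorts = (
    users
    .groupby(["group", "opened_email"])
    .agg(
        users=("user_id", "count"),
        trial_start_rate=("started_trial", "mean"),
        conversion_rate=("converted", "mean"),
    )
)
print(cohorts)
```

The output is a simple rate comparison across cohorts, which is exactly what directional results are: differences you can point to, without a significance test behind them.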

In the stakeholder presentation, the results were flagged as directional only, since they didn't meet the criteria for statistical significance. Marketing was happy, though, because the test group showed higher engagement and trial start rates than control, even if the difference wasn't significant. The reality is that stakeholders need to report results to their boss, and positive results, even without statistical significance, beat showing negative ones.

You may wonder how we would've handled the results if the test group had ended up with lower engagement than control. In that case, we might've analyzed each email in the series to identify the ones in the test group with lower engagement or a lower trial start rate than control, as sketched below. There are endless ways for an A/B test to go wrong, and adapting to show directional insights is one way to salvage the results.
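That per-email fallback is the same breakdown run one level deeper. A hypothetical sketch, assuming event-level send data with an email_id column identifying each message in the series (again, illustrative names only):

```python
import pandas as pd

# Hypothetical event-level data: one row per email sent to a user.
sends = pd.DataFrame({
    "email_id": ["welcome", "welcome", "tips", "tips", "offer", "offer"],
    "group":    ["control", "test", "control", "test", "control", "test"],
    "opened":   [True, True, False, True, True, False],
})

# Open rate per email per group; pivoting to a wide layout makes it
# easy to spot which emails in the test variant underperform control.
per_email = (
    sends
    .groupby(["email_id", "group"])["opened"]
    .mean()
    .unstack("group")
    .assign(lift=lambda df: df["test"] - df["control"])
)
print(per_email)
```

A negative lift on a specific email would point to where the test variant is dragging the series down, which is a more actionable finding than an overall loss.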

Takeaway: In the absence of perfect data, segment your users to find directional insights. Stakeholders don't need perfection. Sometimes guidance in the right direction is enough until better data comes along.
