Statistics For Marketers Cheat Sheet – Review | Kanza Akhwand

Do marketers need statistics? Yes. If you don't understand basic statistics, you can't evaluate test results or A/B testing case studies. So this article is a run-through of basic statistics for marketers. I'll cover:

  • Sampling – Populations, Parameters, & Statistics
  • Mean, Variance, and Confidence intervals
  • What statistical significance (p-value) is and isn’t
  • Statistical Power
  • Sample size and how to calculate it
  • Regression To The Mean & Sampling Error
  • and 4 Statistics Traps to Look Out For

Sampling

Sampling is needed for A/B testing because most of the time you can't test the entire population, so you test a sample of your users instead. Digital has opened the floodgates for A/B testing, and you can test pretty much everything. For example, you have two subject lines for an email campaign and you want to see which one converts better. What should your sample size be?

Before every test we formulate a null hypothesis, which postulates that the conversion rates of both subject lines are the same.

Before running the experiment you need to establish three criteria:

  • Significance Level
  • Minimum Detectable Effect
  • Test Power

Significance level is the probability of seeing an effect that actually happened by chance. A 5% significance level means that if there is really no difference, you accept a 5% risk of rejecting your null hypothesis and declaring a winner anyway (a false positive). Minimum detectable effect: the smallest difference between the rates that you care about detecting.
Test power: the probability of actually detecting that difference between the original rate and the variant's conversion rate when it exists.
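To make this concrete, here is a minimal sketch of a sample-size calculation using Python's statsmodels. The 11% baseline rate and 13% target rate (baseline plus minimum detectable effect) are illustrative assumptions, not numbers from a real campaign:

```python
# Sketch of a sample-size calculation for a two-variant subject line test.
# Baseline and target rates below are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.11   # current conversion rate (assumed)
target_rate = 0.13     # baseline + minimum detectable effect (assumed)
alpha = 0.05           # 5% significance level
power = 0.80           # 80% test power

effect_size = proportion_effectsize(target_rate, baseline_rate)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(f"Recipients needed per subject line: {round(n_per_variant)}")
```

Tightening the minimum detectable effect or raising the power both push the required sample size up, which is why those two choices matter before you hit send.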

Mean

Mean is the average value in a collection of numbers. It is calculated by adding all the numbers in a data set and then dividing by the number of values in the set.

Variance is defined as the average of the squared differences from the mean. In practice that means calculating the mean, subtracting it from each number in the data set, squaring the results, and averaging those squares. Why square? Because if you just added up the differences from the mean, the negatives would cancel out the positives.

Standard deviation is a measure of how spread out the numbers are, and it's calculated by taking the square root of the variance. We can use standard deviation to measure what "normal" is and what falls outside the bounds.
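As a quick sketch, here is how those three quantities relate, using a small made-up data set of daily conversions:

```python
# Mean, variance, and standard deviation for a small illustrative data set.
import math

daily_conversions = [12, 15, 9, 20, 14]   # made-up numbers

mean = sum(daily_conversions) / len(daily_conversions)
variance = sum((x - mean) ** 2 for x in daily_conversions) / len(daily_conversions)
std_dev = math.sqrt(variance)             # square root of the variance

print(f"mean={mean}, variance={variance:.1f}, std dev={std_dev:.1f}")
```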

Confidence intervals express the amount of uncertainty (or allowed error) around an estimate in A/B testing: a range of values that likely contains the true rate. They are usually stated together with a margin of error.
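For example, here's a minimal sketch of a 95% confidence interval for a conversion rate, assuming a hypothetical 110 conversions out of 1,000 emails:

```python
# 95% confidence interval for a conversion rate (normal approximation).
# The counts below are illustrative assumptions.
from statsmodels.stats.proportion import proportion_confint

conversions = 110
recipients = 1000

lower, upper = proportion_confint(conversions, recipients, alpha=0.05, method="normal")
print(f"Observed rate: {conversions / recipients:.1%}")
print(f"95% confidence interval: {lower:.1%} to {upper:.1%}")
```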

P-Value

The p-value is a number, calculated from a statistical test, that describes how likely you are to have found a particular set of observations if the null hypothesis were true. P-values are used in hypothesis testing to help decide whether to reject the null hypothesis. For instance, in the previous subject line example I expect at least an 11% CvR, but it could come in higher or lower, and the question is how much lower can be explained by chance alone. The p-value answers how far your observed result is from what the null hypothesis would predict. A threshold of 0.05 is often used for the p-value, which means that anything under 0.05 indicates there is less than a 5% chance that the result you are seeing is due to chance.

Confidence is calculated by subtracting the p-value from 1, so if the p-value is 0.05, our confidence is 95%. In that sense, the p-value tells you the probability of obtaining this result as a false positive when the null hypothesis is actually true.
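Here is a minimal sketch of how you'd get a p-value from a two-subject-line test with statsmodels; the conversion counts are made up for illustration:

```python
# Two-proportion z-test comparing conversion rates of two subject lines.
# Counts and sample sizes are illustrative assumptions.
from statsmodels.stats.proportion import proportions_ztest

conversions = [110, 135]     # conversions for subject line A and B
recipients = [1000, 1000]    # emails sent per subject line

z_stat, p_value = proportions_ztest(conversions, recipients)
confidence = 1 - p_value

print(f"p-value: {p_value:.3f}, confidence: {confidence:.1%}")
if p_value < 0.05:
    print("Reject the null hypothesis: the subject lines likely convert differently.")
else:
    print("Not enough evidence to reject the null hypothesis.")
```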

Statistics Traps/ Mistakes

Regression to the Mean and Sampling Error:

One of the biggest statistics traps is stopping too early: the experiment (or A/B test, if that's what you want to call it) must run long enough to give a clear answer. Regression to the mean is all about how data evens out. It basically states that if a variable is extreme the first time you measure it, it will be closer to the average the next time you measure it. You can calculate how long a test should run with a test duration calculator.
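A test duration calculator is essentially the required sample size divided by your traffic. A minimal sketch, with hypothetical sample-size and traffic figures:

```python
# Rough test duration estimate: required sample divided by daily traffic.
# The sample size and traffic figures are illustrative assumptions.
import math

sample_per_variant = 3800      # e.g. from a sample-size calculation
num_variants = 2               # control + one challenger
daily_visitors = 900           # visitors (or emails) entering the test per day

total_sample = sample_per_variant * num_variants
days_needed = math.ceil(total_sample / daily_visitors)
print(f"Run the test for at least {days_needed} days.")
```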

Too Many Variants:

Optimization should be hypothesis-based, not based on randomly throwing out experiments. Having too many variants also means the chance of a false positive is greater. Many common testing tools are built to correct for the extra error introduced by testing many variants.
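To see why more variants inflate false positives, here is a small sketch of the family-wise error rate and a Bonferroni-style correction, one common way of handling it (whether your particular tool uses this exact method is an assumption):

```python
# How the chance of at least one false positive grows with the number of variants,
# and a simple Bonferroni-style correction of the significance threshold.
alpha = 0.05

for num_variants in (1, 3, 5, 10):
    # Probability of at least one false positive across independent comparisons.
    family_wise_error = 1 - (1 - alpha) ** num_variants
    # Bonferroni correction: test each variant at a stricter threshold.
    corrected_alpha = alpha / num_variants
    print(f"{num_variants} variants: ~{family_wise_error:.0%} false-positive risk, "
          f"corrected threshold {corrected_alpha:.4f}")
```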

Click Rate and Conversion Rate:

The next trap we often see is forgetting that the end goal is the macro goal of conversions. If one variant has a higher click rate but that isn't ultimately helping your conversion rate, then more clicks don't translate into more purchases. Select a main KPI before you start; if you select many different ones, it gets confusing. People also tend to forget that KPIs are often tied together. For example, if your KPI is increasing traffic, that increase often comes with a decrease in conversion rate. And vice versa: if your conversion rate is increasing, check whether your revenue and orders are also increasing, because maybe there is an issue and you aren't getting as much traffic as before.

Frequentist vs Bayesian test procedures:

This one is more of a philosophical difference between Frequentist statistics and Bayesian statistics. In summary, from a Bayesian point of view a hypothesis is assigned a probability, while from a Frequentist point of view a test is run without the hypothesis being assigned a probability.
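To illustrate the Bayesian side, here is a minimal sketch that assigns a probability directly to the hypothesis "subject line B beats A", using Beta posteriors and made-up conversion counts:

```python
# Bayesian A/B sketch: estimate the probability that variant B's conversion
# rate beats variant A's, using Beta posteriors. Counts are illustrative.
import numpy as np

rng = np.random.default_rng(0)

conversions_a, sent_a = 110, 1000
conversions_b, sent_b = 135, 1000

# Beta(1, 1) prior updated with observed successes and failures.
samples_a = rng.beta(1 + conversions_a, 1 + sent_a - conversions_a, size=100_000)
samples_b = rng.beta(1 + conversions_b, 1 + sent_b - conversions_b, size=100_000)

prob_b_beats_a = (samples_b > samples_a).mean()
print(f"P(B's conversion rate > A's): {prob_b_beats_a:.1%}")
```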

I hope you had fun reading this. I'm attaching a glossary in case there are words you don't understand.

Glossary:

  • Population is the entire group that you want to draw conclusions about or learn more about.
  • A sample is a specific group you collect data about.
  • Parameter summarizes an aspect of a population, e.g. mean or standard deviation.
  • Statistics are numbers that summarize data from a sample as a subset of a population.
  • Mean is the average value in a collection of numbers.

Part of my series on the CXL Institute Growth Marketing Mini Degree. See more posts below: