What is an AB test?
An AB test is an analytics method for comparing different versions of a game to identify which one performs better. The main idea is to split your users into different groups: group A, or the control group, who experience the regular game, and group B, who get a tweaked experience (different balancing, UI, etc.).
Why do we do AB testing?
The main reason for doing an AB test is to measure the potential impact a change will have on a game. But why don’t we simply apply a change and see what happens? Here comes the fun part!
I will use a simple example: let’s imagine that we launch a new feature in Dragon City at Christmas and we see an increase in revenue. “Oh, this is perfect!” you may say to yourself; “the feature has worked, we don’t need to worry about anything else”. But what if Christmas is organically a better period? What if there was an amazing marketing campaign going on at the same time that brought us better users? What if there was something else out there that was affecting our metrics and of which we had no control or knowledge?
That’s the reason why we need AB tests: if we randomly assign users to two distinct buckets during the exact same period, change each bucket’s individual experience, and compare their metrics, we can attribute any differences in the KPIs to the change itself, without producing biased results.
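As a minimal sketch of what such a random split could look like, the function below deterministically buckets users by hashing their id together with an experiment name. The scheme and names here are illustrative assumptions, not a description of any studio's actual system:

```python
import hashlib

def assign_group(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to bucket 'A' or 'B'.

    Hashing the user id together with the experiment name gives each
    user a stable, effectively random bucket for this experiment, so
    the same user always sees the same variant.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a fraction in [0, 1) and compare
    # it to the desired split ratio.
    fraction = int(digest[:8], 16) / 0x100000000
    return "A" if fraction < split else "B"
```

Because the assignment is a pure function of the inputs, no per-user state needs to be stored, and using the experiment name as part of the hash means the same user can land in different buckets across different experiments.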
First of all, we need a sound hypothesis: a question for which we need an answer. Then we need to choose a measure of success: which KPI are we trying to improve? ARPU, conversion, etc.
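To make the KPI choice concrete, here is a small sketch of how ARPU (average revenue per user) and conversion (share of paying users) could be computed per group. The record layout and field names are invented for illustration:

```python
def compute_kpis(users: list[dict]) -> dict:
    """Compute ARPU and conversion for a list of user records.

    Each record is assumed to be a dict with a 'revenue' field
    (0.0 for non-paying users); this schema is hypothetical.
    """
    n = len(users)
    if n == 0:
        return {"arpu": 0.0, "conversion": 0.0}
    total_revenue = sum(u["revenue"] for u in users)
    payers = sum(1 for u in users if u["revenue"] > 0)
    return {
        "arpu": total_revenue / n,        # revenue averaged over ALL users
        "conversion": payers / n,         # fraction of users who paid
    }
```

Running the same function over group A and group B gives directly comparable numbers for the analysis stage.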
The next step is to design the experiment: defining the groups and deciding which variation of the content each one will be presented with. Then, all the teams involved in the AB test process (client developers, BE, QA, etc.) will attend a kick-off meeting to align the teams’ objectives and ensure that the AB test is successful in every way possible.
Finally, we analyze the results. It’s not as simple as just looking at a number; there are several statistical tests that validate whether the difference between the groups is real or due to sheer chance. We are not going to explain all of the techniques used here, but we have made a lot of improvements and our AB testing is certainly in good hands. Hopefully the process will result in changes to the games that reflect the best possible option!
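One standard technique of this kind (shown here as an assumed example, not necessarily the one used in-house) is a two-proportion z-test, which estimates how likely an observed difference in conversion between the two groups would be if the change had no effect at all:

```python
import math

def two_proportion_z_test(payers_a: int, n_a: int,
                          payers_b: int, n_b: int) -> float:
    """Two-sided z-test comparing conversion rates of groups A and B.

    Returns the p-value: the probability of observing a difference at
    least this large if both groups really convert at the same rate.
    """
    p_a, p_b = payers_a / n_a, payers_b / n_b
    # Pooled conversion rate under the null hypothesis of "no difference".
    p = (payers_a + payers_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))
```

A small p-value (commonly below 0.05) suggests the difference is unlikely to be sheer chance; a large one means the test cannot distinguish the groups with the data collected so far.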
By: Sandra Saiz
Lead of Data Scientists