
A/B Tests

Welcome to the A/B testing documentation for the Beam Studio Dashboard. Our platform offers an efficient, robust system for running A/B tests on your recommendation scenarios. These tests are crucial for making data-driven decisions and optimizing the user experience.

A/B testing on our dashboard allows you to compare the performance of two or more scenarios, which are combinations of filters, algorithms, and business rules. This feature lets you experiment with different recommendation strategies and quantitatively understand their impacts.

As you navigate through the A/B testing section, you'll find options to set up and manage A/B test splits and flows. Here, you can define the scenarios you want to compare, monitor the results, and gain insights to improve your recommendations.

This documentation explains how to use the A/B testing feature efficiently, so you can boost the performance of your recommendations and make effective business decisions based on solid data.

Concepts

Definition

It is essential to distinguish between an “A/B Test Split” and a “Scenario A/B Test”:

  1. A/B Test Split: this is a backend concept that determines how users are distributed for testing, i.e. how we segment the users who will be subjected to a given test. Beam Studio's A/B test endpoints can be used to evaluate any feature you wish to test, as long as the variables under test are set up appropriately on your end. While testing different types of recommendations is a common use case, the A/B testing capability isn't confined to this: it can also be used to evaluate banners, landing pages, user experience changes, and more (see the sketch after this list). You can find more details about this in the section titled "More Than Recommendations" below.
  2. Scenario A/B Test: this component is a “scenario” that points to, and takes into consideration, the following elements:
  • The “AB Test” unique ID, which refers to the A/B Test Split and defines the distribution of users
  • Scenario A, the scenario served to users sent to group A by the “AB Test” component
  • Scenario B, the scenario served to users sent to group B by the “AB Test” component
Note: the same “A/B Test” ID can be used by several “Scenario A/B Tests”.
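
For instance, here is a minimal sketch of gating a non-recommendation feature, such as a banner, on the group returned by the split. The helper get_ab_group and its return value are hypothetical, used only to illustrate the pattern; they are not part of the actual API:

    # Hypothetical helper: in practice this would call Beam Studio's
    # A/B test split endpoint and return the user's assigned group.
    def get_ab_group(user_id: str) -> str:
        return "A"  # stub; wire this up to the split endpoint

    def pick_homepage_banner(user_id: str) -> str:
        # The same split that drives recommendation scenarios can also
        # gate any other feature under test, e.g. a banner design.
        if get_ab_group(user_id) == "A":
            return "banner_current.png"
        return "banner_redesigned.png"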

From a backend perspective, a “Scenario A/B Test” can be pictured as a small structure tying the A/B Test ID to its two candidate scenarios. Below is a minimal sketch of how such a structure might look; the field names and IDs are illustrative assumptions, not the actual schema:
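
    # Illustrative sketch of "Scenario A/B Test" objects; the field names
    # are assumptions for the example, not Beam Studio's actual schema.
    scenario_ab_test_pdp = {
        "name": "Product Detail Page (PdP)",
        "ab_test_id": "ab-split-001",          # points to the A/B Test Split
        "scenario_a": "pdp_popular_products",  # served to users in group A
        "scenario_b": "pdp_personalized",      # served to users in group B
    }

    # Several Scenario A/B Tests can reference the same A/B Test ID,
    # as stated in the note above:
    scenario_ab_test_homepage = {
        "name": "Homepage",
        "ab_test_id": "ab-split-001",
        "scenario_a": "home_popular_products",
        "scenario_b": "home_personalized",
    }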

In the dashboard, the A/B testing section lists all the "A/B Test Scenarios" associated with one "A/B Test Split".

Illustration

Let's clarify these concepts with an example. Suppose we have three A/B test scenarios named "Product Detail Page (PdP)", "Homepage", and "Cart". All of these scenarios are linked to the same A/B test split, denoted by a unique A/B Test ID.

So, when these scenarios are invoked, the system first checks which group (A or B) the user has been assigned to per the A/B test split. Say a user has been assigned to Group A: whenever they interact with any of the three scenarios ("PdP", "Homepage", "Cart"), they will always be served the Scenario A of each.

In this example, "PdP", "Homepage", and "Cart" are all mapped to the same A/B test split, and therefore to the same A/B Test unique ID, so any user assigned to Group A receives Scenario A across all of these touchpoints. This consistency enables use cases such as the following (a sketch of the underlying assignment logic appears after these examples):
  • Multi-touchpoint Testing: Imagine you're testing a fundamental shift in your product recommendation strategy. Instead of performing A/B tests individually for different sections like "Homepage", "Product Detail Page (PdP)", and "Cart", you want to provide a consistent experience across all these touchpoints. Scenario A might be using a basic recommendation strategy, prioritizing popular products, while Scenario B might use an advanced strategy, emphasizing personalized recommendations based on user behavior. Once a user is assigned to a group (A or B) based on the A/B test split, they will experience a consistent recommendation strategy throughout the entire website, ensuring a coherent testing environment.
  • Full Journey A/B Test: Let's say you wish to overhaul the whole user journey experience on your website. Scenario A could be the current setup, while Scenario B involves significant changes including a new homepage layout, different navigation menus, personalized content placement, etc. Once a user is assigned to a group (A or B) following the A/B test split, they will consistently see the same scenario throughout their whole interaction with the site. This allows you to assess the impact of broad changes on user behavior.

In these examples, the powerful idea is the ability to perform A/B tests not just on isolated sections, but across the entire customer journey. By doing so, you're able to examine how changes in different touchpoints interplay and impact overall user experience and conversion.
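
To make the consistency guarantee concrete, here is a sketch of one common way such a split can be implemented: a deterministic hash of the user ID together with the A/B Test ID, so the same user always lands in the same group for a given split, whichever touchpoint makes the call. This illustrates the general technique only, not Beam Studio's actual implementation:

    import hashlib

    def assign_group(user_id: str, ab_test_id: str, percent_a: int = 50) -> str:
        # Hash (ab_test_id, user_id) into a bucket from 0 to 99; users in
        # the first `percent_a` buckets go to group A, the rest to group B.
        # Generic sketch, not Beam Studio's actual implementation.
        digest = hashlib.sha256(f"{ab_test_id}:{user_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 100 < percent_a else "B"

    # The same user gets the same group whether the request comes from the
    # "PdP", "Homepage", or "Cart" Scenario A/B Test, because all three
    # share one ab_test_id.
    group = assign_group("user-42", "ab-split-001")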

Creating an A/B Test

