
How to Use Atomic Design to Improve A/B Testing

by Yona Gidalevitz

Contrary to popular belief, A/B testing was not invented for design or marketing. “A/B test” is just a colloquial way of saying “controlled experiment”, and those have been around for centuries.

You may be thinking, what does that have to do with my ability to conduct one?

Well, the key to any successful (read: valid) controlled experiment is the scientific method.

Unfortunately, A/B tests don’t always follow the scientific method. A true controlled experiment must:

  • Define an explicit hypothesis
  • Maintain fundamental control over the test environment
  • Define and separate dependent and independent variables
  • Identify and eliminate confounding variables
  • Be reproducible

At Codal, our UX design teams recently experimented with a number of ways to apply atomic design to controlled A/B testing.

Based on concepts borrowed from chemistry, atomic design describes the complex (and simple) relationships between micro- and macro-design elements.

In this article, I will discuss this unexplored use case for atomic design, and show you how to use it for the most scientifically driven A/B tests around.

1. Use Atomic Design to Define an Element-Level Hypothesis

Atomic design makes it astoundingly easy to get very, very specific with your hypotheses. If you're unfamiliar with how atomic design is structured, you can think of it like this.

Image: atomic design representation by Brad Frost

Think of any site you visit frequently, or even one you've worked on. There are tiny elements, and there are massive elements. Almost always, the tiny elements fit into (and make up) the massive ones.

In conjunction with the image above, let's define the variable types used in A/B tests (a short code sketch of this hierarchy follows the list).

  • Atoms: these are the absolute smallest elements on a page. Think icons, buttons, text fields, etc.

  • Molecules: these are complete elements; they consist of multiple atoms. Think search bars, option toggles, drop-down menus, etc.

  • Organisms: these are complete structures; they consist of multiple molecules. Think navigation bars, blog grids, sidebars, etc.

  • Templates: these are essentially wireframes; collections of implemented organisms.

  • Pages: these are pixel-perfect, themed implementations of templates.
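
To make that hierarchy concrete, here is a minimal sketch in TypeScript. The component names (SearchBarMolecule, NavBarOrganism, and so on) are hypothetical, not part of any particular design system; the point is simply that each level is composed of the level below it.

// Hypothetical atom-level components: the smallest testable units.
type Atom = { kind: "icon" | "button" | "textField"; props: Record<string, string> };

// A molecule is a complete element built from several atoms,
// e.g. a search bar = icon + text field + submit button.
interface SearchBarMolecule {
  icon: Atom;
  input: Atom;
  submit: Atom;
}

// An organism is a complete structure built from molecules and atoms,
// e.g. a navigation bar containing a logo, links, and the search bar.
interface NavBarOrganism {
  logo: Atom;
  links: Atom[];
  search: SearchBarMolecule;
}

// A template is essentially a wireframe: a collection of organisms.
// A page would be a themed, content-filled instance of a template.
interface LandingTemplate {
  header: NavBarOrganism;
  // ...other organisms (hero, blog grid, footer) would sit alongside it
}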

When constructing a testable hypothesis, the biggest challenge facing an A/B tester is making that hypothesis explicit. And how explicit a hypothesis can be depends entirely on the size of the variable being tested.

Here’s what a bad hypothesis looks like for a hypothetical non-profit site:

“Adding more information for visitors on the landing page will increase the number of donors.”

Now, let’s say you run the test and see a conversion rate increase of 10% at a 95% confidence level.

The hypothesis, however, is poorly defined, not explicit, and difficult to prove or reproduce. What specific information helped increase the number of donors? The increased conversion rate can hardly be attributed to one reproducible element.

The issue is very simple: a lack of explicit variables (independent, dependent, confounding, etc.) in the hypothesis. As a result, the test was ambiguous, and thus unscientific and far from reproducible.
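
For reference, the statistical side of such a result is simple to check; it's the variables that are the problem. Here is a minimal sketch, in TypeScript with made-up numbers, of the kind of two-proportion z-test a testing tool runs to decide whether a lift is significant at the 95% confidence level. Note that nothing in this math tells you which element caused the lift.

// Two-proportion z-test: is variant B's conversion rate significantly
// different from variant A's at the 95% confidence level?
function isSignificantAt95(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): boolean {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  // Pooled conversion rate under the null hypothesis (no difference).
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB),
  );
  const z = (pB - pA) / standardError;
  // 1.96 is the two-tailed critical value for 95% confidence.
  return Math.abs(z) > 1.96;
}

// Hypothetical numbers: a 10% relative lift (5.0% -> 5.5% conversion).
console.log(isSignificantAt95(500, 10000, 550, 10000)); // false: not enough data yet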

So, how do you use atomic methodology to pick appropriate variables?

2. Atomic Design Breeds Well-defined Variables

Due to the nature of atomic design, designers can focus on the effects of components of all sizes and functions. Whether they're small or large, you have strong (and documented) control over every variable you test, because atomic design categorizes site elements by their size.

The variable types listed above do not have identical levels of impact on the results of A/B tests, nor are they equally weighted in terms of scientific accuracy. When conducting an A/B test, the smaller the independent variable, the more scientific the test, because a smaller variable has fewer dependencies.

There’s a simple explanation: larger variable types, like organisms, consist of multiple smaller variables—this means that you don’t know which part of the organism actually had an effect on the observed data.

Use this knowledge as you develop variables for your A/B test hypotheses.

Always break your independent and dependent variables down to their smallest sizes. For the most part, your dependent variables will be conversions, so it’s hard to break them down. But when it comes to your independent variables, the smaller the better.

There are many ways to illustrate this concept, but Codal’s UX designers like to use the navigation bar to explain the scientific validity of each variable type.

Image: Call Potential nav bar, one of Codal's projects

If a UX designer were to conduct an A/B test on the above navigation bar, they may be tempted to make a blanket statement like this: increasing the visibility of the search bar will increase its usage.

The subsequent A/B test might consist of switching out the search bar for a different style or color, and measuring the impact.

But there is a danger in doing so.

Which specific aspect of the search bar is being measured for impact? It'd be difficult to answer that question. Is it the color of the bar? The placement of the text? The search prompt? It's too easy to make a blanket change to an element instead of changing it incrementally.

A better way to A/B test the search bar would be to follow this process (sketched in code after the list):

1. Identify the conversion path you want to test

2. Figure out which elements are related to it. Start with larger elements, and work your way down to smaller and smaller components until you have a list of different “atoms” you’d like to test

3. Test each atom individually and systematically to eliminate confounding variables

4. Make sure nothing outside of the selected conversion path changes while you conduct these tests – that is the mark of a truly scientific study
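
A rough sketch of step 2, assuming a hypothetical nav-bar organism like the one above: describe the organism as a tree and walk it down through its molecules until only atom-level candidates remain.

// Hypothetical tree of an organism, broken down the atomic way.
interface Element {
  name: string;                          // e.g. "nav bar", "search bar", "search icon"
  level: "organism" | "molecule" | "atom";
  children?: Element[];
}

// Step 2: start with the organism on the conversion path and work down
// until only atoms remain; these are the candidates to test individually.
function atomCandidates(element: Element): Element[] {
  if (element.level === "atom") return [element];
  return (element.children ?? []).flatMap(atomCandidates);
}

const navBar: Element = {
  name: "nav bar",
  level: "organism",
  children: [
    {
      name: "search bar",
      level: "molecule",
      children: [
        { name: "search icon", level: "atom" },
        { name: "text field", level: "atom" },
        { name: "submit button", level: "atom" },
      ],
    },
  ],
};

console.log(atomCandidates(navBar).map((a) => a.name));
// -> ["search icon", "text field", "submit button"]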

So, in the example above, rather than change the entire search bar, one ought to make a list of sub-elements (atoms) worth investigating, and test them individually. Such a list might look like this:

1. The search icon (color)

2. The text field (size; color)

3. Placement (center of header)

You could make an endless list if you tried. Regardless of length, keep in mind that each change to any particular element (color, size, etc) must be tested individually.
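
Translating that list into test definitions might look something like the sketch below (the colors, sizes, and placements are made up). The key constraint is baked into the data: each test changes exactly one property of exactly one atom, and the tests run one at a time.

// One atomic A/B test: a single atom, a single property, two values.
interface AtomicTest {
  atom: "search icon" | "text field" | "placement";
  property: string;          // the one thing that changes
  control: string;           // value shown to the A group
  variant: string;           // value shown to the B group
}

// The list above, expressed as individually runnable tests.
// Note: the size and color of the text field are two separate tests.
const searchBarTests: AtomicTest[] = [
  { atom: "search icon", property: "color",     control: "grey",         variant: "blue" },
  { atom: "text field",  property: "size",      control: "200px",        variant: "320px" },
  { atom: "text field",  property: "color",     control: "white",        variant: "light grey" },
  { atom: "placement",   property: "alignment", control: "right of nav", variant: "center of header" },
];

// Run them one at a time, never in parallel on the same page,
// so no two changes can confound each other's results.
for (const test of searchBarTests) {
  console.log(`queue: ${test.atom} ${test.property}: ${test.control} vs ${test.variant}`);
}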

The point is to use the list above to make a good, educated guess with your hypothesis. Sometimes, you'll have multiple hypotheses. Often, the results from one test will spur a new hypothesis for another.

Such is the nature of science.

3. Atomic A/B Tests Generate More Reproducible Results

How many times have you read an article like this: “Company A increased CTR by XX% by changing just one thing”? Have you ever tried to replicate those results?

Good luck.

I doubt the testers who made those claims could replicate their own results on another page. There are a couple of reasons for this—the most obvious being that you can’t just apply the same variable changes to a completely different test environment and expect consistency.

Not to beat a dead horse, but the other prominent issue is just a matter of explicitly defined variables. It would be much easier to generate reproducible results if the case study in question said something along the lines of:

Changing the color of the search button from blue to white will bring enough contrast to the search bar that users will notice it and engage with the search function more often. Here are 30 separate tests to show reproducibility.

Reproducibility is so important. It is validity. After all, there's a reason reproducibility is a major criterion for publication in scientific journals.

Atomic methodology enables reproducibility by design.

The very nature of atomic design demands variable specificity. If you can show a particular “atom” to be tied to increases in one variable or another, then you’re giving readers a rigid framework for reproducing your results.

And if you’re not sure, it’s better to say “I changed this organism and it affected conversion rates, but I don’t know which molecules or atoms were responsible—here’s what was in there” than to say “changing this organism to a blue theme changed conversions”.

After all, was it the blue button? The white font on the blue background? The contrast created between the homepage and the search bar? There are many unanswered questions if you take the latter route.
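
To make the reproducibility point concrete, here is a small sketch that reuses the isSignificantAt95 helper from earlier: run the same atom-level test across several comparable pages or time windows and count how many runs reproduce a significant lift in the same direction. The TestRun shape and the threshold logic are illustrative, not a prescription.

// One completed run of the same atom-level test (e.g. blue -> white search button).
interface TestRun {
  conversionsA: number; visitorsA: number;
  conversionsB: number; visitorsB: number;
}

// Count how many independent runs reproduced a significant lift for B.
// A result that shows up in only 3 of 30 runs is noise, not a finding.
function reproducibility(runs: TestRun[]): number {
  return runs.filter((r) =>
    isSignificantAt95(r.conversionsA, r.visitorsA, r.conversionsB, r.visitorsB) &&
    r.conversionsB / r.visitorsB > r.conversionsA / r.visitorsA
  ).length;
}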

Conclusion: Keep A/B Tests Scientific with an Atomic Mindset

If you want real conversion optimization, you have to be scientific and methodical.

And if you’re ready to take the “scientific plunge”, so to speak, there simply isn’t a better way to run scientific, methodical A/B tests than the atomic methodology. Not only is it easy to implement, but it gives your results validity.

Of course, atomic methodology alone will not guarantee valid results. You have to use it in conjunction with a rigid system of control over the test environment, as well as with incremental, systematic test practices.

Once you commit to all of the aspects of a scientific A/B test, you may discover that there is a wealth of easily attainable data at your fingertips, without the feeling that you’re taking a shot in the dark.

For more UX advice, download the free e-book Interaction Design Best Practices: Volume 1

by Yona Gidalevitz

Yona is a technical researcher at Codal. At Codal, he is responsible for content strategy, documentation, blogging, and editing. In his free time, Yona is an avid guitarist, cook, and traveler.
