5 Questions And Answers On Client vs. Server-Side Tests

Experimentation cultures are built on a willingness to test, iterate, and optimize product experiences to meet customer needs. The question for product, marketing, and engineering managers is: how do you go about doing that?

We recently hosted a webinar outlining the differences between two experimentation solutions – client-side testing and server-side testing. The key takeaway from the webinar is that there are pros and cons to both options, and that it’s best for your team to decide:

  • Why you need to run experiments
  • What types of tests you want to run
  • How you should measure the effectiveness of those tests

All of that being said, we did receive a few questions from the webinar that we'd like to answer in greater detail. This post answers those questions as thoroughly as possible.

How do you run experiments with a small engineering team?

There’s a running myth that companies with large engineering teams have enough resources and bandwidth to do more complex forms of testing. That myth strongly hints that server-side testing is the optimal solution in those instances.

There is some truth to that myth, but the reality is that it all depends on how much bandwidth your engineering department has. It helps for each product or marketing team to determine the complexity of the experiment in question, then align with engineering on whether that particular test will require a lot of their time.

If the answer is ‘yes, it will require a lot of their time,’ make sure there is enough bandwidth among the engineers who know the ins and outs of the server to implement a proper server-side experiment. If the answer is ‘no, those experiments don’t require a lot of technical support,’ the product and marketing teams would benefit from implementing more client-side tests. That lets lighter, less technical experiments run independently of the engineering department, and allows engineers to focus solely on the elements of an experiment that require their support.

What’s the ideal length of time to run experiments?

This is another ‘it depends’ type of question.

Some of our clients have been able to build out all of the requirements for a server-side test in as little as 3-5 days of development work. In other instances, where the experiments are more complex, setup has required a full sprint.

In terms of running the experiment itself, the minimum recommended duration for either a server-side or client-side experiment is two weeks. This gives you enough leeway to monitor user behavior on the page or in the app across both weekday and weekend periods, allowing you to analyze peaks and lulls in traffic.

The key component to consider is the size of your user base. To draw real conclusions from the experiment, you need enough traffic volume to understand what works and what doesn’t. If there’s a lot of activity on your site, you can draw those conclusions closer to that two-week mark. If activity is low, you may need more time to reach concrete conclusions.
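To make "enough traffic volume" concrete, here is a minimal sketch of the standard two-proportion sample-size calculation. This was not part of the webinar; the 5% significance level, 80% power, and the baseline, lift, and traffic figures are all illustrative assumptions.

```typescript
// z-scores for a two-sided 5% significance level and 80% power (assumptions).
const Z_ALPHA = 1.96;
const Z_BETA = 0.8416;

// Users needed per variant to detect a relative lift in a conversion rate,
// using the standard two-proportion z-test approximation.
function sampleSizePerVariant(baselineRate: number, relativeLift: number): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    Z_ALPHA * Math.sqrt(2 * pBar * (1 - pBar)) +
    Z_BETA * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator / (p2 - p1)) ** 2);
}

// Example: a 4% baseline conversion rate and a 10% relative lift need
// roughly 39,500 users per variant.
const perVariant = sampleSizePerVariant(0.04, 0.1);
// At ~5,000 eligible visitors per day split across two variants, that is
// about 16 days of traffic — consistent with the two-week minimum above.
console.log(perVariant, Math.ceil((perVariant * 2) / 5000));
```

The takeaway matches the guidance above: a high-traffic site clears the required sample size within the two-week window, while a low-traffic site needs to run longer.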

How do you run cross-channel experiments?

The ability to manage SDKs across devices is a key variable in answering this question. You want to make sure your team can implement and monitor experiments across all of those devices to get a true cross-channel understanding of how users respond to your experiments.

If you have dedicated resources to individually support web, mobile, and OTT experiments, you can probably manage those tests with a client-side solution.

However, in this particular use case, server-side testing could be the simpler solution. A team that implements cross-channel experiments directly from the server can simplify the development process, reduce the resources required to build the experiment, and capture all of the user behavior data in one centralized location. If your team has the bandwidth to build a server-side experiment, this is the ideal approach to a cross-channel test.
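As a sketch of why the server-side route centralizes things, the snippet below shows one common assignment pattern: deterministically hashing a stable user ID so every channel that calls the server gets the same variant. The function names and hashing choice are illustrative assumptions, not any particular vendor's API.

```typescript
import { createHash } from "node:crypto";

function assignVariant(userId: string, experimentKey: string, variants: string[]): string {
  // Hash the experiment key together with the user ID so the same user can
  // land in different buckets across different experiments.
  const digest = createHash("md5").update(`${experimentKey}:${userId}`).digest();
  // Map the first 4 bytes of the hash onto the list of variants.
  const bucket = digest.readUInt32BE(0) % variants.length;
  return variants[bucket];
}

// Web, mobile, and OTT clients that send the same userId all get the same
// variant, and the exposure event can be logged in this one place.
console.log(assignVariant("user-123", "checkout-redesign", ["control", "treatment"]));
```

Because assignment is a pure function of the user ID, no per-device SDK state has to be kept in sync; each channel just asks the server which experience to show.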

Can you explain why the “flicker effect” is such a concern?

As we stated in the webinar, client-side experiments can cause what’s known as the “flicker effect.” Essentially, a live webpage or in-app experience renders on the device and is then suddenly overridden by the elements of the experiment.

When a device loads an experience, a network request is sent to the server hosting that experience in order to render the page layout on the device. A client-side experiment then sends a second network request to pull in the elements of the experiment itself. The visible flash that occurs when this second request overrides the already-rendered page is what’s known as the “flicker effect.”

Users will notice the change if messaging or visuals suddenly update once the second request resolves. The multiple requests sent to the server may also affect page or app loading times.
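One widely used mitigation is an "anti-flicker" guard: hide the page until the experiment has applied its changes, with a timeout so a slow or failed request never leaves the page blank. The sketch below is illustrative rather than any specific tool's implementation; the loadExperiment function and /experiment-config endpoint are hypothetical.

```typescript
const FLICKER_TIMEOUT_MS = 1000; // assumption: give up after 1s and show the original page

// Hide the page before first paint so users never see the pre-experiment state.
document.documentElement.style.visibility = "hidden";

function reveal(): void {
  document.documentElement.style.visibility = "visible";
}

// Fail-safe: reveal the page even if the experiment never loads.
const timer = window.setTimeout(reveal, FLICKER_TIMEOUT_MS);

// Stand-in for whatever the client-side tool does: fetch the variant config
// and patch the DOM. Purely illustrative.
async function loadExperiment(): Promise<void> {
  const res = await fetch("/experiment-config"); // hypothetical endpoint
  const config = await res.json();
  const headline = document.querySelector("h1");
  if (headline && config.headline) headline.textContent = config.headline;
}

loadExperiment()
  .catch(() => { /* on failure, fall back to the default experience */ })
  .finally(() => {
    window.clearTimeout(timer);
    reveal();
  });
```

Note the trade-off: this pattern removes the visible flash, but it does so by delaying first paint, which is exactly the loading-time cost discussed in the SEO question below.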

What kind of SEO impact is there from either form of experimentation?

SEO implications are a common concern raised by our clients, particularly for web or mobile site tests. There are notable differences in how client-side and server-side testing affect SEO.

For client-side experiments, the good news is that changes made to the JavaScript on the page typically are not indexed by Google during the experimentation phase. That means the experiment itself is unlikely to appear in or influence SERP rankings. The content within the experiment will only become fully indexed if it’s rolled out in full across the site experience.

That being said, because client-side testing often causes the “flicker effect,” it commonly slows page or app loading times. Site speed is a critical factor in how Google ranks sites, so client-side testing can impact loading times and, by extension, your SEO.

In contrast, server-side testing has no comparable impact on site loading times. However, since the experiments are implemented at the server level, they are far more likely to be indexed by Google. That means your experiment could appear in SERPs and indirectly influence SEO.