Why brands are failing to optimise and experiment with conviction

Many Australian brands without dedicated digital experimentation teams, or an external agency to offer support with A/B testing, are missing opportunities to capitalise on insights, says James Spittal. With data-driven decision-making, businesses can convert more leads, increase average order value and reduce cart abandonment. 

Falling short when it comes to testing and optimisation is no longer an option for companies interested in rapid growth and gaining market share. Those who fail to listen to and act on customer data, relying instead on gut instinct, will ultimately fail. Two standout examples are Kodak, which filed for bankruptcy in 2012 but has since made a resurgence, and Borders, which went into liquidation in 2011. A non-customer-centric approach meant that both companies struggled to innovate at the speed of their competitors, because they failed to recognise what current and prospective customers demanded.

The missed revenue potential can be quantified, too. According to Forrester, companies that harness the power of optimisation grow on average 30% more annually and are expected to earn $1.8 trillion by 2021.

Successful optimisation and digital experimentation requires practitioners with deep knowledge, a unique skill set and years of practice. The failure of many companies to capitalise on essential data, for example, is due to a number of hurdles they face early in their optimisation journeys – hurdles which require deep expertise to overcome.

Most tech and digital product teams have years of experience in UX, front-end development, web analytics, SEO and SEM. They are often investing in, and deploying, optimisation tools; however, they generally lack the skills needed to run high-velocity, reliable digital experiments.

When you don’t have the right people behind the wheel, things can go wrong. In the case of experimentation and optimisation, this usually takes the form of a lack of persistence; a failure to understand the volume and cadence of experiments that should be run; or a failure to recognise when bringing in an external agency is the best way forward.

Persistence

Some enterprises I have come across run only one or two experiments per month, which is a cardinal sin: at that volume, the results will be negligible.

Much like building muscle at the gym, optimisation requires constant work – if we only go to the gym once a month, we’ll never see changes. In the same way, running experiments continuously is essential for company growth and avoiding stagnation.

The mathematics behind high-velocity testing is clear. According to the well-known A/B testing and personalisation vendor Optimizely, companies that run one to 20 experiments per month increase revenue by 1-4%, while companies that run at least 21 experiments per month are most likely to drive revenue increases of 14% or more. Furthermore, according to Optimizely and VWO, the average ‘win rate’ for A/B tests is between 7% and 10% – so, typically, for every ten tests a company runs, only one will be a statistically significant winner. Companies that complete only one or two tests a month will see only a few winners a year and will lack the solid data needed to meet increasing demand, satisfy managers and prove ROI to shareholders.
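To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a flat 10% per-test win rate (the upper end of the 7-10% range cited above); the test velocities are illustrative, not drawn from any vendor's data:

```python
# Expected statistically significant "winners" from an A/B testing programme,
# assuming a flat 10% per-test win rate (upper end of the 7-10% range above).
WIN_RATE = 0.10

def expected_winners_per_year(tests_per_month: int, win_rate: float = WIN_RATE) -> float:
    """Expected number of winning tests over a 12-month programme."""
    return tests_per_month * 12 * win_rate

def chance_of_no_winner_in_month(tests_per_month: int, win_rate: float = WIN_RATE) -> float:
    """Probability that an entire month of testing produces no winner at all."""
    return (1 - win_rate) ** tests_per_month

# A low-velocity team (2 tests/month) expects only ~2.4 winners per year,
# and in any given month has an ~81% chance of finding no winner at all.
# A high-velocity team (21 tests/month) expects ~25 winners per year.
print(expected_winners_per_year(2))      # ~2.4
print(chance_of_no_winner_in_month(2))   # ~0.81
print(expected_winners_per_year(21))     # ~25.2
```

The gap between ~2 and ~25 winners a year is the compounding advantage that high-velocity programmes enjoy.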

In the words of Amazon’s Jeff Bezos, “our [Amazon’s] success is a function of how many experiments we do per year, per month, per week, per day.”

The right people

Companies either need high velocity executional partners and dedicated resources for test development, or strategic partners to train digital teams to have a deep understanding of audiences, segmentation and results.

Without highly skilled internal specialists, it isn’t feasible to experiment successfully; this is where an agency comes in. Optimisation agencies have the dedicated resources and expertise essential for valuable analysis and execution.

It’s important for brands and optimisation agencies to work together towards common goals. At Web Marketing ROI, for example, we like to get our hands dirty and work directly with teams to execute their digital optimisation programme successfully. This includes lots of face time and working regularly in client offices. 

We see our role as being equal parts execution and enablement; the latter is essential if internal teams are to interpret data and make educated, quick decisions based on what’s in front of them.

The right strategy

Optimisation strategy isn’t one-size-fits-all. Good agencies and brands will adjust the strategy to suit the technology, resources and stage of the business.

Those businesses at the beginning of the optimisation journey, with no dedicated conversion rate optimisation (CRO) team or testing technology in place, will benefit from an agency running a testing tool evaluation process to determine the best platform for optimum results. 

Companies with testing technology in place, but no dedicated CRO team making the most of it, can utilise a dedicated agency to help develop, implement and interpret tests and personalisation campaigns. 

Finally, those who have a dedicated CRO team and technology in place, but aren’t getting the results they want, typically need help with dedicated resources across front-end development, UX, UI, analytics and data science delivered on-site. 

Getting it right

Without dedicated and deeply skilled experts, companies are missing out on the growth opportunities that come with successful experimentation and optimisation. 

Whether it’s building an internal team, hiring an external agency or both, recognising the skills and person-power needed to run experiments successfully is by far the most important piece of the optimisation puzzle.

It’s not the tools that count, it’s how you use them – and the businesses that get this right will win the optimisation race. 

James Spittal is CEO at Web Marketing ROI.
