Lesson 3:

How to Apply Conversion Design

With an understanding of what a conversion designer does, let’s look at how the work actually happens.

Effective conversion design requires careful research, planning, execution, and testing.

Research

To begin, you need to understand your audience inside-out. You need to know what drives them, what problems keep them up at night, and what language they use to describe their problems.

Planning

With your research in hand, you’re able to create a plan.

First, we make sure each project has a measurable set of goals and objectives. These need to be more than “Grow the business.” They need to be more like, “Increase sales by 10% month over month.”

That’s a goal.

To achieve this goal, we set objectives. For the example above, we would ask: what contributes to increasing sales? Is it more traffic? Is it the types of content we publish? Or is it how we follow up when someone abandons their cart?

When we identify the factors contributing to the goal, we then turn them into objectives we can measure.

  • “Get more traffic” becomes “Increase traffic by 30%”
  • “Publish better content” becomes “Publish content that drives 25% more leads”
  • “Follow up when someone abandons their cart” becomes “Create an email autoresponder series to automate post-abandonment follow-up” (sometimes an objective can be a “yes/no” measurement – did you create something or not?)

Here’s a question I ask to see if we’ve properly designed our objectives:

Can we say, definitively yes or no, with data, whether this objective is complete?

If you can’t definitively answer with a “yes” or “no,” then you need to tweak your objective.

For example, say an objective is “get more traffic.” That’s a fine aim, but it isn’t precise. How much is “more”? Is a single extra visit enough? And over what time period? You can’t definitively answer “yes” or “no” as to whether it was achieved, at least not in a meaningful way.

So revising it to “increase traffic by 30%” makes more sense. You can ask, “Did we increase traffic by 30%?” and answer definitively with a yes or no.

Execution

When it’s time to actually build what we’ve planned, we need to make sure measurement is built into it.

From setting up goals in Google Analytics to applying Hotjar user tracking, there are a ton of ways to go about measurement.

Finding the right measurement tool depends on your goals, objectives, and technology stack. Contact your friendly neighborhood conversion design studio to chat about your specific scenario.

Testing

Since we set up everything to be measured, we can implement tests. We can compare landing page versions, email autoresponder sequences, and even individual pieces of content to see which one works best.

That’s the magic of measurement! By thinking in terms of measurable data and performance, we’re able to improve. Remember Drucker’s quote? “What gets measured gets improved.”

There’s a clear before/after state when you’re measuring conversions (and what contributes to conversions).

Like gathering any good data, you need to make sure your testing is not only well-designed, but yields statistically significant results.

Quick Aside:

In case you never took statistics, here’s what I mean by “statistical significance.” From Investopedia:

Statistical significance refers to the claim that a result from data generated by testing or experimentation is not likely to occur randomly or by chance but is instead likely to be attributable to a specific cause. Having statistical significance is important for academic disciplines or practitioners that rely heavily on analyzing data and research…

Want to see this in action? Flip a coin.

Say you flip a coin four times.

  • First flip: heads
  • Second flip: tails
  • Third flip: heads
  • Fourth flip: heads

With just these four data points, you might conclude that a coin lands on heads 75% of the time – three out of every four flips.

But as you and I know, coin flips are 50/50.

If you were to flip a coin 100 times, you’d see the true probability win out: roughly half the flips (50%) would be heads, and the other half tails.

Thus, when running your own tests, you need to make sure your sample size – the total number of data points you’re using – is enough to yield statistically significant results.
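You can see the sample-size effect for yourself with a few lines of Python. This is a minimal sketch of the coin-flip thought experiment above (the function name and flip counts are my own, purely for illustration):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

def heads_ratio(n_flips):
    """Flip a fair coin n_flips times and return the fraction of heads."""
    heads = sum(1 for _ in range(n_flips) if random.random() < 0.5)
    return heads / n_flips

# Small samples can land far from 50%; large samples settle near it.
for n in (4, 100, 10_000):
    print(f"{n:>6} flips -> {heads_ratio(n):.1%} heads")
```

With only four flips, the observed ratio can easily be 75% or 25%; by 10,000 flips it sits very close to 50%. That’s the intuition behind needing a big enough sample before trusting a test result.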

Back to testing. Testing is the key to Conversion Rate Optimization. It’s the way to improve how well your designs perform and ultimately achieve your goals.
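To make the significance idea concrete for a conversion test, here’s one common way to compare two landing-page variants: a two-proportion z-test. The visitor and conversion counts below are made up for illustration, and the function is my own sketch, not part of any analytics tool:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: variant A converts 200 of 5,000 visitors (4.0%),
# variant B converts 260 of 5,000 (5.2%).
z = two_proportion_z(200, 5000, 260, 5000)
print(f"z = {z:.2f}")  # |z| > 1.96 is roughly significant at the 95% level
```

If the z-score falls below the threshold, the honest conclusion is “not enough evidence yet” – keep the test running and gather more data rather than declaring a winner.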