As product teams develop an onboarding strategy, they should take the time to develop a well-chosen activation metric that measures how effective the onboarding really is. Onboarding serves to “activate” the user: it prepares them to use the product effectively and shows them the core value as quickly as possible. Onboarding is the first time a user interacts with a product, and that first impression affects long-term retention.

What makes a “good” activation metric

When I was at Dropbox, we used long-term retention as the quantitative signal that a user was getting value out of our product. Thus, the goal for our onboarding was for that user to achieve long-term retention. However, long-term retention is a terrible metric to use for onboarding experimentation: it takes far too long to determine whether an experiment is a success. For example, if long-term retention is defined as one year for the product, the team would have to run the onboarding experiment for a little longer than a year to determine if it changed the target metric.

Instead, teams should develop an activation metric that correlates with long-term retention but reduces the time it takes to measure an experiment. This should be a quantitative metric that reflects whether or not a user captured the core value of the product, which in turn would organically lead to them being retained as a user in the long term.

A great example activation metric was Facebook’s “7 friends in 10 days.” Facebook’s activation metric satisfied both the qualitative and quantitative requirements for a successful onboarding: it described a user achieving core value (staying in touch with their social network), it was time-boxed to a snappy 10 days, and it correlated with existing long-term retention and engagement data.
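To make the shape of such a metric concrete, here is a minimal sketch of how a “7 friends in 10 days” check might be computed. The data, user names, and event structure are all hypothetical; real event pipelines would look quite different.

```python
from datetime import datetime, timedelta

# Hypothetical event data: signup time and friend-add timestamps per user.
signups = {
    "alice": datetime(2024, 1, 1),
    "bob": datetime(2024, 1, 2),
}
friend_adds = {
    "alice": [signups["alice"] + timedelta(days=d) for d in [0, 1, 1, 2, 3, 4, 5, 20]],
    "bob": [signups["bob"] + timedelta(days=d) for d in [0, 5, 30]],
}

def activated(user, n_friends=7, window_days=10):
    """True if the user added at least n_friends within window_days of signup."""
    cutoff = signups[user] + timedelta(days=window_days)
    in_window = [t for t in friend_adds[user] if t <= cutoff]
    return len(in_window) >= n_friends

print(activated("alice"))  # alice reaches 7 friends inside the 10-day window
print(activated("bob"))    # bob does not
```

Note that the metric is time-boxed: the friend added on day 20 does not count toward alice’s activation, even though it counts toward her lifetime friend total.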

To create an activation metric, look both to qualitative and quantitative research.

Qualitatively

It is easiest to create an activation metric when the core value of an app is obvious. If a product’s core value is helping a person create to-do lists, the activation action is likely creating a sizable to-do list. But what, for example, is GitHub’s core value? At a high level, its core value is to help engineers build and maintain code. But during onboarding, it really depends on the type of user. For IT decision makers, the core value might be security and compliance. For a college student, it might be the ability to look up how others have coded a particular solution. The more complex a product, the more potential core values there are across its user base.

Aligning on this early should be a high priority for the team. If the team doesn’t already have significant user research to draw on, it should start there to understand the core value, looking at existing users, target markets, and the company’s mission to make the decision.

Quantitatively

Quantitatively, which actions in the product do highly retentive users take in their first week? With the qualitative research in hand, look for metrics that score well in a confusion matrix. That is: a good action will not only describe highly retentive users, but also clearly separate highly retentive users from non-highly-retentive users. For example, 100% of highly retentive users will complete the action ‘create account’ in their first week. However, 100% of non-retentive users will also complete this action in their first week. So, ‘user creates account in their first week’ would not do well as an activation metric (and likely would not come up as a core value in the qualitative user research).
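The confusion-matrix framing above can be sketched in a few lines. The per-user flags here are hypothetical; in practice they would come from event logs joined against a retention cohort.

```python
# Hypothetical per-user flags: (did_action_in_week_1, retained_long_term)
users = [
    (True, True), (True, True), (True, False), (False, False),
    (True, True), (False, False), (True, False), (False, True),
]

def confusion(pairs):
    """Count the four confusion-matrix cells for an action vs. retention."""
    tp = sum(a and r for a, r in pairs)          # did action, retained
    fp = sum(a and not r for a, r in pairs)      # did action, churned
    fn = sum((not a) and r for a, r in pairs)    # skipped action, retained
    tn = sum((not a) and (not r) for a, r in pairs)  # skipped action, churned
    return tp, fp, fn, tn

tp, fp, fn, tn = confusion(users)
print(tp, fp, fn, tn)
```

An action like ‘create account’ would put nearly everyone in the `tp` and `fp` cells, leaving `fn` and `tn` empty: it describes retained users but fails to separate them from churned ones.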

This is really the tip of the iceberg, as there are many more quantitative questions to answer. Can we compare actions’ precision and recall scores? Does the action have a causal relationship with retention, or is it just a strong correlation? Are there two actions in concert that describe activation, not just one? Is a week the correct length of time, or is it 15 days? Is this metric moveable, and is this metric breakable?
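The precision-and-recall comparison mentioned above might look like the following sketch, ranking hypothetical candidate actions. Here precision asks “of users who did the action, how many were retained?” and recall asks “of retained users, how many did the action?”

```python
def precision_recall(pairs):
    """Score one candidate action, given (did_action, retained) per user."""
    tp = sum(a and r for a, r in pairs)
    fp = sum(a and not r for a, r in pairs)
    fn = sum((not a) and r for a, r in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical candidates, each with (did_action, retained) per user.
candidates = {
    "created_account": [(True, True), (True, True), (True, False), (True, False)],
    "added_7_friends": [(True, True), (True, True), (False, False), (False, False)],
}
for name, pairs in candidates.items():
    p, r = precision_recall(pairs)
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
```

In this toy data, ‘created_account’ has perfect recall but poor precision (everyone creates an account), while ‘added_7_friends’ scores well on both, which is the separation a good activation metric needs.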

Two more things to look out for

Facebook’s metric was not only a good choice quantitatively and qualitatively; it also succeeded because product teams could move the metric by improving the product, and long-term user retention stayed correlated with the activation metric. If the correlation does not hold, the metric is considered ‘broken’. When creating an activation metric, it’s difficult to guarantee it will stay correlated with long-term retention as teams work at moving it, so it is very important to monitor that the chosen activation metric and long-term retention do not become uncorrelated (or negatively correlated!).
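That monitoring can be as simple as tracking the correlation between cohort-level activation rates and later retention rates. The cohort numbers below are invented; the Pearson formula itself is standard.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly signup cohorts: activation rate vs. eventual retention rate.
activation_rate = [0.30, 0.34, 0.38, 0.45, 0.50]
retention_rate = [0.20, 0.22, 0.25, 0.29, 0.33]

r = pearson(activation_rate, retention_rate)
print(round(r, 3))
```

A correlation that drifts toward zero (or goes negative) across cohorts is the signal that the activation metric has broken and no longer predicts retention.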

Lastly, not all activation metrics are as simple as Facebook’s was. For example, three different personas may achieve three very different core values from a product. In this case, an activation metric might combine three actions rather than one, or require repeating an action no fewer than X times.