
Product Analytics/Glossary


This page lists terms related to product analytics, which is "the process of analyzing how users engage with a product or service."[1] It documents the terms' definitions, and in some cases their usage and best practices. The aim is for Product Analytics, Data Products, and other teams in Product and Technology to have a shared, clear, and well-understood vocabulary when operating in this domain.

Terms


Baseline

Noun
Measurement of a metric that is then used as a point of reference/comparison: future measurements of the metric are compared against it to look for changes relative to the baseline.

For example, when you have an initial physical as a new patient, the doctor has you do a lab panel to establish baselines so that they can notice any changes over time.

Experiment

Noun
Test of a hypothesis, usually (but not always) scientific, providing trustworthy and generalizable data. An experiment deliberately imposes an intervention on subjects with the intention of observing what response/outcome that intervention leads to, unlike an observational study.

Experiment orchestration

Noun
Code/software responsible for enrolling users into randomized experiments and assigning those users to groups specified in the experiment design.

"To set up an A/B test" involves:

  • instrumentation, either by writing new instrument(s) or reusing existing one(s)
  • orchestration, which works in one of two ways (see the sketch after this list):
    • if the experiment is centrally coordinated by the Metrics Platform (or another coordinating solution), the team sets up the feature flags and configures the experiment, and the orchestration is handled by that solution
    • if the Metrics Platform is not used, the team running the experiment handles the orchestration with its own custom solution
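
A minimal sketch of the kind of assignment logic orchestration code performs, assuming a hypothetical assign_group helper and deterministic hashing of user IDs (illustrative only, not how the Metrics Platform implements it):

  import hashlib

  def assign_group(user_id: str, experiment: str, groups: list[str]) -> str:
      # Hash the user ID together with the experiment name so that each user
      # gets a stable group assignment that is independent across experiments.
      digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
      bucket = int(digest, 16) % len(groups)
      return groups[bucket]

  # Example: a 50/50 split between a control group and a treatment group
  group = assign_group("user-12345", "sticky-header-test", ["control", "treatment"])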

Guardrail metric

Noun
Metric that we monitor in addition to the Primary Metric(s) and any Secondary Metric(s) to make sure that the intervention we are testing does not have a negative impact in ways we care about – that is, to make sure we are not going off the rails.

These are assurances that the intervention does not have unintended negative side effects on the feature or more generally.

Examples: Time to First Byte, Time to First Contentful Paint

Hypothesis

Noun
King et al. (2017) defined this as "A testable prediction of what you think will happen to your users if you make a change to their experience,"[2] and recommended the following framework for constructing a hypothesis:

For [user group(s)], if [change] then [effect] because [rationale], which will impact [metric of interest].
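
A hypothetical example of the framework filled in (illustrative only): For newly registered editors, if we show an interactive editing tutorial then more of them will complete their first edit because the tutorial lowers the initial learning curve, which will impact the first-edit completion rate.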

Instrument

Noun
An instrument is a piece of code/software that collects data, usually in the form of events or logs. A feature's instrumentation may encompass multiple standalone instruments – a performance tracking instrument, a clickthrough tracking instrument, and a dwell time instrument, for example – or may consist of a single instrument that is responsible for collecting a variety of performance and usage data.
Verb
"To instrument a feature" means adding instrumentation to a feature so that certain moments (e.g. feature loading, user interacting with buttons, data being transmitted) are logged in some way – usually coupled with some relevant-to-the-moment information – that allows us to measure metrics which give us insights into the behavior of the feature and how users engage with it.

This is different from a survey, which is typically the data collection instrument used in qualitative research.
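
A minimal sketch of a clickthrough-tracking instrument, assuming a hypothetical log_event function and a made-up stream name (an illustration, not the actual Metrics Platform or EventLogging API):

  import time

  def log_event(stream: str, event: dict) -> None:
      # Hypothetical submission function; a real instrument would send the
      # event to an event intake service rather than printing it.
      print(stream, {"dt": time.time(), **event})

  def on_button_click(button_name: str, session_id: str) -> None:
      # Log the moment a user clicks a button, coupled with
      # relevant-to-the-moment information.
      log_event("example.feature_interaction", {
          "action": "click",
          "element": button_name,
          "session_id": session_id,
      })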

Instrumentation

Noun
The part of a feature, product, or service that is responsible for collecting data about that feature, product, or service. Instrumentation can refer to one instrument or a set of multiple instruments.

Usage:

  • "Nothing ships without instrumentation."
  • "Instrumentation is a low priority for us."
  • "We are working on instrumentation for this feature." (Alternatively: "We are instrumenting this feature.")
  • "Who wants to take care of the instrumentation?"

Leading indicator

Noun
Metric that responds early to an intervention and is highly sensitive. It can predict future performance: if you are seeing positive results in a leading indicator, you are likely to see positive results in the primary metric.

Examples: clickthrough rates, newly registered user counts

Primary metric

Noun
The main outcome you are trying to impact through your intervention. This is what you are primarily using for evaluating your hypothesis and for deciding whether to deploy the intervention more widely, iterate and test again, or abandon the idea entirely.

In the past we have sometimes called these key metrics, success metrics, and key success metrics.

Best practice: choose a single primary metric of interest; everything else should be a secondary metric or a guardrail. The choice of metric should (1) be supported by a theory of change for the product and user behavior, and (2) be sensitive to the planned intervention and respond to it in a timely way.

If you have multiple primary metrics (no more than 5), you should consider the potential tradeoffs and try to make them explicit. What if the experiment has a meaningfully positive impact on one primary metric but a meaningfully negative impact on another, rather than positively impacting both?

Note: In GrowthBook these are called Goal Metrics.[3]

Product health metric

Noun
Key performance indicator (KPI) for measuring and monitoring performance and success/impact of a product. Provides quantitative insights into how users are interacting with and responding to a product and its features.

Rolling metric

Noun
Metric that is computed over a time window that moves (e.g. the last 7 or 30 days).

Best practice: rolling metrics are best used for monitoring the state of things, not for experimentation.
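
A minimal sketch of computing a rolling metric, assuming a toy series of daily edit counts and using pandas (the numbers are made up for illustration):

  import pandas as pd

  # Toy daily edit counts indexed by date
  daily_edits = pd.Series(
      [120, 95, 130, 110, 105, 98, 140, 125],
      index=pd.date_range("2024-01-01", periods=8, freq="D"),
  )

  # Rolling metric: total edits over a 7-day window that moves forward one day
  # at a time; each value summarizes the window ending on that date.
  rolling_7d_edits = daily_edits.rolling("7D").sum()

Because consecutive windows overlap, consecutive values of a rolling metric are highly correlated, which is one reason they suit monitoring better than experiment analysis.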

Secondary metric

Noun
Metric used to learn about additional impacts of the intervention in an experiment, but which is not a primary target of the intervention. Secondary metrics reveal side effects (both positive and negative) of trying to improve the Primary Metric with the intervention.

Trailing (lagging) indicator

Noun
Metric that takes a long time to respond to an intervention; by the time you have measurements, it is too late to change anything.

Examples: user retention rates


References


Rodrigues, J. (2021). Product analytics: Applied data science techniques for actionable consumer insights. Addison-Wesley.

Kohavi, R., Tang, D., & Xu, Y. (2020). Trustworthy online controlled experiments: A practical guide to A/B testing. Cambridge University Press.

  1. https://www.atlassian.com/agile/product-management/product-analytics
  2. King, R., Churchill, E. F., & Tan, C. (2017). Designing with data: Improving the user experience with A/B testing (First edition). O’Reilly Media, Inc.
  3. https://docs.growthbook.io/using/experimenting#metric-selection