Using A/B Testing to Improve Usage

You have a new feature, or an existing one. Your gut feeling is that something is wrong, or that changing something in your product could improve the feature's adoption, conversion rate, or retention, or reduce page errors.

Step 1: Identify a Hypothesis to Test

Testing a hypothesis can be useful for any stage of the customer lifecycle! Improve acquisition, adoption or retention by leveraging A/B variations.

Determine which usage event you would like to improve. This is the event the A/B variation will be associated with.

Variations can test the color of a button, the location of the button, the wording on the button, the time at which the button is available to a user, and more.
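If your experimentation tool does not assign variants for you, assignment can be as simple as randomly bucketing each user once and persisting the choice. Below is a minimal browser-side sketch; the getVariant helper, storage key, and variant names are hypothetical illustrations, not Heap functionality:

```js
// Hypothetical helper: bucket each visitor into a variant once and persist
// it, so they see the same experience on every visit.
function getVariant(experimentName, variants) {
  const storageKey = 'experiment_' + experimentName;
  let variant = localStorage.getItem(storageKey);
  if (!variant) {
    // Uniform random assignment across the supplied variants
    variant = variants[Math.floor(Math.random() * variants.length)];
    localStorage.setItem(storageKey, variant);
  }
  return variant;
}

const buttonColor = getVariant('signup-button-color', ['green', 'blue']);
```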

Note

If you have multiple hypotheses, we recommend running one experiment at a time. Experiments should typically run for a minimum of 3 months to collect meaningful data. Running a single experiment at a time allows you to properly analyze your hypothesis.

Step 2: Add Test Variations to Heap via Properties

Each A/B variation is captured as a property associated with the event you are experimenting with. There are 3 ways experiment data can be added to Heap: via a direct integration, Snapshots, or Heap's addEventProperties API.
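With the addEventProperties approach, for example, you attach the variant as an event property as soon as it is known, so that subsequent events carry it. heap.addEventProperties is part of Heap's JavaScript API; the property name and value below are placeholders for your own experiment:

```js
// Attach the user's variant to all subsequent events in this session so
// that reports can group by it. The property name and value are example
// placeholders for your own experiment.
heap.addEventProperties({
  'Experiment: Signup Button Color': 'green'  // or 'blue', per assignment
});
```

When the experiment ends, the property can be removed again (Heap also provides heap.removeEventProperty for this) so it does not linger on unrelated data.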

As an example, a property can be built out from an A/B variation brought in via the Google Optimize integration.

Note

We strongly recommend QA’ing your newly added property variations before moving forward with your experiment.

To ensure your variant properties are correctly firing with their dedicated event, use Heap's Live view to QA.

Step 3: Analyze Your Variations

Once you have the experiment variations attached to the event you wish to test, you can begin to analyze how the experiment affects your users' behavior.

Query #1 (Baseline Report): What is the conversion rate of your event pre-experiment?

Module: Funnel
– Add the appropriate events, including a "Pageview" event for the page where your experiment is, or will be, located, and a "Conversion" event that ultimately indicates whether someone successfully interacts with your experiment.
– Date Range: Select "Custom Date Range" to analyze a period before the experiment was live.

What does this tell you?
To properly understand whether your experiment has had any impact, you need this baseline report to compare subsequent reports against.
Query #2: How does the experiment affect your usage goal?

Module: Funnel. Use the Suggested Report "How does my experiment affect conversion rate", or build it yourself in the Funnel analysis module:
– Add the appropriate events, including a "Pageview" event for the page where your experiment is located, and a "Conversion" event that indicates whether someone has successfully interacted with your experiment.
– Click "Add Group By" to include the experiment property variations you created in Step 2.
– Date Range: Select "Custom Date Range" to analyze the period in which the experiment was live.

What does this tell you?
This report shows the conversion rate for each variation, so you can see which variation, if any, has had the greatest impact on user behavior. If conversion has improved over your baseline, you can continue analyzing the variation before implementing changes. If conversion has not improved, you will want to create a new hypothesis to test.
Query #3: Compare Usage Counts

Module: Graph
– Count of your usage event.
– Click "Add Group By" to include the experiment property variations you created in Step 2.

What does this tell you?
Directly compare usage counts of your goal event outside of a funnel to understand overall engagement.
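Heap reports the per-variation conversion rates and counts; deciding whether a difference is real rather than chance is up to you. Below is a minimal sketch of a two-proportion z-test you could run on counts exported from these reports; the function and the example numbers are illustrative, not a Heap feature:

```js
// Two-proportion z-test: is variant B's conversion rate significantly
// different from variant A's? All counts below are made-up examples.
function zTest(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pPooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se; // |z| > 1.96 is roughly significant at the 95% level
}

// Example: 120/2000 baseline conversions vs. 165/2000 for the variant
console.log(zTest(120, 2000, 165, 2000).toFixed(2)); // ≈ 2.77, significant
```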

Note on Segmenting Users

Compare relevant segments to see if your experiment has had any impact on a particular subset of users.

Step 4: Interpret Your Results and Take Action

Once you have proven (or disproven) your hypothesis, you can use the insights you’ve found to take action and make appropriate changes.

For example, you might use this data as leverage to ask for more feature-specific resources or budget. Alternatively, you might roll out a similar experiment to determine if any other features could benefit from the same treatment.

Disproving your hypothesis is also useful: it tells you that your base assumption was not true, so you can iterate, form a new hypothesis, and run the steps above again.

Conclusion

Experiments are a great tool for understanding how your users best interact with your product. Used correctly, this data lets you implement product improvements that make your users' lives easier while increasing acquisition, adoption, and retention.

Proving or disproving your hypothesis does not mean the work stops there! Keep iterating and running experiments to build a reliable product and a delightful customer experience. Even heavily used features could do with an upgrade from time to time!

Last updated February 16, 2021.
