Heap Connect lets you directly access your Heap data in BigQuery. You can run ad-hoc analyses, connect to BI tools such as Tableau, or join the raw Heap data with your own internal data sources.
Best of all, we automatically keep the SQL data up-to-date and optimize its performance for you. Define an event within the Heap interface, and in just a few hours, you’ll be able to query it retroactively in a clean SQL format.
To start accessing Heap Connect data through BigQuery, you'll need an existing Google Cloud project. After some initial setup of your project, all that's left is to add our Heap service account as a BigQuery user and share your Project ID with us. All of these steps are detailed below.
Before starting the Heap Connect BigQuery connection process, you’ll need to:
- Have a Google Cloud Platform (GCP) project. If you don't already have one, see GCP's documentation on creating a project.
- Enable billing in the GCP project. If you haven't already done so, follow GCP's billing setup instructions.
- Enable the BigQuery API. If you haven't already done so, you can enable it from the GCP console.
- Know the region you want to use (see Supported Regions).
- Decide on a name for your dataset (optional, default is project_environment).
These prerequisites are also outlined in GCP’s quick-start guide.
Next, proceed as follows:
1. Authorize Heap access to BigQuery
Within the GCP dashboard for your selected project, visit the IAM & admin settings and click Add. In the view that appears, add firstname.lastname@example.org as a BigQuery User and save the new permission.
We prefer to be added as a BigQuery User per the steps above. At minimum, the Heap account must be assigned the dataEditor role and also have the bigquery.jobs.create permission. See BigQuery's access control docs to learn more about the different roles in BigQuery, and see this StackOverflow response for steps to grant individual permissions via a custom IAM role for Heap.
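If you'd rather scope access to a single dataset instead of a project-level role, BigQuery's SQL DCL can grant dataEditor on a dataset once it exists. This is a sketch only: the dataset name below is a placeholder, and note that bigquery.jobs.create is a project-level permission that must still be granted through IAM.

```sql
-- Grant the Heap account dataEditor on one dataset.
-- `my_project.heap_dataset` is a placeholder; bigquery.jobs.create
-- cannot be granted here and still requires a project-level IAM role.
GRANT `roles/bigquery.dataEditor`
ON SCHEMA `my_project.heap_dataset`
TO "user:firstname.lastname@example.org";
```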
2. Provide Heap Your Project Details
Once the GCP project is configured, you’ll need to reach out to your Customer Success Manager or email@example.com with the following information:
- Your app ID, which can be found under Accounts > Manage > Projects (click on the project and scroll down to the Environments section).
- Your Project ID, which you can find in the Project info section of your GCP project dashboard (make sure you're in the correct project).
- The dataset name, if you don't want the default. The default is project_environment; for example, Main_Production.
- Your region: we support us, europe-west2, and eu.
That’s it! We will follow up once the initial connection has been made. Please don’t hesitate to contact your Customer Success Manager or firstname.lastname@example.org with any questions.
You can learn about how the data will be structured upon sync by viewing our docs on data syncing.
BigQuery Data Schema
The data sync will include two data sets:
- <data set name – default to project_environment> – Includes views and raw tables. Views de-duplicate data but do not apply user migrations.
- <data set name>_migrated – Includes views that apply user migrations.
For data accuracy, we recommend querying the views in the second data set, because these have identity resolution applied. If you want tighter controls over identity resolution (e.g. apply your own identity resolution), you can query the views in the first data set.
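For example, a typical analysis query would read from the migrated views. The following is a sketch; the project and dataset names below are placeholders for your own, and the identity filter is an illustrative assumption.

```sql
-- Count identity-resolved users from the migrated dataset.
-- `my_project.main_production_migrated` is a placeholder name.
SELECT count(*) AS identified_users
FROM `my_project.main_production_migrated.users`
WHERE identity IS NOT NULL;
```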
Each of the views (except for all_events) is backed by a “raw” table with the name <view_name>_raw. This means that every environment will have both a users view and a users_raw table, for example. The views perform deduplication, as the underlying raw tables may have duplicate data introduced during the sync process.
Additionally, the users view filters out users that have been merged into another user as a result of an identify call. For that reason, we recommend querying only against the deduplicated views.
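As an illustration, you can compare a view with its raw backing table to see the effect of deduplication. This is a sketch; the project and dataset names are placeholders, while users and users_raw follow the naming pattern above.

```sql
-- The deduplicated view should never return more rows than its raw table.
SELECT
  (SELECT count(*) FROM `my_project.main_production.users`)     AS view_rows,
  (SELECT count(*) FROM `my_project.main_production.users_raw`) AS raw_rows;
```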
Starting Mar 17, 2021, we will partition newly synced tables using the time column. Partitioning tables by time results in faster and cheaper query execution when the time column is used as a filter.
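For example, filtering on time lets BigQuery prune partitions so only the matching days are scanned. This is a sketch; the project, dataset, and table names below are placeholders.

```sql
-- Scans only the last 7 days of partitions, reducing bytes billed.
SELECT count(*) AS recent_sessions
FROM `my_project.main_production.sessions`
WHERE time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY);
```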
To modify tables synced before Mar 17, 2021 to partition by time, you will have to go through a three-step process:
- Create a partitioned copy of the table:

```sql
-- Replace dataset_name and table_name with your own dataset and table.
CREATE TABLE `dataset_name.table_name_tmp`
PARTITION BY DATE(time)
AS SELECT * FROM `dataset_name.table_name`;
```
- Drop the original table:

```sql
DROP TABLE `dataset_name.table_name`;
```
- Copy the temporary table back to the original table's name. You can use the BigQuery console's table copy feature, or see the SQL sketch below.
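If you prefer to stay in SQL, the copy-back step can be approximated with another CREATE TABLE ... AS SELECT. This is a sketch using the same placeholder names as above; note that, unlike a console copy, CTAS does not inherit partitioning, so it must be re-declared.

```sql
-- Recreate the original table name from the partitioned copy.
-- CTAS does not inherit partitioning, so re-declare it here.
CREATE TABLE `dataset_name.table_name`
PARTITION BY DATE(time)
AS SELECT * FROM `dataset_name.table_name_tmp`;

-- Clean up the temporary table once the copy is verified.
DROP TABLE `dataset_name.table_name_tmp`;
```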
To learn more about partitioned tables in BigQuery, read the BigQuery docs here: https://cloud.google.com/bigquery/docs/partitioned-tables.