# Export metrics to S3

You can export contact-level data—such as attributes and reporting metric events—to your Amazon S3 bucket for further analysis, archival, or integration with your data warehouse. This article walks you through the setup and export process.

## Prerequisites

Before starting, ensure you have:

* Access to your Amazon S3 bucket
* Access credentials (AWS Access Key ID and Secret Access Key) with appropriate permissions
* Workspace Admin permissions in Bird
* Data Flows enabled in Bird

## Set up your S3 bucket and IAM credentials

**Step 1: Create an S3 Bucket (If Not Already Created)**

1. Go to the Amazon S3 console.
2. Click **Create bucket**.
3. Choose a name (e.g., `customer-data-exports`) and region.
4. Leave other settings as default or configure as needed (e.g., enable encryption, versioning).
5. Click **Create bucket**.

**Step 2: Set Up IAM Policy and User**

1. Go to the **IAM** console.
2. Create a new IAM user with programmatic access.
3. Attach a policy that grants write access to the S3 bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}
```

4. Save the **Access Key ID** and **Secret Access Key** for use in the next step.
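If you manage policies in code, the policy document above can be generated for any bucket name. This is a minimal sketch; the function name is illustrative, and the JSON it emits matches the policy shown above.

```python
import json

def make_export_policy(bucket: str) -> str:
    """Build the S3 write-access policy from the step above for a given bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    # Both ARNs are needed: the bucket itself (for ListBucket)
                    # and its objects (for PutObject/GetObject).
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(make_export_policy("customer-data-exports"))
```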

## Set up data flow in Bird

* Navigate to **Settings > AI & Automation > Data flows**.
* Create a new data flow with:
  * Source: Semantic events
  * Destination: S3 exporter
* Click **Create and continue**.

<figure><img src="https://3861485111-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FU9kiDiTGVD8kkbnKKyEn%2Fuploads%2FJBfSwUnrB2Mz4ZZ4YQr2%2FScreenshot%202025-12-02%20at%207.37.38%E2%80%AFPM.png?alt=media&#x26;token=ec2ab9d9-f65b-48a4-8c0a-6ebda30b1cad" alt="" width="375"><figcaption></figcaption></figure>

* In the **Configuration tab > Source**, select the following:
  * Contact attributes
  * Event name
  * Start date and time in YYYY-MM-DDTHH:MM:SS.000Z format, e.g. 2025-02-02T05:00:00.000Z
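If you are generating the start time programmatically, you can verify a value matches the expected format before pasting it in. A quick sketch using Python's standard library (the function name is illustrative):

```python
from datetime import datetime, timezone

def parse_start_time(value: str) -> datetime:
    """Parse a start time like 2025-02-02T05:00:00.000Z; raises ValueError otherwise."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

start = parse_start_time("2025-02-02T05:00:00.000Z")
print(start.isoformat())  # 2025-02-02T05:00:00+00:00
```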

<figure><img src="https://3861485111-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FU9kiDiTGVD8kkbnKKyEn%2Fuploads%2FLfMELebyu79UhsUfz7wl%2FScreenshot%202025-12-02%20at%207.39.12%E2%80%AFPM.png?alt=media&#x26;token=180b4f44-87e9-4549-acd4-742bea438c11" alt=""><figcaption></figcaption></figure>

* In the **Destination** configuration, provide the following:
  * S3 Bucket Name
  * AWS Region
  * Data Format: Parquet, CSV, or JSON
  * Prefix: The subdirectory your data will be written to. For example, `bird/contacts/data`
  * File Name: Files are automatically named with a timestamp prefix and a sequence number suffix. Optionally, provide a custom name that will appear between these elements.
  * Max Records Per File: The maximum number of records each file should contain.
  * Date Format: Select the timestamp format to use in filenames.
  * Access Key ID: The access key ID used to connect to your S3 bucket.
  * Access Key Secret: The access key secret used to connect to your S3 bucket.
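To illustrate how the prefix, timestamp, custom name, and sequence number combine into an object key, here is a sketch. The exact separators and extension Bird uses may differ; this only shows how the configured pieces fit together.

```python
from datetime import datetime, timezone

def object_key(prefix: str, custom_name: str, seq: int,
               when: datetime, date_format: str = "%Y%m%d%H%M%S") -> str:
    """Illustrative key layout: <prefix>/<timestamp>_<custom-name>_<seq>.<ext>.
    Separators and the .csv extension here are assumptions, not Bird's spec."""
    timestamp = when.strftime(date_format)
    parts = [timestamp] + ([custom_name] if custom_name else []) + [f"{seq:04d}"]
    return f"{prefix.rstrip('/')}/{'_'.join(parts)}.csv"

key = object_key("bird/contacts/data", "contacts", 1,
                 datetime(2025, 2, 2, 5, 0, tzinfo=timezone.utc))
print(key)  # bird/contacts/data/20250202050000_contacts_0001.csv
```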

<figure><img src="https://3861485111-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FU9kiDiTGVD8kkbnKKyEn%2Fuploads%2FWqAmNiLuShvPhe4E0kpZ%2FScreenshot%202025-12-02%20at%207.40.57%E2%80%AFPM.png?alt=media&#x26;token=c45b2e0e-975e-407e-97c1-07cdabc666e2" alt=""><figcaption></figcaption></figure>

* Click **Save configuration** to save the data flow.
* Toggle the data flow to **Enabled**.

<figure><img src="https://3861485111-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FU9kiDiTGVD8kkbnKKyEn%2Fuploads%2FBhdQLQIi9MrCui5RevBB%2FScreenshot%202025-12-02%20at%207.42.03%E2%80%AFPM.png?alt=media&#x26;token=191b9888-80bc-43c1-a28b-0155f15dea8c" alt=""><figcaption></figcaption></figure>

* Once the data flow is enabled, click the three-dot menu and select **Run now** to run it manually.

<figure><img src="https://3861485111-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FU9kiDiTGVD8kkbnKKyEn%2Fuploads%2F9ZrVqSMwNkPoy7eF6H0Z%2FScreenshot%202025-12-02%20at%207.44.55%E2%80%AFPM.png?alt=media&#x26;token=080d9985-8df2-469c-9a96-fb562358c71b" alt=""><figcaption></figcaption></figure>

* Go to the **Schedule** tab to set when the flow should run automatically, either as an interval (e.g. @every24h) or in CRON format (e.g. 0 0 0 \* \* \*)
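The CRON example above uses six fields (seconds, minutes, hours, day of month, month, weekday), so `0 0 0 * * *` fires daily at midnight. A toy matcher, for intuition only (it handles just `*` and plain integers, not ranges or steps):

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a 6-field cron expression (sec min hour dom month dow) against a
    timestamp. Simplified sketch: only '*' and single integers are supported."""
    fields = expr.split()
    values = [when.second, when.minute, when.hour,
              when.day, when.month, when.isoweekday() % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

midnight = datetime(2025, 2, 2, 0, 0, 0)
print(cron_matches("0 0 0 * * *", midnight))  # True
```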

<figure><img src="https://3861485111-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FU9kiDiTGVD8kkbnKKyEn%2Fuploads%2F7R2aIrgiZnGtwR3aEzGt%2FScreenshot%202025-12-02%20at%207.43.00%E2%80%AFPM.png?alt=media&#x26;token=467a53d5-01bd-4116-83a7-479374a923a1" alt=""><figcaption></figcaption></figure>

### Define your destination file columns

To define specific columns and column names for your destination file:

* Go to **Configuration tab > Transformation > Open transformation editor**
* Go to **Transformation tab > Attribute Mapping**
* Click the + icon
* In the left-hand dropdown, select the field from your source data
* On the right, select an existing field or type to create a new one; this becomes the column name in your destination file
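Conceptually, each mapping row renames a source field to a destination column. A minimal sketch of that behavior (field names here are made up for illustration):

```python
def apply_mapping(record: dict, mapping: dict) -> dict:
    """Rename source fields to destination column names per the mapping;
    source fields with no mapping row are left out of the output."""
    return {dest: record[src] for src, dest in mapping.items() if src in record}

# Hypothetical source fields and destination column names:
mapping = {"contact.email": "email_address", "event.name": "event"}
record = {"contact.email": "ada@example.com",
          "event.name": "email.delivered",
          "internal.id": "abc123"}
print(apply_mapping(record, mapping))
# {'email_address': 'ada@example.com', 'event': 'email.delivered'}
```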

<figure><img src="https://3861485111-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FU9kiDiTGVD8kkbnKKyEn%2Fuploads%2FbZGFl0GwNTpDabFwbMi8%2FScreenshot%202025-12-02%20at%207.46.07%E2%80%AFPM.png?alt=media&#x26;token=fb16d3b6-593c-4545-9f09-dd2006eae8e5" alt=""><figcaption></figcaption></figure>
